Topic | News_Title | Citation | Paper_URL | News_URL | Paper_Body | News_Body | DOI |
---|---|---|---|---|---|---|---|
Chemistry | Chiral drug-like building blocks by nickel-catalyzed enantioselective olefin cross-coupling | Chen-Fei Liu et al, Synthesis of tri- and tetrasubstituted stereocentres by nickel-catalysed enantioselective olefin cross-couplings, Nature Catalysis (2022). DOI: 10.1038/s41929-022-00854-8 Journal information: Nature Catalysis | https://dx.doi.org/10.1038/s41929-022-00854-8 | https://phys.org/news/2022-10-chiral-drug-like-blocks-nickel-catalyzed-enantioselective.html | Abstract Asymmetric transition-metal catalysis has had a far-reaching impact on chemical synthesis. However, non-precious metal-catalysed strategies that provide direct entry to compounds with enantioenriched trisubstituted and fully substituted stereogenic centres are scarce. Here we show that a sterically encumbered chiral N -heterocyclic carbene-Ni(0) catalyst, in conjunction with an organotriflate and a metal alkoxide as hydride donor, promotes 1,2-hydroarylation and hydroalkenylation of diverse alkenes and 1,3-dienes. Replacing the metal alkoxide with an organometallic reagent allows installation of two different carbogenic motifs. These multicomponent reactions proceed through regio- and enantioselective carbonickelation followed by carbon–nickel bond transformation, providing a streamlined pathway towards enantioenriched carbon- or heteroatom-substituted tertiary or quaternary stereogenic centres. Through selective carbofunctionalizations, enantiodivergent access to opposite enantiomers may be achieved using the same catalyst antipode. The method enables practical access to complex bioactive molecules and other medicinally valuable but synthetically challenging building blocks, such as those that contain deuterated methyl groups. Main Enantioenriched tri- and tetrasubstituted stereogenic centres, particularly those linked to aromatic or olefinic moieties, feature widely among natural products and synthetic drug candidates 1 , 2 (Fig. 1a ). The spatial arrangement of atoms around such centres often dictates the overall shape of a molecule and influences its biological function or toxicity. Because opposite enantiomers may possess vastly different activities 3 , 4 , the ability to exert exquisite control over the absolute configuration to access enantiomerically pure compounds through asymmetric catalysis 5 , 6 is of the utmost importance in pharmaceutical 7 and agrochemical 8 research. In addition, catalytic enantioselective synthesis helps to avoid the necessity for cumbersome resolutions of racemic mixtures. Fig. 1: The importance of developing enantioselective olefin cross-coupling reactions using non-precious metal catalysis. a , Tri- and tetrasubstituted stereocentres are commonly embedded within natural products and drugs. b , Established base-metal-catalysed carbofunctionalization strategies for introducing stereocentres and their associated challenges. c , Our report on regio- and enantioselective olefin cross-coupling reactions catalysed by a bulky chiral NHC–nickel(0) complex. R, G, functional group; Ar, aryl group; L, ligand; M, metal; X, halide or pseudohalide; Tf, triflyl. Full size image Transition-metal-catalysed enantioselective multicomponent coupling of carbogenic functionalities with prochiral carbon–carbon π -frameworks is an attractive avenue for generating molecular complexity and stereogenicity 9 . 
In particular, the process of Markovnikov-selective 1,2-hydrocarbofunctionalization 10 , 11 , in which adjacent C−C and C−H bonds can be installed with precision, offers the opportunity to control the stereochemical outcome when a chiral organometallic catalytic species is involved to promote enantiofacial discrimination of the substrate. To this end, seminal advances in Pd-catalysed reductive Heck-type reactions that forge C( sp 2 )−C( sp 3 ) bonds have been reported 12 . In recent years, the surge in demand for non-precious metal catalysis as a more sustainable alternative 13 has led to a number of related developments in enantioselective olefin coupling transformations 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 (Fig. 1b ). A significant proportion of these olefin hydrocarbofunctionalizations proceed through sequential hydrometallation/carbofunctionalization regimes via a chiral metal–hydride intermediate I derived from a hydride donor such as an alcohol 18 , 19 , 20 , 21 , 22 or hydrosilane 23 , 24 , 25 , 26 , 27 . Enantioselectivity is typically dictated during the regioselective hydrometallation event from I to II (electronic stabilization by the α-substituent R), before reaction with a carbon electrophile or organometallic nucleophile to give the final product. On the other hand, transformations that incorporate two carbogenic moieties via III and IV have been documented, although the scope is restricted to styrene homo-diarylation using aryl halides to deliver enantioenriched triarylethanes 28 . Despite these efforts, established base-metal-catalysed regimes are often plagued by critical problems that preclude their widespread adoption across chemical research (Fig. 1b , grey inset). For example, the scope of most reported methods has centred on constructing enantioenriched tertiary stereocentres 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 . Analogous reactions that lead to crowded tetrasubstituted carbon centres 31 remain underdeveloped. This is not surprising, because access to these products is often thwarted by the increased steric congestion within the putative tertiary alkylmetal intermediate 32 obtained after metal–hydride addition to 1,1-disubstituted alkenes, which inadvertently raises the energy barriers for hydrometallation and the subsequent C−C bond formation. Added to this complication is the inability of a chiral catalytic entity to appreciably differentiate the enantiotopic faces of a 1,1-disubstituted olefin (versus a monosubstituted variant) for high selectivity 33 . Although the enantioselective generation of all-carbon quaternary stereocentres is a primal objective in organic synthesis 7 , 31 , 34 , 35 , 36 , 37 , methodologies that achieve this by direct carbofunctionalization of readily available olefin feedstocks, without relying on intramolecularity or activating/directing auxiliaries using non-precious metal catalysis, are unexpectedly scarce. Furthermore, most of these hydroarylation and hydroalkenylation protocols employed either styrenes 18 , 19 , 20 , 21 , 23 , 24 , 25 , 28 , N -acyl enamines 26 , 27 or aryl-1,3-dienes 18 , 20 , 22 as substrates, which give rise to a limited range of enantioenriched products. In addition, poor regio- and enantioselectivities were observed with alkyl-substituted 1,3-dienes 22 , which are prone to allylic rearrangement and thus show difficulty in undergoing selective carbo-additions 38 . 
The corresponding transformations with the less reactive enol ethers, N -vinylheteroarenes or vinylsilanes affording heteroatom-substituted stereocentres that are prevalent in pharmaceuticals 39 , 40 are also exceedingly rare. For dicarbofunctionalization, efficient addition of two distinct carbon units is challenging owing to undesired side reactions arising from competitive coupling between the reagents 41 , as well as adventitious β-H elimination 42 of the alkylmetal intermediate analogous to IV . In light of the aforementioned challenges, a unified enantioselective olefin cross-coupling manifold that enables access to various categories of enantioenriched tri- and tetrasubstituted stereocentres would be especially desirable and complementary to existing methods 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , including those involving radical relay processes that are limited to stabilized alkyl radical additions 29 , 30 . In this Article, we report the successful implementation of a versatile nickel-catalysed strategy that is broadly applicable to hydrocarbofunctionalization and dicarbofunctionalization (Fig. 1c ). Through mechanistic studies, these three-component reactions are found to operate via a chiral carbonickel complex undergoing carbonickelation with inverse site selectivity to give a β-branched alkylnickel species (versus α-branched II and IV from Fig. 1b ). Results Chiral NHC–nickel complexes as effective catalysts The lack of generality in past catalytic protocols could be attributed to the over-reliance on chiral bidentate N - or P -based ligands. To circumvent this issue, we opted to devise a reaction system that takes advantage of an earth-abundant organonickel(0) catalyst bearing a sizeable enantioenriched monodentate N -heterocyclic carbene (NHC) ligand 43 (Fig. 2a ). Fig. 2: Reaction design and mechanistic studies. a , Proposed mechanistic rationale for regio- and enantioselective olefin cross-coupling using hydrocarbofunctionalization as a model. b , Optimized conditions and X-ray crystal structure of the product. c , Increasing alkoxide loading suppresses the undesired branched-selective Heck reaction that leads to 9 . Hydroarylation of 10 leads to 12 with no trace of ring cleavage. R, G, functional group; Ar, aryl group; L, ligand; M, metal; X, halide or pseudohalide; cod, 1,5-cyclooctadiene; Tf, triflyl; Bn, benzyl. Full size image Based on preliminary findings in non-enantioselective Ni(I)-catalysed processes 44 , we postulated that a chiral aryl(alkenyl)nickel species V (from initial reaction of the NHC–Ni(0) with an sp 2 -hybridized carbon electrophile) would preferentially undergo regio- and enantioselective carbonickelation across the π -bond to give VI (instead of VIII ), to minimize unfavourable steric interactions between the ligand and the olefinic substitutents 45 . We reasoned that the strongly σ -donating and sterically shielded NHC 46 could sufficiently stabilize V and provide the appropriate chiral environment to induce efficient and stereoselective π -complexation/migratory insertion. This is followed by alkoxide ligand substitution and hydride transfer (from β-H elimination of Ni-alkoxide VIII ) 44 to give IX , before the ensuing reductive elimination furnishes the desired hydrocarbofunctionalization adduct. Mechanistically, the overall carbonickelation/hydride transfer sequence differs from previous methods (Fig. 
1b ) in two fundamental aspects: a stereo-determining C−C bond-formation precedes C−H bond formation, and the regiochemical outcome from V to VI is largely governed by steric effects (versus electronic effects from I to II ). Because the stereochemical outcome is already set in the carbonickelation step, functionalizing the C−Ni bond in VI with a carbon-based reagent (instead of hydride) will give rise to dicarbofunctionalization adducts (Fig. 4c provides more details), which is not attainable by catalytic systems that commence with hydrometallation (Fig. 1b ). Our conceived approach would offer straightforward access to enantioenriched building blocks, many of which were inaccessible by previous methodologies, for the concise assembly of various biologically active compounds (Fig. 1a ). Furthermore, enantioselective hydro- or deuterofunctionalization of an olefin or its gem -dideutero-substituted derivative could deliver molecules containing differentially D-labelled methyl units. These are prized for their role in medicine and agrochemicals 47 , 48 (for example, 1 – 3 , 5 ) arising from the beneficial effect of C−D bonds 49 , but remain a challenge to synthesize. To test the hypothesis proposed in Fig. 2a , we examined conditions to promote the three-component union of styrene 6 with a carbon electrophile and a hydride source. After an extensive survey (Supplementary Tables 1 – 7 ), we identified Ni(cod) 2 (10 mol%) in combination with a diastereo- and enantioenriched C 2 -symmetric imidazolium salt L1 (10 mol%) and NaO t Bu (20 mol%) as the best catalytic system to mediate enantioselective cross-coupling of 6 , aryl triflate 7 and sodium isopropoxide, furnishing the desired adduct 8 in 95% yield, >98:2 regioisomeric ratio (r.r.) and 98:2 enantiomeric ratio (e.r.) under mild conditions within 16 h (Fig. 2b ). The absolute configuration of 8 was unambiguously ascertained by X-ray crystallography analysis. Poor conversion to the desired product was detected when Ni(cod) 2 was replaced with other Ni(II) precatalysts containing different counterions (Supplementary Table 2 ), whereas changing the solvent (Supplementary Table 4 ) or lowering the temperature (Supplementary Table 5 ) led to diminished efficiency. In line with our proposal, the electron-donating NHC derived from deprotonation of L1 was uniquely crucial for the transformation as other non-NHC ligands failed to promote cross-coupling (Supplementary Table 1 ). Furthermore, increasing the size or modifying the electronics of the NHC ligand resulted in lower yields and selectivities (Supplementary Table 1 ). Switching triflate 7 to other halide/sulfonate substrates also had a detrimental effect on reaction efficiency (Supplementary Table 6 ). Under our conditions, simple metal alkoxides 44 are more effective hydride donors than alcohols 18 , 19 , 20 , 21 , 22 or hydrosilanes 23 , 24 , 25 , 26 , 27 , and sodium isopropoxide was found to be optimal (Supplementary Table 7 ). A proposed model to account for the observed stereochemical outcome 50 is shown in Supplementary Fig. 5 . Control experiments revealed that the presence of excess sodium isopropoxide was key to suppressing the undesired branched-selective Heck reaction 42 , 44 that gave rise to alkene 9 (Fig. 2c ). A decrement in sodium isopropoxide loading (<5 equiv.) led to a corresponding drop in reaction efficiency and increase in formation of 9 44 , without affecting site and enantioselectivity. 
This may be rationalized by the fate of the alkylnickel intermediate IV generated after regio- and enantioselective carbonickelation (Fig. 2a ), which is susceptible to competitive β-H elimination when R 2 = H (that is, monosubstituted olefins) to afford 9 . Sufficient amounts of the exogenous alkoxide are therefore needed to enable efficient conversion of IV to the hydroarylation product 8 via VI and VII . These results further support that C−C bond formation occurs in a Markovnikov-selective fashion before hydride transfer, as 8 and 9 were both obtained as single regioisomers. Cross-coupling of racemic trans -vinylcyclopropane 10 and 11 under standard conditions gave the expected adduct 12 as a diastereomeric mixture in 71% yield and 95:5 r.r. with minimal cyclopropane ring-opening, intimating that the transformation is unlikely to proceed through initial hydronickelation via α-branched alkylnickel II (Fig. 1b ) or long-lived radical species 51 . Analysis of the standard reaction mixture by electron paramagnetic resonance spectroscopy indicated the absence of paramagnetic species generated in the system (Supplementary Fig. 4 ), suggesting that the present Ni-catalysed olefin cross-coupling probably follows a Ni(0)/Ni(II) mechanism 50 . These results are in contrast with previously reported reactions involving a dimeric Ni(I) complex, which largely proceed through a Ni(I)/Ni(III) pathway 44 , 45 . Scope of enantioselective olefin cross-coupling We next assessed the generality of our established conditions for enantioselective cross-coupling with a range of functionalized alkenes and 1,3-dienes. In contrast to existing catalytic systems that are incompatible with sterically hindered 1,1-disubstituted olefins, we observed that such substrates underwent efficient hydroarylation to secure 13 to 29 bearing fully substituted stereocentres with excellent control of regio- and enantioselectivities (Fig. 3a ). These include products containing silyl ethers ( 14 , 17 ), a Lewis-basic amine ( 15 ), heterocyclic substituents ( 18 , 23 , 24 ), electronically varied arenes ( 19 – 22 ), as well as an olefin ( 26) . Access to 25 shows that exocyclic olefins are amenable substrates, although enantioselectivity was slightly lower. The present protocol could be readily extended to the preparation of enantioenriched drug-like scaffolds with deuterated methyl (CH 2 D, CHD 2 , CD 3 )-substituted quaternary carbon centres ( 27 – 29 ) by judicious hydroarylation or deuteroarylation of an alkene or its gem -dideutero-substituted variant with sodium isopropoxide or its deuterated analogue. Fig. 3: Exploration of olefin scope. a , Reactions to access enantioenriched tetrasubstituted stereocentres. b , Reactions to access enantioenriched trisubstituted stereocentres. Unless otherwise stated, all products were obtained in >98:2 r.r. a Ni(cod) 2 / L1 (20 mol%) was used. b Ni(cod) 2 / L1 (15 mol%) was used. c NaOC(D)Me 2 was used. d NaOC(H)MePh was used. e Ni(cod) 2 / L2 (10 mol%) was used. f Ni(cod) 2 / L2 (15 mol%) was used and 94:6 r.r. R, functional group; Ar, aryl group; cod, 1,5-cyclooctadiene; Tf, triflyl; Bn, benzyl; TBS, tert -butyldimethylsilyl. Full size image Monosubstituted alkenes and 1,3-dienes were also competent substrates for cross-coupling, furnishing a vast array of adducts containing trisubstituted stereocentres in good yields and enantioselectivities (Fig. 3b ). 
Styrenes with aryl ( 30 – 38 ) and heteroaryl ( 39 – 42 ) groups of diverse electron density participated in hydroarylation, including one application that involves the synthesis of 43 , an N1L protein antagonist 52 . Formation of oestrone-derived 38 proves that reducible functional groups such as ketones are tolerated. The stereochemistry of the chiral catalyst, rather than existing stereocentres on the substrate, predominantly determines the configuration of 38 . To expand the scope beyond carbon centres, we investigated whether the method can be extended to heteroatom-functionalized olefins. As illustrated by 44 to 50 , a variety of synthetically useful products carrying nitrogen-, oxygen- or silicon-functionalized stereocentres were successfully generated, highlighting the robustness of these catalytic conditions. Use of the buttressed NHC 53 derived from imidazolium salt L2 (Supplementary Table 1 ) enhanced the level of enantiocontrol for the reactions leading to 46 – 49 . Contrary to previous catalytic systems, hydroarylation with both aryl-1,3-dienes ( 51 – 63 ) and alkyl-1,3-dienes ( 64 – 66 ) proceeded efficiently on the terminal C=C bond with good regiocontrol and enantioselectivity. This facilitates access to allylic arene moieties such as anticancer drug candidate 4 54 and its tri-deuterated variant 5 , which is of potential interest to the pharmaceutical industry 47 , 48 , 49 . Reaction of a cyclic internal olefin was feasible to give 66 , albeit with slightly diminished enantiopurity. However, cross-coupling of acyclic internal alkenes was inefficient (<10% product), presumably because of increased steric demand (Supplementary Fig. 1 ). The reaction scope of the carbon electrophile was also explored. As shown in Fig. 4a , we found that electron-rich and electron-poor aryl triflates with ortho -, meta - or para -substituents underwent site- and enantioselective hydroarylation to give adducts with quaternary ( 68 – 71 ) or tertiary carbon centres ( 72 – 82 ). These include triflates that bear sterically hindered arenes ( 78 , 79 , 81 ), Lewis-basic pyridines ( 71 , 82 ), as well as C( sp 2 )−Br ( 69 ) and C( sp 2 )−Cl ( 74 ) bonds, which can be further transformed. The latter examples highlight the chemoselectivity of the olefin cross-coupling towards triflates, which allows for orthogonal functionalization 55 . Hydroalkenylation of styrenes using the corresponding alkenyl triflate reagents generated a series of enantioenriched unsaturated building blocks (Fig. 4b ). Through this reaction manifold, carbocyclic olefins containing six- to eight-membered rings ( 83 – 89 , 92 – 94 ) as well as acyclic olefin motifs ( 90 , 91 ) can be reliably incorporated with good efficiency and selectivities. Transformations with a 1,3-diene, a vinyl carbazole and a vinyl silane successfully furnished chiral compounds containing a skipped diene ( 92 ) that is relevant in natural product synthesis 56 , an allylic carbazole ( 93 ) and an allylic silane ( 94 ) that may be used for stereocontrolled C−C bond formation 57 , 58 , respectively. Fig. 4: Exploration of electrophile and nucleophile scope. a , Reactions with aryl and heteroaryl triflate. b , Reactions with alkenyl triflates. c , Extension to 1,2-dicarbofunctionalization processes using organometallic reagents. Through strategic hydrofunctionalization and difunctionalization, synthesis of both enantiomers of 107 can be achieved. Unless otherwise stated, all products were obtained in >98:2 r.r. a 15 mol% Ni(cod) 2 / L1 was used. 
b Organozinc chloride (3–5 equiv.) was used. c 95:5 r.r. R, G, functional group; Ar, aryl group; cod, 1,5-cyclooctadiene; Tf, triflyl; Bn, benzyl. Full size image Seeking to challenge the limits of the present system, we probed the feasibility of installing two different carbogenic groups by replacing sodium isopropoxide with an organometallic nucleophile, which is expected to afford the requisite alkylnickel intermediate X following transmetallation of VI in Fig. 2a . In contrast to a Ni-catalysed homo-diarylation procedure that only affords trisubstituted carbon centres 28 (Fig. 1b ) as well as radical processes that require substrates with activated alkyl units 29 , 30 , a wide assortment of sp 2 - and sp 3 -hybridized organomagnesium and organozinc compounds served as effective reagents for cross-coupling to afford 95 – 106 with simultaneous control of regio- and enantioselectivities (Fig. 4c ). Arylation ( 95 ), alkenylation ( 96 ) and alkylation ( 97 ) to deliver sterically congested tetrasubstituted centres could be accomplished. The stereochemical identity of 95 was confirmed by X-ray crystallography, which is consistent with the mechanistic proposal outlined in Fig. 2 . 1,2-Diarylation was similarly enantioselective, regardless of the electronic attributes of the arylmetal nucleophile employed ( 98 – 102 ). To minimize side reactions with the susceptible ester, phenylzinc chloride was used instead of a Grignard reagent to secure 101 . Besides styrenes, aromatic- and aliphatic-1,3-dienes as well as heteroatom-substituted C=C bonds could be efficiently converted to the desired diarylation adducts with good enantiomeric purity ( 103 – 106 ), underscoring the excellent regio- and stereochemical fidelity of these processes. By using appropriate combinations of olefin substrates and reagents, hydrocarbofunctionalization or dicarbofunctionalization can be selectively executed to access both enantiomers of the product with the same chiral catalyst under standard conditions. This was exemplified by the enantiodivergent synthesis of mirror-image stereoisomers 107 and 107′ , a notable advantage without having to prepare the opposite antipode of the catalyst. Conclusions In summary, we have developed a Ni-catalysed multicomponent olefin cross-coupling protocol that provides a general platform for the enantioselective carbofunctionalization of a broad spectrum of substituted alkenes and 1,3-dienes. Through the use of a sizeable chiral NHC–Ni(0) catalyst to trigger selective carbonickelation/functionalization cascades, we have shown that high-value organic entities containing tertiary or quaternary stereogenic centres could be efficiently accessed in high stereochemical purity. We believe this methodology significantly enriches the toolbox of asymmetric catalysis to facilitate countless applications in stereoselective natural product synthesis and drug discovery. Methods General procedure for enantioselective hydroarylation(alkenylation) In a N 2 -filled glove box, an oven-dried 4-ml or 8-ml vial equipped with a stir bar was charged with Ni(cod) 2 (0.1 equiv.), L1 (0.1 equiv.), NaO t Bu (0.2 equiv.) and toluene or cyclohexane (2.0 ml or 4.0 ml). The reaction mixture was allowed to stir at room temperature for 1–3 h. Alkene substrate (1.0 equiv.), aryl(alkenyl) triflate (3.0 equiv.) and NaO i Pr (5.0 equiv.) were then added to the system. The vial was sealed and the reaction mixture was allowed to stir rigorously at 40 °C for 16–24 h. 
After cooling to ambient temperature, the resulting mixture was subjected to gas chromatography analysis to determine the r.r. and then purified by silica gel chromatography. The purified product was subjected to HPLC analysis to determine the e.r. General procedure for enantioselective dicarbofunctionalization In a N 2 -filled glove box, an oven-dried 4-ml vial equipped with a stir bar was charged with Ni(cod) 2 (0.1 equiv.), L1 (0.1 equiv.), NaO t Bu (0.2 equiv.) and toluene (0.5 ml). The reaction mixture was allowed to stir at room temperature for 1 h. Alkene substrate (1.0 equiv.) and aryl triflate (2.0 equiv.) were then added to the system. Subsequently, aryl/alkenyl/alkylmetal reagent (3.0 equiv.) was slowly added to the system. The vial was sealed and the reaction mixture was allowed to stir at 40 °C for 8 h. After cooling to ambient temperature, the crude mixture was quenched by aqueous NH 4 Cl, and the mixture was subjected to gas chromatography analysis to determine the r.r. The mixture was purified by silica gel chromatography, and the purified product was subjected to HPLC analysis to determine the e.r. Data availability All data supporting the findings of this study are available within the Article and its Supplementary Information . Crystallographic data for the structures reported in this Article have been deposited at the Cambridge Crystallographic Data Centre (CCDC), under deposition numbers 2128517 ( 8 ), 2149572 ( 95 ) and 2173668 ( Ni-1 ). Copies of the data can be obtained free of charge via . | NUS chemists have developed an effective method to access enantioenriched drug-like compounds through multicomponent olefin cross-coupling using chiral nickel-based catalysts. Chiral molecules containing enantioenriched tri- and tetrasubstituted stereogenic centers are found among many natural products and drugs. The spatial arrangement of atoms around such centers often dictates the overall shape of a molecule and influences its biological function or toxicity. Thus, the ability to generate enantiomerically pure compounds through asymmetric catalysis is vital in pharmaceutical and agrochemical research. However, related strategies that employ non-precious metal-derived catalyst systems to promote enantioselective synthesis using cheap and abundant olefin starting materials often have limited scope. This constrains their widespread adoption. A research team led by Assistant Professor Koh Ming Joo, from the Department of Chemistry, National University of Singapore has designed a new strategy that leverages widely available nickel catalysts containing hindered N-heterocyclic carbene (NHC) ligands, to merge olefins with an organotriflate and a metal alkoxide as hydride donor. Replacing the metal alkoxide with an organometallic reagent enables installation of two different carbogenic groups. These multicomponent reactions provide a streamlined pathway towards chiral molecules bearing enantioenriched carbon- or heteroatom-substituted tertiary or quaternary stereogenic centers. This is a collaboration with Professor Shi-Liang Shi, from the Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences. The findings were published in Nature Catalysis. Prof. Koh said, "Through selective carbofunctionalizations, we can even access opposite enantiomers of a chiral molecule by using a single chiral catalyst antipode. This is difficult to achieve using alternative systems, demonstrating a unique advantage of our catalytic regime. 
We can now utilize this new protocol as a general platform to produce valuable chiral molecules with high stereochemical purity." "We believe this methodology will significantly enrich the toolbox of asymmetric catalysis to facilitate countless applications in stereoselective natural product synthesis and drug discovery," added Prof. Koh. The research team is developing new chiral NHC-nickel catalysts to promote olefin cross-coupling transformations that can potentially address other unresolved challenges in organic synthesis. | 10.1038/s41929-022-00854-8 |
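As a quick numerical aside to the selectivity values quoted in the entry above (e.g., the 98:2 e.r. measured for adduct 8), the sketch below converts an enantiomeric ratio into the equivalent enantiomeric excess. The helper function is illustrative and not part of the published work.

```python
# Illustrative helper (not from the paper): convert an enantiomeric ratio
# (e.r.) of major:minor into the equivalent enantiomeric excess (ee).

def er_to_ee(major: float, minor: float) -> float:
    """Enantiomeric excess (%) from an e.r. of major:minor."""
    return 100.0 * (major - minor) / (major + minor)

print(er_to_ee(98, 2))   # 96.0 -> a 98:2 e.r. corresponds to 96% ee
print(er_to_ee(50, 50))  # 0.0  -> racemic mixture
```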
Physics | AI-based cytometer detects rare cells in blood using magnetic modulation and deep learning | Yibo Zhang et al. Computational cytometer based on magnetically modulated coherent imaging and deep learning, Light: Science & Applications (2019). DOI: 10.1038/s41377-019-0203-5 This research was led by Dr. Aydogan Ozcan, Chancellor's Professor of Electrical and Computer Engineering at UCLA and an Associate Director of the California NanoSystems Institute (CNSI). The other authors of this work are Dr. Yibo Zhang, Dr. Mengxing Ouyang, Dr. Aniruddha Ray, Tairan Liu, Dr. Janay Kong, Bijie Bai, Dr. Donghyuk Kim, Alexander Guziak, Yi Luo, Alborz Feizi, Katherine Tsai, Zhuoran Duan, Xuewei Liu, Danny Kim, Chloe Cheung, Sener Yalcin, Dr. Hatice Ceylan Koydemir, Dr. Omai Garner and Dr. Dino Di Carlo. This work was supported by the Koç Group, NSF and HHMI. Journal information: Light: Science & Applications | http://dx.doi.org/10.1038/s41377-019-0203-5 | https://phys.org/news/2019-10-ai-based-cytometer-rare-cells-blood.html | Abstract Detecting rare cells within blood has numerous applications in disease diagnostics. Existing rare cell detection techniques are typically hindered by their high cost and low throughput. Here, we present a computational cytometer based on magnetically modulated lensless speckle imaging, which introduces oscillatory motion to the magnetic-bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). In addition to using cell-specific antibodies to magnetically label target cells, detection specificity is further enhanced through a deep-learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. To demonstrate the performance of this technique, we built a high-throughput, compact and cost-effective prototype for detecting MCF7 cancer cells spiked in whole blood samples. Through serial dilution experiments, we quantified the limit of detection (LoD) as 10 cells per millilitre of whole blood, which could be further improved through multiplexing parallel imaging channels within the same instrument. This compact, cost-effective and high-throughput computational cytometer can potentially be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications. Introduction Rare cell detection aims to identify a sufficient number of low-abundance cells within a vast majority of background cells, which typically requires the processing of large volumes of biological sample. The detection and enumeration of these rare cells are vital for disease diagnostics, the evaluation of disease progression and the characterization of immune response 1 , 2 , 3 . For instance, circulating foetal cells present in maternal blood are recognized as a source of foetal genomic DNA, and their isolation is crucial for the implementation of routine prenatal diagnostic testing 4 . As another example, antigen-specific T cells in peripheral blood play a central role in mediating immune response and the formation of immunological memory, which could lead to the prediction of immune protection and diagnosis of immune-related diseases 5 . 
Circulating endothelial cells with a mature phenotype are increased in patients with certain types of cancer and several pathological conditions, indicating their potential as disease markers 6 . Circulating tumour cells (CTCs) are implicated in various stages of cancer, and have therefore been collected to study their role in the metastatic cascade and to predict patient outcomes from both the disease and treatments received 7 . To highlight yet another example, haematopoietic stem and progenitor cells, which reside predominantly in bone marrow with low numbers, also found in peripheral blood, possess the unique capacity for self-renewal and multilineage differentiation, and their trafficking in blood may be connected to disease processes 8 . The specific and sensitive detection of these rare cells in human blood and other bodily fluids is therefore of great interest. However, millions of events need to be acquired to obtain a sufficient number of these low-abundance cells (e.g., typically <1000 target cells per millilitre of blood 9 ). The direct detection of rare cells from whole blood requires the processing of large amounts of patient sample (e.g., up to a few hundred millilitres 10 ), which is both unrealistic and time consuming. To alleviate this issue, highly specific labelling methods are often used before detection for sample purification/enrichment to facilitate rapid detection and processing 5 , 10 . Among these labelling techniques, the use of colloidal magnetic particles as labelling reagents offers benefits in forming stable suspensions, fast reaction kinetics 10 and minimum damage to the target cells, with high viability retained 11 . Motivated by these important needs and the associated challenges, various technologies have been developed and employed for detecting rare cells in blood. Most of these existing detection methods involve three steps: capture, enrichment and detection 12 . The capture and enrichment steps use a number of methods, such as barcoded particles 13 , magnetic beads 14 , micro-machines 15 , microfluidic chips 16 and density gradient centrifugation 12 , 17 . Following the enrichment step, these rare cells can be detected via commonly used techniques, such as immunofluorescence 18 , 19 , electrical impedance 20 and Raman scattering 21 measurements, among others. Notably, commercial products for rare cell detection, such as the CellSearch system 22 , which automates magnetic labelling, isolation, fluorescence labelling and automated counting, are generally high cost, limiting their adoption worldwide 12 . Therefore, cost-effective, reliable and high-throughput rare cell detection techniques are urgently needed to improve the early diagnosis of diseases, including cancer, so that earlier treatments can be carried out, helping us to improve patient outcomes while also reducing healthcare costs 23 , 24 . The recent advances in machine learning and, specifically, deep learning have pushed the frontiers of biomedical imaging and image analysis 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , enabling rapid and accurate pathogen detection 39 , 40 , 41 , 42 and computer-assisted diagnostic methods 43 , 44 , 45 , 46 , 47 . Powered by deep learning, we demonstrate here that speckle imaging using lensless chip-scale microscopy can be employed for the specific and sensitive detection of rare cells in blood with low cost and high throughput. 
This novel cell detection and cytometry technique is based on magnetically modulated lensless speckle imaging, which specifically labels rare cells of interest using magnetic particles attached to surface markers of interest and generates periodic, well-controlled motion of the target cells by alternating the external magnetic field applied to a large sample volume. The holographic diffraction and the resulting speckle patterns of the moving cells are then captured using a compact and cost-effective on-chip lensless imager (Fig. 1), and are computationally analysed by a deep-learning-based algorithm to rapidly detect and accurately identify the rare cells of interest in a high-throughput manner based on their unique spatio-temporal features. Although previous work has employed the idea of using magnetic modulation for enhancing fluorescence detection 48, 49, our work is the first of its kind to combine magnetic modulation, lensless imaging and deep learning to create a unique cytometer that does not require additional labelling (e.g., fluorescence) or custom-designed molecular probes.

Fig. 1: Schematics and photos of the computational cytometer. a A magnetically modulated lensless imaging module (inset) that includes a lensless holographic microscope and two electromagnets driven by two alternating currents with opposite phase. The fluid sample that contains magnetic-bead-conjugated cells of interest is loaded into a capillary tube. The imaging module is mounted to a linear motion stage to scan along the sample tube to record holographic images of each section of the tube. b A laptop computer is used to control the device and acquire data. A function generator and a power supply, together with custom-designed circuitry, are used to provide the power and driving current for the linear motion stage and electromagnets.

As shown in Fig. 1, we built a portable prototype of this computational cytometer for rare cell detection. Our magnetically modulated speckle imaging module includes a lensless in-line holographic microscope 41, 50, 51, 52, 53, 54, 55, 56, 57 and two oppositely positioned electromagnets (Fig. 1a, b inset). The lensless microscope contains a laser diode (650 nm wavelength) to illuminate the sample from ~5–10 cm above, and a complementary metal–oxide–semiconductor (CMOS) image sensor is placed ~1 mm below the sample to acquire a high-frame-rate video that monitors the spatio-temporal evolution of the sample containing the target cells of interest. Because the light-source-to-sample distance is much greater than the sample-to-image-sensor distance, the optical design has unit magnification, and the field of view (FOV) of a single image is equal to the active area of the image sensor (which can be 10–30 mm² using the standard CMOS imagers employed in digital cameras and mobile phones). To increase the screening throughput, target cells are enriched using magnetic separation and loaded inside a capillary tube for imaging (Figs. 1, 2). Magnetic enrichment alone leaves a background of unlabelled cells, bead clusters or weakly labelled cells that are also captured, such that further discrimination of the target cells within this background is needed to accurately identify and count the rare cells. The imaging module is mounted onto a custom-made linear translation stage and is translated along the direction of the sample tube to capture a holographic video of each section of the tube.
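To put numbers on the geometry just described, the short sketch below evaluates the effective magnification and the FOV. The 1.67 µm pixel size and the source/sensor distances come from the text; the 3840 × 2748 pixel format is our assumption for the Basler sensor named in the Methods.

```python
# A minimal sketch of the in-line holography geometry described above.
# Distances and pixel pitch are from the text; the 3840 x 2748 sensor
# format is an assumption for the Basler acA3800-14um named in the Methods.

z1 = 75e-3   # light-source-to-sample distance, ~5-10 cm (here 7.5 cm)
z2 = 1e-3    # sample-to-image-sensor distance, ~1 mm

# Effective fringe magnification of a point-source in-line hologram:
M = (z1 + z2) / z1
print(f"magnification ~ {M:.3f}")    # ~1.013, i.e., effectively unit magnification

# With unit magnification, the FOV equals the sensor's active area:
pixel = 1.67e-6                      # pixel pitch (m)
nx, ny = 3840, 2748                  # assumed sensor format
fov_mm2 = (nx * pixel * 1e3) * (ny * pixel * 1e3)
print(f"FOV ~ {fov_mm2:.1f} mm^2")   # ~29 mm^2, within the stated 10-30 mm^2 range
```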
During the imaging at each section, the electromagnets are supplied with alternating current with a 180° phase difference to exert an alternating pulling force to the magnetic-bead-conjugated cells in the sample, which causes them to oscillate at the same frequency as the driving current. Extension rods made of permalloy were designed and utilized to enhance the magnetic force at the sample location by ~40-fold (see the Methods section and Fig. S 1 ). The holographic diffraction patterns that are cast by the magnetically modulated target cells are captured using the image sensor and transferred to a laptop computer. A computational motion analysis (CMA) algorithm 41 and a densely connected pseudo-3D convolutional neural network structure (P3D CNN) 58 then analyse the holographic image sequence that contains the 3D dynamic information from the oscillating cells, which allows rapid and specific detection of the target cells. Fig. 2: Sample preparation and imaging procedures. The sample preparation time before scanning is ~1 h, with the first 30 min dedicated to passive incubation, which does not require supervision Full size image The current prototype (Fig. 1 ) screens ~0.942 mL of fluid sample, corresponding to ~1.177 mL of whole-blood sample before enrichment, in ~7 min (Fig. 2 ), while costing only ~$750 for the raw materials (excluding the function generator, power supply and laptop computer) and weighing ~2.1 kg. The platform with a single imaging channel can be expanded to parallel imaging channels by mounting several imaging modules onto the same linear stage, as shown in Fig. 1a (semi-translucent illustrations). The performance of our technique was tested by detecting a model rare cell system of spiked MCF7 cancer cells in human blood. We demonstrate that our technique has a limit of detection (LoD) of 10 cells per millilitre of whole blood using only a single-imaging channel. Because the current LoD is mainly limited by the screening volume, we expect that the LoD can be further improved by including additional parallel imaging channels and increasing the sample volume that is screened. Results Characterization of the oscillation of bead–cell conjugates under alternating magnetic force Our detection technique capitalizes on the periodic oscillatory motion of the target cells of interest, with a large number of labelling magnetic particles, to specifically detect them with high throughput. We designed a magnetic actuator to exert periodic and alternating magnetic force on the magnetic particles bound to these cells of interest (Fig. 1 ). To exert sufficient magnetic force on each labelled cell, we designed and machined extension rods that were made with magnetically soft permalloy, which were attached to the electromagnets to enhance the magnetic force at the sample location by ~40-fold with minimal magnetic hysteresis (see the Methods section and Fig. S 1 ). The movement of MCF7 cells conjugated with Dynabeads was recorded by mounting the magnetic actuator and the labelled cells onto a 40 × 0.6NA benchtop microscope (see Fig. 3 ). The sample preparation procedure is depicted in Fig. 2 , where the Dynabead-conjugated cells were suspended in a methyl cellulose solution (a viscous fluid) and were subjected to alternating magnetic fields with a period of 1 s and a square-wave driving current. As shown in Fig. 3a–o and Video S1, due to the high viscosity of the methyl cellulose solution, the labelled cells mainly demonstrated 3D rotational motion. 
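To make the drive scheme concrete before returning to the cell motion it produces, the following minimal numpy sketch (not code from the paper) generates the two anti-phase square-wave coil currents described above: 1 Hz period, toggling between 0 and the ~500 mA high level given in the Methods, so that the two coils pull on the sample alternately.

```python
# A sketch of the anti-phase square-wave drive described above: two
# electromagnet currents at 1 Hz, 180 degrees out of phase, each toggling
# between 0 and ~500 mA so that exactly one coil pulls at any instant.
import numpy as np

f = 1.0                # driving frequency (Hz), i.e., a 1 s period
i_high = 0.5           # high level of the driving current (A)
t = np.linspace(0, 3, 3000, endpoint=False)   # 3 s of drive, 1 kHz sampling

coil_a = i_high * (np.sin(2 * np.pi * f * t) >= 0)   # on during first half-cycle
coil_b = i_high * (np.sin(2 * np.pi * f * t) < 0)    # on during second half-cycle

# Exactly one coil is energized at every sample, so the bead-labelled cells
# are pulled back and forth at the 1 Hz driving frequency.
assert np.all((coil_a > 0) ^ (coil_b > 0))
```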
Typically, the motion of a labelled cell starts at the beginning of a cycle of the magnetic field (e.g., t = 0.5 s), approaching a steady state (e.g., t = 1.0 s) before the magnetic field switches its direction and the cell rotates in the reverse direction (e.g., between t = 1.0 s and t = 1.5 s). The two extreme positions of the rotational motion are demonstrated in Fig. 3p by overlaying the images captured at t = 0.5 s and t = 1.0 s using magenta and green, respectively. Fig. 3: Dynabead-conjugated MCF7 cells demonstrate periodic rotational motion under an alternating magnetic force field. Images were acquired using a 40 × 0.6NA benchtop microscope. a – o Snapshots of three Dynabead-conjugated MCF7 cells at different time points within a period of oscillation (period = 1 s). p Images taken at the two extrema of the oscillation ( t = 0.5 s and t = 1.0 s) were fused together to demonstrate the movement, where the grey regions in the fused image represent the consistency between the two images and the magenta/green colours represent the differences of the two images. Magenta represents the first image ( t = 0.5 s), and green represents the second image ( t = 1.0 s) Full size image Various unbound magnetic beads and bead clusters are also observed within the sample (Fig. 3p reports some examples, marked with text and arrows), which also oscillate at the same frequency as that of the bead-conjugated target cells. If not handled properly, these might form a major cause of false positives. However, the spatio-temporal dynamics of bead-conjugated cells significantly differ from those of unbound beads and bead clusters (see the following subsections and the Methods section). For a given amount of magnetic driving force, the bead-conjugated cells are subjected to more inertia and viscous drag, which is manifested by a slower response to the magnetic field, i.e., a slower rotational motion. In addition, magnetic beads typically form chains when they cluster under an external magnetic field, and these chains exhibit a swinging motion under the alternating magnetic field. This contrasts with the 3D rotational motion, i.e., the “rolling” motion associated with the bead-conjugated cells (see Video S2 for comparison). These intricate spatio-temporal dynamic features, in addition to morphological differences, are utilized by a subsequent classification step (based on a deep neural network) to achieve higher accuracy and eliminate false positive detections, as will be detailed in the following subsections and the Methods section. Cell detection and classification using CMA and deep learning The sample, which contains the periodically oscillating target cells and other types of unwanted background particles, is illuminated with coherent light. The interference pattern recorded by the CMOS image sensor represents an in-line hologram of the target cells, which is partially obscured by the random speckle noise resulting from the background particles, including other unlabelled cells, cell debris and unbound magnetic particles. Recorded at 26.7 frames per second using the CMOS image sensor, these patterns exhibit spatio-temporal variations that are partially due to the controlled cell motion. This phenomenon is exploited for the rapid detection of magnetic-bead-conjugated rare cells from a highly complex and noisy background. Figure 4a–g shows the detailed computational steps for the preliminary screening of cell candidates from a raw holographic image sequence. 
First, a computational drift correction step mitigates the overall drift of the sample between frames. Then, a high-pass filtered back-propagation step using the angular spectrum method 59 calculates the holographic images at different axial distances within the 3D sample. A CMA step analyses the differences among the frames to enhance the 3D contrast for periodically moving objects that oscillate at the driving frequency and employs time averaging to suppress the random speckle noise caused by background particles. This is then followed by a maximum intensity projection and threshold-based detection to locate potential cell candidates. Fig. 4: Computational detection of rare cells. a – c Preliminary screening of the whole FOV to detect candidates for target cells (MCF7). At each scanning position, 120 frames of raw holograms were taken at 26.7 frames per second. Computational drift correction was applied to mitigate the horizontal shift caused by the fluid drift, where the vertical movement caused by the magnetic field was kept unmodified. The lateral position of each MCF7 candidate was located by CMA, maximum intensity projection and threshold-based detection. d – g Zoomed-in preliminary processing for the example region labelled ① in b , c . h – k Classification process for the two cell candidates labelled ① and ② in c . The axial location for each cell candidate was determined by autofocusing. A video was formed for each cell candidate by propagating each frame to the in-focus plane. The classification was performed by a densely connected P3D convolutional neural network, as detailed in the Methods section Full size image The cell candidates that are detected in this preliminary screening step contain a large number of false positives, which mainly result from unbound magnetic beads that form clusters under the external magnetic field. Therefore, we employ another classification step (Fig. 4h–k ) to improve the specificity of our final detection. For this classification step, we choose to use a densely connected P3D CNN structure to classify the holographic videos to exploit the spatial and temporal information encoded in the captured image sequence. The densely connected P3D CNN structure is modified based on a recently proposed CNN structure 58 by adding dense connections 60 . Compared with other machine-learning techniques, the use of a deep neural network for video classification is typically more powerful, and the network can be retrained to classify other types of cells or objects of interest 58 , 61 . An autofocusing step 62 , 63 is applied to each candidate object to create an in-focus amplitude and phase video, which is then classified (as positive/negative) by a densely connected P3D CNN. These classification results are used to generate the final rare cell detection decisions and cell concentration measurements. The CNN was trained and validated with manually labelled video clips generated from ten samples that were used solely for creating the training/validation data sets. This training needs to be performed only once for a given type of cell-bead conjugate (for details, refer to the Methods section). Evaluation of system performance To quantify the LoD of our platform for detecting MCF7 cells in human blood, we spiked cultured MCF7 cells in whole blood at various concentrations and used our technique to detect the spiked MCF7 cells. 
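The back-propagation step in the pipeline above can be illustrated with a minimal angular-spectrum sketch in numpy; this is a simplified stand-in for the paper's implementation, which additionally applies high-pass filtering, drift correction and CMA. The 650 nm wavelength and 1.67 µm pixel pitch are taken from the Methods, while the placeholder hologram and refocus distance are hypothetical.

```python
# A minimal numpy sketch of free-space propagation with the angular spectrum
# method, used here as a stand-in for the paper's back-propagation step.
import numpy as np

def angular_spectrum_propagate(field, z, wavelength=650e-9, dx=1.67e-6):
    """Propagate a complex field by distance z (negative z back-propagates)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)           # shape (ny, nx)
    # Transfer function H = exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (negative argument) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0))),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: back-propagate a recorded hologram (its square root approximates
# the field amplitude) to a plane ~1 mm above the sensor.
hologram = np.random.rand(256, 256)        # placeholder for one raw frame
obj_plane = angular_spectrum_propagate(np.sqrt(hologram), z=-1.0e-3)
```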
Using spiked samples instead of clinical samples provides a well-defined system to characterize and quantify the capabilities of our platform, which is an important step before moving to clinical samples in the future. In each experiment, 4 mL of MCF7-spiked whole human blood at the desired concentration was prepared. Then, the procedure in Fig. 2 was followed to perform magnetic separation and embed the recovered cells in the viscous methyl cellulose medium, resulting in ~3.2 mL of final sample volume. This prepared sample was then loaded into a disposable capillary tube to be screened by our computational cytometer. Because the capillary tube length is designed to be longer than the range of the motion of the linear stage and because the capillary tube was wider than the width of the CMOS sensor, the actual imaged volume per test (within the sample tube) is ~0.942 mL, which corresponds to ~1.177 mL of the blood sample before the enrichment process. MCF7 concentrations of 0 mL −1 (negative control), 10 mL −1 , 100 mL −1 and 1000 mL −1 were tested, where three samples for each concentration were prepared and independently measured. Figure 5 shows the results of the blind testing of our technique using serial dilution experiments. The blue data points correspond to a one-time testing result, where the error bars correspond to the standard deviations of the three detected concentrations at each spiked concentration. Without the detection of any false positives in the negative control samples, our technique was able to consistently detect MCF7 cells from 10 mL −1 samples, measuring a target cell concentration of 1.98 ± 1.06 mL −1 . At this low concentration (10 cells/mL), the detection rate was ~20%. The experimentally measured detection rate dropped to ~5% at a higher concentration of 1000 cells/mL. Fig. 5: Quantification of the LoD of our computational cytometer based on magnetically modulated lensless speckle imaging for the detection of MCF7 cells in whole blood. The axes are a hybrid of logarithmic and linear scales to permit 0 cell/mL to be shown in the same plot. The blue data points represent one-time testing results of a single trained P3D CNN. The error bars represent the respective standard deviation of the three repeated tests at each spiked target cell concentration. The orange data points represent the averaged testing results using five P3D CNNs that were individually trained on a different subset of data. The error bars represent the standard deviation resulting from the detections of the five individual networks; for each trained network, three detected concentrations are averaged at each spiked concentration Full size image Because the training of the deep neural network inherently includes randomness, we further evaluated the repeatability of our network training process. For this, we randomly and equally divided our training data into five subsets, and then we trained five individual networks by assigning one different subset as the validation data set and the combination of the remaining four subsets as the training data set. Each of the five networks was blindly tested to generate the serial dilution results. The mean and standard deviation of the detected concentrations resulting from the five networks are shown in Fig. 5 (orange data points; for each trained network, three detected concentrations are averaged at each spiked concentration). Overall, good consistency between the different network results is observed. 
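For reference, the mapping from raw detection counts to the whole-blood concentrations plotted in Fig. 5 is simple volume scaling, sketched below. The ~1.177 mL whole-blood equivalent per scanned tube is taken from the text; the repeated-test counts in the example are hypothetical.

```python
# A sketch of how detected counts map back to a whole-blood concentration
# in the serial dilution tests. Each scanned tube corresponds to ~1.177 mL
# of whole blood before enrichment (from the text); the example counts
# below are hypothetical, not values from the paper.
import numpy as np

BLOOD_VOL_PER_TEST_ML = 1.177   # whole-blood equivalent of one scanned tube

def detected_concentration(counts_per_test):
    """Mean +/- std of detected cells per mL of whole blood over repeats."""
    conc = np.asarray(counts_per_test, dtype=float) / BLOOD_VOL_PER_TEST_ML
    return conc.mean(), conc.std()

# e.g., three hypothetical repeated tests of one spiked sample:
mean, std = detected_concentration([2, 1, 4])
print(f"{mean:.2f} +/- {std:.2f} cells/mL of whole blood")
```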
The underdetection behaviour of our system is due to a combination of both systematic errors and random factors. A major reason for underdetection is the tuning of the classification network. In the preliminary screening step, because there are typically a large number of false-positive detections and a low number of true-positive detections (since the target cells are quite rare), our classifier must be tuned to have an extremely low false-positive rate (FPR) to have a low LoD. To satisfy this, we applied a widely adopted method for tuning our classifier 64 , where we selected a decision threshold based on the training/validation data set, which leads to a zero FPR (see the Methods section for details). However, an inevitable side effect of reducing the FPR is a reduction in the true positive rate (TPR). Based on the validation results, when a decision threshold of 0.999999 was used, the TPR dropped to 10.5%. This explains a major portion of the reduced detection rate that we observed in the serial dilution tests (Fig. 5 ). Another systematic error that contributes to the underdetection is the imperfect recovery rate of MCF7 cells during the enrichment. We experimentally quantified the recovery rate of MCF7 cells using Dynabeads to be ~85% (Table S1 ). The remainder of the underdetection and fluctuations in the detection rate at different concentrations may be associated with various other factors, e.g., sample handling errors (especially at low cell concentrations), clustering of the target cells, and non-uniform labelling of cells with magnetic beads. In fact, MCF7 cells are known to form clusters and have thus been extensively used for preparing in vitro tumour models 65 , 66 . In an experiment where we spiked MCF7 cells at a concentration of 1.1 × 10 5 /mL (Table S1 ), we observed that ~50% of the MCF7 cells formed clusters after enrichment. However, the amount of clustering is expected to be lower at decreased MCF7 concentrations, which partially explains our reduced detection efficiency at higher cell concentrations. This clustering of cells not only reduces the overall number of target entities but may also exhibit changes in their oscillation patterns and may be misclassified by our classifier. Discussion The presented computational cytometry technique may be applied for the detection of various types of rare cells in blood or other bodily fluids using appropriately selected ligand-coated magnetic beads. There are several advantages of our magnetically modulated speckle imaging technique. The first important advantage is its ability to detect target rare cells without any additional modification, such as labelling with fluorescent or radioactive compounds, unlike the vast majority of the existing techniques. The same magnetic beads that are used for capturing and isolating target cells from whole blood are also used for periodic cell modulation and specific detection within a dense background. False positives are mitigated by identifying the controlled spatio-temporal patterns associated with the labelled target cells through a trained deep neural network. Compared with existing approaches, our technique also has the advantages of a relatively low LoD, rapid detection and low cost, which makes it suitable for the sensitive detection of rare cells in resource-limited settings. 
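The classifier-tuning step quantified above can be sketched as follows (numpy; the validation scores are simulated, not the paper's data): pick the smallest decision threshold at which no validation negative is classified as positive (zero FPR), then report the surviving true-positive rate. The paper reports that its chosen threshold of 0.999999 reduced the validation TPR to 10.5%.

```python
# A sketch of zero-FPR threshold selection on validation data; the scores
# below are simulated stand-ins for the P3D CNN's output probabilities.
import numpy as np

def zero_fpr_threshold(scores, labels):
    """Smallest threshold with zero false positives on validation data."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    thr = scores[~labels].max() + 1e-12   # just above the top negative score
    tpr = np.mean(scores[labels] > thr)   # fraction of true positives retained
    return thr, tpr

# Hypothetical validation scores: many negatives (bead clusters, debris),
# few positives (bead-conjugated target cells).
rng = np.random.default_rng(0)
neg = rng.beta(2, 8, size=1000)
pos = rng.beta(8, 2, size=50)
thr, tpr = zero_fpr_threshold(np.r_[neg, pos],
                              np.r_[[False] * 1000, [True] * 50])
print(f"threshold = {thr:.6f}, TPR at zero FPR = {tpr:.1%}")
```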
For example, fluorescence imaging and Raman microscopy have been widely used to detect rare cells and have been shown to have very low LoDs (e.g., ~1 cell/mL) 12 , 67 , 68 , but they are typically limited by a high system cost and complexity. To address this issue, a low-cost fluorescence system for detecting rare cells was introduced by Balsam et al. 69 , which detects fluorescently labelled cells flowing in a fluidic channel using laser excitation and a low-cost camera for imaging. They demonstrated a LoD comparable to ours (~10 mL −1 ) for SYTO-9-labelled THP-1 monocytes in whole blood. However, the use of fluorescence labelling can suffer from the drawback of photobleaching. As another notable example for sensitive and cost-effective rare cell detection, Issadore et al. proposed using Hall sensors to detect magnetic-bead-labelled target cells in a microfluidic channel and demonstrated a high sensitivity in detecting CTCs 70 . However, their technique requires a relatively long detection time (2.5 h) and a strong expression of biomarkers in target cells. Other rare cell detection technologies, such as chemiluminescence detection based on aptamer-specific cell capture 71 and DNA-oriented shaping of cell features 72 , have also been reported, but their capabilities were demonstrated using only cell mixtures in a buffer solution with limited throughput, i.e., 3 µL 71 or 500 µL 72 cell solution per batch. In our approach, while deep-learning-based classification is instrumental to achieving high detection accuracy, it needs to be retrained on different types of cells, which requires collecting and labelling a large amount of data for each new type of target cell. This is a disadvantage of our approach; however, preparing the training data and manually labelling the target cells is not prohibitively time consuming, and it needs to be performed only once, during the training phase. For example, when we prepared the training/validation data for MCF7 cells, we used 10 experiments to create a manually labelled library containing 17,447 videos of candidate cells (including positives and negatives). The manual labelling process took ~10 h. These procedures only need to be performed once for a given type of target cell. Compared with using fluorescence labelling, which requires additional experimental steps and reagents each time, we believe that this one-time cost of preparing training data for the deep neural network presents advantages. Another limitation of our method is that it can detect only positive cells, which are labelled with magnetic beads; negative cells that are not labelled are not counted. In addition, in this proof-of-concept study, we only demonstrated our detection technique on a single type of target cell. However, a future direction would be to explore the feasibility of multiplexed labelling for different types of target cells. One possibility for multiplexing is to use magnetic particles of different sizes (e.g., varying from ~100 nm to 10 µm), shapes and iron content, where each type of magnetic particle is coated with the corresponding antibody that is specific to a different type of cell. In this approach, different cell-bead conjugates would have distinct dynamics when they are subjected to a varying magnetic force field, which would lead to different patterns of oscillation that can be specifically detected 48 , 49 . The cell-bead conjugates may also exhibit different responses to magnetic modulation when the frequency is varied. 
These spatio-temporal and morphological signatures may be classified by an appropriately designed and trained deep-learning-based classifier. Therefore, any type of rare cell that can be specifically identified/isolated using antibodies or other targeting moieties can potentially be targeted using our presented system. The spatial resolution and the quality of the images captured using our system are degraded by the random speckle noise generated by background objects, which limits our ability to perform further morphological analysis based on reconstructed images. However, because the target objects of interest (i.e., the bead-labelled MCF7 cells) are modulated with a unique spatio-temporal pattern across the frames of each captured video, thereby exposing different perspectives of the cells (as demonstrated in Video S2), a robust distinction of the target cells from the background is achieved using our deep-learning-based video classifier. The entire prototype of our computational cytometer shown in Fig. 1b (excluding the function generator, power supply and laptop computer) has a raw material cost of ~$750. This cost can be significantly reduced under large-volume manufacturing; currently it is mainly attributed to the image sensor and frame grabber (~$550), the permalloy rod (~$70) and the electromagnets (~$40), with the other components being far less expensive. In future versions of this instrument, the power supply and function generator can be replaced with cost-effective integrated circuit chips. For example, the power supply can be replaced with a 20 V power adapter (e.g., TR9KZ900T00-IMR6B, GlobTek, Inc., Northvale, NJ, USA) and a step-down converter (e.g., LTC3630EMSE#PBF, Analog Devices, Norwood, MA, USA) to supply 20 V and 12 V for the electromagnets and the stepper motor, respectively; the function generator can be replaced with an oscillator circuit built from a timer integrated circuit (e.g., NE555DR, Texas Instruments, Dallas, TX, USA). The total cost of these components would be <$25. Furthermore, the device can be easily scaled up to include two or more parallel imaging channels to achieve a higher sample throughput, which is proportional to the number of imaging channels.

Methods

Cell preparation

MCF7 cell lines were purchased from ATCC (Manassas, Virginia, USA). Cells were plated with 10 mL of growth media in a T75 flask (Corning Inc., New York, USA) at a concentration of 1 × 10 5 cells/mL. The growth media was composed of Dulbecco’s Modified Eagle Medium (DMEM, Gibco®, Life Technologies, Carlsbad, California, USA) supplemented with 10% (v/v) foetal bovine serum (FBS, Gibco®, Life Technologies, Carlsbad, California, USA) and 1% penicillin–streptomycin (Sigma-Aldrich Co., St. Louis, Missouri, USA). Cells were grown in a humidified incubator at 37 °C in a 5% CO 2 environment. Cells were harvested by treating them with 0.25% trypsin-EDTA (Gibco®, Life Technologies, Carlsbad, California, USA) for 3 min, 2 to 3 days after seeding, depending on confluency. Then, the cells were pelleted by centrifuging for 3 min at 1200 RPM and resuspended in the growth media to a final concentration of 1 × 10 6 cells/mL.

Sample preparation

Rare cell dilution: The MCF7 cells were serially diluted in Dulbecco’s phosphate-buffered saline (DPBS, Sigma-Aldrich Co., St. Louis, Missouri, USA) at different concentrations (2 × 10 4 cells/mL, 2 × 10 3 cells/mL, and 2 × 10 2 cells/mL).
The dilution of MCF7 cells in whole blood was prepared by mixing the cell solution with whole blood at a ratio of 1:19 (v/v). Most of the experiments were performed by mixing 200 μL of cell solution with 3.8 mL of whole blood. Healthy human whole blood (from anonymous and existing samples) was obtained from the UCLA Blood and Platelet Center.

Bead washing: CELLection Epithelial Enrich Dynabeads (Invitrogen, Carlsbad, California, USA) were first resuspended in DPBS and vortexed for 30 s. A magnet (DX08B-N52, K&J Magnetics, Inc., Pipersville, Pennsylvania, USA) was then used to separate the Dynabeads, and the supernatant was discarded. This process was repeated three times, and the Dynabeads were resuspended in DPBS at the initial volume.

Rare cell separation: The washed Dynabeads were then added to the MCF7-spiked whole blood sample at a concentration of 2.5 μL of beads per 1.0 mL of blood sample. The mixture was incubated for 30 min with gentle tilting and rotation. A magnet was placed under the vial for 5 min, and the supernatant was then discarded. To this solution, we added 1 mL of cold DPBS buffer and mixed it gently by tilting from side to side. This magnetic separation procedure was repeated five times. After the final step, the sample was resuspended in 0.7 mL of DPBS and gently mixed with 2.5 mL of 400 cP methyl cellulose solution (Sigma-Aldrich Co., St. Louis, Missouri, USA) using a pipette. The sample was incubated for 5 min to reduce the number of bubbles before it was loaded into a glass capillary tube (Part # BRT 2-4-50; cross-section inner dimension of 2 mm × 4 mm; $11.80 per foot; Friedrich & Dimmock, Inc., Millville, New Jersey, USA). The ends of the capillary tube were sealed with parafilm before the tube was mounted onto our computational cytometer for imaging and cell screening.

Design of the computational cytometer based on magnetically modulated lensless speckle imaging

As shown in Fig. 1, our device hardware consists of an imaging module and a linear translation module. The imaging module, i.e., the scanning head in Fig. 1, contains a laser diode (650 nm wavelength, AML-N056-650001-01, Arima Lasers Corp., Taoyuan, Taiwan, China) for illumination, which has an output power of ~1 mW. The sample is loaded inside a capillary tube with a rectangular cross-section, which is placed ~7.6 cm below the light source. A CMOS image sensor (acA3800-14um, Basler, Ahrensburg, Germany) with a pixel size of 1.67 μm, which is placed below the glass tube with a narrow gap (~1 mm), is used to capture the holographic speckle patterns generated by the liquid sample. To induce oscillatory motion in the labelled cells in the sample, two electromagnets (Part #XRN-XP30 × 22, Xuan Rui Ning Co., Ltd., Leqing, Zhejiang Province, China) with custom-machined permalloy extensions are placed on either side of the glass tube. An alternating driving current (square wave) is supplied to each of the electromagnets, with a 180° phase shift between them, which creates an alternating pulling force on the magnetic particles within the sample. The low level of the driving current is 0, and the high level is ~500 mA. The frequency is 1 Hz, which was experimentally optimized to maximize the signal corresponding to the magnetic-bead-conjugated cancer cells. The linear translation stage is custom-built using off-the-shelf components. A bipolar stepper motor (No.
324, Adafruit Industries LLC., New York, USA) with two timing pulleys and a timing belt is used to provide mechanical actuation, and the imaging module is guided by a pair of linear motion sliders and linear motion shafts on either side of the scanning head. 3D-printed plastic is used to construct the housing for the scanning head, and laser-cut acrylic is used to create the outer shell of the device.

Image acquisition

After the sample is loaded into the capillary tube and placed onto our computational cytometer, the image acquisition procedure begins. The linear translation stage moves the scanning head to a series of discrete positions along the glass tube. At each position, the stage stops, allowing the CMOS image sensor to capture a sequence of 120 holograms at a frame rate of 26.7 fps, before moving on to the next position. The image data are saved to a solid-state drive (SSD) for storage and further processing. Because the FOV corresponding to the edges (i.e., top and bottom rows) of the image sensor is subject to a highly unbalanced magnetic force field owing to its closeness to one of the electromagnets, only the central 1374 rows of the image sensor’s pixels are used to capture the image sequence, where the magnetic forces from the two electromagnets are relatively balanced. Because the CMOS image sensor temperature rises quickly when it is turned on, it tends to cause undesired flow inside the glass tube due to convection. Therefore, a scanning pattern is engineered to reduce the local heating of the sample: if we denote 1, 2, …, 32 as the indices of the spatially adjacent scanning positions, the scanning pattern follows 1, 9, 17, 25, 2, 10, 18, 26, …. This scanning pattern ensures that a given part of the sample cools down before the scanning head moves back to its neighbourhood. The power to the image sensor is also cut off during the transitions between successive scanning positions, which is implemented by inserting a MOSFET-based switch into the power line of the USB cable.

Computational detection and localization of cell candidates and deep-learning-based classification

The image processing procedure (Fig. 4) can be divided into two parts: (1) a preliminary screening step, which applies computational drift correction and MCF7 candidate detection to the entire FOV to locate target cell candidates in 2D, and (2) a classification step, which refocuses the holographic image sequence to each individual MCF7 candidate in its local area, generates an in-focus amplitude and phase video for each candidate, and classifies the corresponding video with a trained deep neural network. This procedure is further detailed below.

1. Preliminary screening

Computational drift correction

The sample fluid in the glass capillary tube often drifts slowly throughout the duration of the image acquisition, owing to, e.g., the imperfect sealing at the ends of the tube and convection caused by the heat of the image sensor. Because the detection and classification of the target cells are largely based on their periodic motion, this drift must be corrected. Since our sample is embedded within a viscous methyl cellulose solution, minimal turbulent flow is observed, and the drifting motion within our imaged FOV is almost purely translational.
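As a concrete illustration of the interleaved scanning order described in the acquisition step above, the following sketch (in Python rather than the authors' MATLAB environment; the function name and the stride parameter are our own) reproduces the quoted sequence 1, 9, 17, 25, 2, 10, 18, 26, … for 32 positions:

```python
def scan_order(n_positions=32, stride=8):
    """Interleaved scanning order that avoids imaging spatially adjacent
    positions back-to-back, giving each region of the tube time to cool."""
    assert n_positions % stride == 0
    return [start + k * stride
            for start in range(1, stride + 1)
            for k in range(n_positions // stride)]

print(scan_order())  # [1, 9, 17, 25, 2, 10, 18, 26, ..., 8, 16, 24, 32]
```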
We used a phase correlation method 73 to estimate the relative translation of each frame in the sequence with respect to a reference frame (chosen to be the middle frame in the holographic image sequence) and used 2D bilinear interpolation to remove the drift between frames. As shown in Fig. S2, this drift correction step successfully removed many false-positive detections that the computational motion analysis (CMA) step would otherwise produce owing to the background drift.

Detection of target cell candidates

The detection of the target cell candidates plays a key role in automatically analysing the sample, as it greatly narrows down the search space for the rare cells of interest and allows the subsequent deep-learning-based classification to be applied to a limited number of holographic videos. In the preliminary screening stage, the lateral locations of the MCF7 candidates are detected. Each frame of the raw hologram sequence is propagated to various axial distances throughout the sample volume using a high-pass-filtered angular spectrum propagation kernel, which can be written as:

$${\mathbf{B}}_i(z_j) = \mathrm{HP}\left[{\mathcal{P}}({\mathbf{A}}_i, z_j)\right]$$ (1)

where HP(·) denotes the high-pass filter (see Supplementary Information for details), \({\mathcal{P}}\)(·) denotes angular spectrum propagation 59 , A i denotes the i th frame of the raw hologram sequence after the drift correction, and z j denotes the j th propagation (axial) distance. The selected propagation distances ranged from 800 μm to 5000 μm with a step size of 100 μm to ensure coverage of all possible MCF7 candidates within the sample tube. A zoomed-in image of B i ( z j ) corresponding to an example region is shown in Fig. 4e. Next, for every given propagation distance, the CMA algorithm is applied to reveal the oscillatory motion of the target cells within the sample, which focuses on periodic changes in the recorded frames:

$${\mathbf{C}}(z_j) = \frac{1}{N_{\mathrm{F}} - N}\sum_{i = 1}^{N_{\mathrm{F}} - N}\left(\frac{1}{2}\left|{\mathbf{B}}_i(z_j) - {\mathbf{B}}_{i + N/2}(z_j)\right| + \frac{1}{2}\left|{\mathbf{B}}_{i + N/2}(z_j) - {\mathbf{B}}_{i + N}(z_j)\right| - \left|{\mathbf{B}}_i(z_j) - {\mathbf{B}}_{i + N}(z_j)\right|\right)$$ (2)

where C ( z ) and B ( z ) are shorthand notations for C ( x , y ; z ) and B ( x , y ; z ), respectively, N F is the total number of recorded frames (in our case, N F = 120), and N is chosen such that the time difference between the i th frame and the ( i + N )th frame is equal to the period of the alternating magnetic field. Therefore, the first two terms inside the summation in Eq. ( 2 ) represent half-period movements at the j th propagation distance, and the last term represents the whole-period movement. Ideally, for objects that oscillate periodically with the alternating magnetic force field, the first two terms should be relatively large, and the last term should be relatively small. For randomly moving objects, the three terms in the summation approximately cancel each other out. As a result, C ( x , y ; z ) is a 3D contrast map that has high values at the locations of periodic motion matching the frequency of the external magnetic field. An example of C is shown in Fig. 4f.
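A minimal NumPy sketch of Eq. (2) follows (Python, not the authors' MATLAB code); it assumes the high-pass-filtered, drift-corrected frames at one propagation distance are already stacked into an array, and that N is even so that N/2 indexes a half-period:

```python
import numpy as np

def cma_contrast(B, N):
    """Computational motion analysis (CMA) contrast map, following Eq. (2).

    B: high-pass-filtered, drift-corrected frames at one propagation
       distance, stacked as an array of shape (N_F, height, width).
    N: number of frames spanning one full period of the magnetic
       modulation (assumed even, so that N // 2 spans a half-period).
    """
    n_frames = B.shape[0]
    half = N // 2
    acc = np.zeros(B.shape[1:], dtype=float)
    for i in range(n_frames - N):
        acc += (0.5 * np.abs(B[i] - B[i + half])          # half-period term
                + 0.5 * np.abs(B[i + half] - B[i + N])    # half-period term
                - np.abs(B[i] - B[i + N]))                # whole-period term
    return acc / (n_frames - N)
```

Pixels that oscillate at the modulation period accumulate large half-period differences and small whole-period differences, so the map is bright exactly where the bead-labelled cells are.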
To simplify segmentation, a maximum intensity projection along the axial direction (i.e., z ) is applied to flatten the 3D image stack into a 2D image, which can be written as:

$${\mathbf{D}}(x,y) = \max_z\left[{\mathbf{C}}(x,y;z_1), {\mathbf{C}}(x,y;z_2), \ldots, {\mathbf{C}}(x,y;z_{N_{\mathrm{H}}})\right]$$ (3)

where x and y are the lateral indices and N H is the total number of axial positions (in our case, N H = 43). An example of D is shown in Fig. 4c, with a zoomed-in image shown in Fig. 4g. Thresholding-based segmentation was applied to the calculated 2D image D , and the resulting centroids were used as the lateral positions of the MCF7 candidates.

2. Classification

Autofocusing and video generation

After the preliminary screening, which identifies the lateral centroids of potential target cell candidates, the subsequent processing is applied to each MCF7 candidate only within its local area. Autofocusing 62 , 63 was first performed to locate each MCF7 candidate in the axial direction. Because C ( x , y ; z j ) should have a higher value when approaching the in-focus position of each MCF7 candidate, the approximate axial position was obtained by maximizing (as a function of z j ) the sum of the pixel values of C ( x , y ; z j ) ( j = 1, 2, …, N H ) in a local neighbourhood around each individual MCF7 candidate. We chose a local neighbourhood size of 40 × 40 pixels (i.e., 66.8 μm × 66.8 μm). This process can be written as follows:

$$\hat{z}_k = \mathop{\mathrm{arg\,max}}\limits_{j = 1,2,\ldots,N_{\mathrm{H}}} \sum_{x,y = -19}^{20} {\mathbf{C}}(x_k + x, y_k + y; z_j)$$ (4)

where \(\hat{z}_k\) is the resulting in-focus position for the k th potential target cell candidate and x k and y k are the lateral centroid coordinates of the k th potential target cell candidate. The same criterion can be applied again with finer axial resolution to obtain a more accurate estimate of the axial distance for each MCF7 candidate. We used a step size of 10 μm in this refined autofocusing step. Two examples of this process are shown in Fig. 4h. Alternatively, the Tamura coefficient 62 , 63 could also be used as the autofocusing criterion to determine the in-focus plane. Finally, the in-focus amplitude and phase video corresponding to each MCF7 candidate was generated by digitally propagating every frame of the drift-corrected hologram sequence to the candidate’s in-focus plane. The final video has 120 frames at 26.67 fps with both amplitude and phase channels, and each frame has a size of 64 × 64 pixels (pixel size = 1.67 μm). Two examples corresponding to two cell candidates are shown in Fig. 4i.

Target cell detection using densely connected P3D CNN

Each MCF7 candidate video was fed into a classification neural network (Fig. 6), which outputs the probability of an MCF7 cell being present in the corresponding video (Fig. 4j, k). We designed a novel structure for the classification neural network, named the densely connected P3D CNN, which is inspired by the Pseudo-3D Residual Network 58 and the Densely Connected Convolutional Network 60 . The original P3D CNN 58 used a mixture of three different designs of the P3D blocks to gain structural diversity, which resulted in better performance.
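The projection of Eq. (3) and the coarse focus search of Eq. (4) can be sketched in a few lines of NumPy (an illustration under our own naming, not the authors' code); we assume the contrast stack C is indexed as (z, y, x):

```python
import numpy as np

def localize_axially(C, centroids):
    """Eq. (3): flatten the contrast stack by a maximum intensity projection;
    Eq. (4): for each lateral centroid, pick the axial slice maximizing the
    summed contrast in a 40 x 40-pixel neighbourhood (offsets -19..+20).

    C: contrast maps of shape (N_H, height, width); centroids: (x_k, y_k).
    """
    D = C.max(axis=0)  # 2D map used for thresholding-based segmentation
    focus_indices = []
    for xk, yk in centroids:
        patch = C[:, yk - 19: yk + 21, xk - 19: xk + 21]
        focus_indices.append(int(patch.sum(axis=(1, 2)).argmax()))
    return D, focus_indices
```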
In this work, we introduced a densely connected structure to the P3D CNN by adding dense (skip) connections inside the spatio-temporal convolution block (dashed black arrows in the Fig. 6 inset) to unify the three different P3D blocks. This allowed a simpler network design that was easier to implement for our task.

Fig. 6: Structure of the densely connected P3D CNN. The network consists of convolutional layers, a series of dense blocks, a fully connected layer and a softmax layer. As shown in the inset, each dense spatio-temporal convolution block was constructed by introducing skip connections between the input and output of the convolutional layers in the channel dimension, where red represents the input of the dense block, green and blue represent the output of the spatial and temporal convolutional layers, respectively, and yellow represents the output of the entire block.

The detailed structure of the densely connected P3D CNN is shown in Fig. 6. The network contains five densely connected spatio-temporal convolutional blocks. As shown in the inset of Fig. 6, each block consists of a 1 × 3 × 3 spatial convolutional layer (Conv s ) and a 3 × 1 × 1 temporal convolutional layer (Conv t ), followed by a max pooling layer (Max). Each spatial (or temporal) convolutional layer is a composition of three consecutive operations: batch normalization, a rectified linear unit (ReLU) and a spatial (or temporal) convolution (with stride = 1 and the output channel number equal to the growth rate k = 8). In each block, we introduced skip connections between the input and output of the Conv s layer as well as the Conv t layer by concatenating ( ⊕ ) the input and the output in the channel dimension. For a given input tensor m p , the densely connected spatio-temporal convolutional block maps it to the output tensor m p+1 , which is given by:

$$m_{p + 1} = \mathrm{Max}\left[\mathrm{Conv}_t\left(\mathrm{Conv}_s(m_p) \oplus m_p\right) \oplus \left(\mathrm{Conv}_s(m_p) \oplus m_p\right)\right]$$ (5)

For example, consider an input video with a size of c × t × h × w , where c , t , h and w denote the number of channels, the number of frames (time), and the height and width of each frame (space), respectively. Here, c = 2, t = 120, and h = w = 64. We first pass the video through a 1 × 7 × 7 spatial convolutional layer (stride = 2) and a 9 × 1 × 1 temporal convolutional layer (stride = 3) sequentially. The output channel numbers of the layers are given in each box in Fig. 6. Then, the data go through five dense blocks, where, between the 2nd and 3rd dense blocks, we add an additional 3 × 1 × 1 (stride = 1) convolutional filter with no padding to ensure that the time and space dimensions are equal. A fully connected (FC) layer with a 0.5 dropout rate and a softmax layer are then applied, which output the class probability (target rare cell or not) for the corresponding input video. Finally, a decision threshold is applied to the class probability output to generate the final positive/negative classification, where the decision threshold is tuned based on the training/validation data to reduce the FPR (detailed in the next sub-section, “Network training and validation”).

Network training and validation

We performed ten experiments (i.e., ten samples) to create the training/validation data sets for our classifier and then used the trained classifier to perform blind testing on additional serial dilution experimental data (Fig. 5), which had no overlap with the training/validation data.
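A minimal PyTorch sketch of one such block, following Eq. (5), is given below. This is our own reconstruction, not the authors' implementation: the BN-ReLU-convolution ordering, growth rate k = 8 and kernel shapes follow the text, while the max-pooling size is an assumption on our part.

```python
import torch
import torch.nn as nn

class DenseP3DBlock(nn.Module):
    """Densely connected spatio-temporal block implementing Eq. (5):
    m_{p+1} = Max[ Conv_t(Conv_s(m) (+) m) (+) (Conv_s(m) (+) m) ],
    where (+) is concatenation along the channel dimension."""

    def __init__(self, in_ch, growth=8):
        super().__init__()
        # Spatial layer: BN -> ReLU -> 1x3x3 convolution, stride 1
        self.conv_s = nn.Sequential(
            nn.BatchNorm3d(in_ch), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, growth, (1, 3, 3), padding=(0, 1, 1)))
        # Temporal layer: BN -> ReLU -> 3x1x1 convolution, stride 1
        self.conv_t = nn.Sequential(
            nn.BatchNorm3d(in_ch + growth), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch + growth, growth, (3, 1, 1), padding=(1, 0, 0)))
        self.pool = nn.MaxPool3d(2)       # pooling size assumed, not stated
        self.out_ch = in_ch + 2 * growth  # two concatenations add 2k channels

    def forward(self, m):
        s = torch.cat([self.conv_s(m), m], dim=1)            # Conv_s(m) (+) m
        return self.pool(torch.cat([self.conv_t(s), s], dim=1))

# e.g., a (batch, channel, time, height, width) clip of 2 x 120 x 64 x 64:
x = torch.randn(1, 2, 120, 64, 64)
y = DenseP3DBlock(in_ch=2)(x)  # -> shape (1, 18, 60, 32, 32)
```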
Among the ten experiments used to construct the training/validation data set, five were negative controls, and the other five were spiked whole-blood samples at an MCF7 concentration of 10 3 mL −1 . When manually labelling the video clips to create the training/validation data set, we noticed that some videos were difficult to label, in that the annotators could not make a confident distinction. Therefore, to ensure optimal labelling accuracy, our negative training data came only from the five negative control experiments, where all the candidate videos from those experiments were used to construct the negative data set. The positive training data were manually labelled by two human annotators using the five experiments spiked at 10 3 mL −1 , where only the video clips that were labelled as positive with high confidence by both annotators were selected for the positive training data set, while all the others were discarded. Next, the training/validation data were randomly partitioned into a training set and a validation set with no overlap between the two. The training set contained 1,713 positive videos and 11,324 negative videos. The validation set contained 788 positive videos and 3,622 negative videos. The training data set was further augmented by randomly mirroring and rotating the frames by 90°, 180° and 270°. The convolutional layer weights were initialized using a truncated normal distribution, while the weights for the FC layer were initialized to zero. Trainable parameters were optimized using an adaptive moment estimation (Adam) optimizer with a learning rate of 10 −4 and a batch size of 240. The network converged after ~800–1000 epochs. The network structure and hyperparameters were first optimized to achieve high sensitivity and specificity for the validation set. At a default decision threshold of 0.5, a sensitivity and specificity of 78.4% and 99.4%, respectively, were achieved for the validation set; a sensitivity and specificity of 77.3% and 99.5%, respectively, were achieved for the training set. After this initial step, because our rare cell detection application requires the classifier to have a very low FPR, we further tuned the decision threshold of our classifier to avoid false positives. For this, the training and validation data sets were combined to increase the total number of examples, and we gradually increased the decision threshold (for positive classification) from 0.5 while monitoring the FPR for the combined training/validation data set. We found that a decision threshold of 0.99999 was able to eliminate all false-positive detections in the combined training/validation data set. We further raised the decision threshold to 0.999999 to account for potential overfitting of the network to the training/validation data and to further reduce the risk of false-positive detections. At a decision threshold of 0.999999, as expected, the TPR dropped to 10.5% (see Fig. S3, which reports the receiver operating characteristic (ROC) curve based on the validation data set, with an area under the curve of 0.9678). This low TPR results in underdetection of the target cells, as is also evident in our serial dilution results (Fig. 5). The selection of the decision threshold depends on the specific application of interest and should be tuned based on the expected abundance of target cells and the desired LoD.
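The augmentation just described (random mirroring plus 90°/180°/270° rotations) is straightforward to express; the sketch below is a hedged illustration in PyTorch, with our own function name, for a clip tensor of shape (channels, time, height, width):

```python
import torch

def augment(video):
    """Randomly mirror and rotate a (C, T, H, W) clip by a multiple of
    90 degrees, matching the augmentation described in the text."""
    if torch.rand(()) < 0.5:
        video = torch.flip(video, dims=[3])       # horizontal mirror
    k = int(torch.randint(0, 4, ()))              # 0, 90, 180 or 270 degrees
    return torch.rot90(video, k, dims=[2, 3])     # rotate in the H-W plane
```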
For the application considered in this work, because the expected number of target cells at the lowest concentration (i.e., 10 mL −1 ) is extremely low, the decision threshold was tuned to a high level to suppress false positives, which in turn resulted in a very low TPR. However, for less demanding cell detection or cytometry applications, where the desired LoD is not as stringent, the decision threshold may be relaxed to a lower level, which also allows the TPR to be higher.

Computation time

Using our current computer code, which is not optimized, it takes ~80 s to preprocess the data within one FOV (corresponding to a volume of 14.7 mm 2 × 2 mm) to extract the MCF7 cell candidates, corresponding to the preliminary screening step in Fig. 4. For each detected cell candidate, it takes ~5.5 s to generate the input video for network classification. The network inference time for each input video is <0.01 s. Based on these numbers, if there are, e.g., ~1500 cell candidates per experiment, the total processing time using the current computer code would be ~3.0 h. However, we should note that the data-processing time depends on various factors, including the computer hardware configuration, the cell concentration in the sample, the programming language and whether the code is optimized for the hardware. In our work, although we used relatively high-performance hardware (an Intel Core i7 CPU, 64 GB of RAM and an Nvidia GeForce GTX 1080Ti GPU) and used some of the GPU functions provided by MATLAB (MathWorks, Natick, MA, USA), we did not extensively optimize our code for speed. A careful optimization of the GPU code should bring a significant speedup in our computation time.

COMSOL simulation of the magnetic force field generated by the electromagnet and the permalloy extension

Because of space constraints, the electromagnet could not be placed sufficiently close to the imaging area, which caused the magnetic force to be low. We used a custom-machined extension rod made of permalloy 74 (relative permeability μ r ~ 100,000) to relay the force field and enhance the relative magnetic force on the target cells by ~40 times. To simulate the magnetic force field distribution near an electromagnet with and without the permalloy extension, a finite element method (FEM) simulation was conducted using COMSOL Multiphysics (version 5.3, COMSOL AB, Stockholm, Sweden). A 3D model was developed using the magnetic field interface provided in the COMSOL AC/DC physics package. A stationary study was constructed based on the geometry of a commercially available electromagnet, where the core was modelled as a silicon steel cylinder (radius = 3 mm, height = 10 mm), and the coil was modelled as a surface current of 10 A/m on the side of the core running in the azimuthal direction. The permalloy extension was modelled using Permendur. A thick layer of air was added as a coaxial cylinder with a radius of 10 mm and a height of 30 mm. The magnetic flux density inside the simulation space was computed using the magnetic field module. Then, a coefficient form PDE module in the mathematics library was used to derive the relative magnetic force field. The magnetic force experienced by superparamagnetic beads is given by:

$${\mathbf{F}} = \frac{V\chi}{\mu_0}\left({\mathbf{B}} \cdot \nabla\right){\mathbf{B}}$$ (6)

where V is the volume of the magnetic particle, χ is the magnetic susceptibility, μ 0 is the magnetic permeability of vacuum and B is the magnetic flux density.
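Given a simulated flux density B sampled on a regular grid, Eq. (6) can also be evaluated numerically outside COMSOL; the following NumPy sketch (our own illustration, with assumed array layout and parameter names) uses finite-difference gradients:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def bead_force(Bx, By, Bz, spacing, V, chi):
    """Evaluate F = (V*chi/mu0) * (B . grad) B, i.e. Eq. (6), on a grid.

    Bx, By, Bz: components of the flux density B in tesla, each of shape
    (nx, ny, nz); spacing: uniform grid step in metres; V: bead volume
    (m^3); chi: magnetic susceptibility of the bead.
    """
    force = []
    for Bc in (Bx, By, Bz):
        gx, gy, gz = np.gradient(Bc, spacing)   # spatial gradients of B_c
        force.append(V * chi / MU0 * (Bx * gx + By * gy + Bz * gz))
    return force  # [Fx, Fy, Fz], each of shape (nx, ny, nz)
```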
Our simulation results are shown in Fig. S1. The results in Fig. S1b indicate that the relative magnetic force decreases rapidly as a function of the distance from the electromagnet. However, by using a permalloy extension, the relative magnetic force at the sample location is enhanced by ~40 times. | Detection of rare cells in blood and other bodily fluids has numerous important applications including diagnostics, monitoring disease progression and evaluating immune response. For example, detecting and collecting circulating tumour cells (CTCs) in blood can help cancer diagnostics, study their role in the metastatic cascade and predict patient outcomes. However, because each millilitre of whole blood contains billions of blood cells, the rare cells (such as CTCs) that occur at extremely low concentrations (typically lower than 100-1000 cells per millilitre) are very difficult to detect. Although various solutions have been developed to address this challenge, existing techniques in general are limited by high cost and low throughput. Researchers at the UCLA Henry Samueli School of Engineering have developed a new cytometry platform to detect rare cells in blood with high throughput and low cost. Published in Light: Science and Applications, this novel cytometry technique, termed magnetically modulated lensless speckle imaging, first uses magnetic bead labelling to enrich the target cells. Then the enriched liquid sample containing magnetic bead-labelled target cells is placed under an alternating magnetic field, which causes the target cells to oscillate laterally at a fixed frequency. At the same time, a laser diode illuminates the sample from above and an image sensor positioned below the sample captures a high-frame-rate lensless video of the time-varying optical pattern generated by the sample. The recorded spatiotemporal pattern contains the information needed to detect the oscillating target cells. The researchers built a compact and low-cost prototype of this computational lensless cytometer using off-the-shelf image sensors, laser diodes and electromagnets, and used a custom-built translation stage to allow the imager unit to scan liquid sample loaded in a glass tube. The prototype can screen the equivalent of ~1.2 mL of whole blood sample in ~7 min, while costing only ~$750 and weighing ~2.1 kg. Multiple parallel imaging channels can also be easily added to the system to further increase sample throughput. To ensure optimal sensitivity and specificity of rare cell detection, a two-step computational procedure was developed, which involved a computational motion analysis algorithm to detect micro-objects oscillating at the specified alternating frequency, and then a deep learning-based classification algorithm based on a densely connected pseudo-3-D convolutional neural network (P3D CNN) structure. The deep neural network greatly improved the accuracy of the technique, resulting in a limit of detection of 10 cells per millilitre of whole blood. This AI-driven cytometry technique relies on the magnetic particles for both cell enrichment and detection, which reduces the time and cost for detecting rare cells while maintaining a high sensitivity. This compact, low-cost yet powerful cytometry system may find numerous applications especially in resource-limited settings. | 10.1038/s41377-019-0203-5
Space | NGC 5128: Mysterious cosmic objects erupting in X-rays discovered | Jimmy A. Irwin et al. Ultraluminous X-ray bursts in two ultracompact companions to nearby elliptical galaxies, Nature (2016). DOI: 10.1038/nature19822 Journal information: Nature | http://dx.doi.org/10.1038/nature19822 | https://phys.org/news/2016-10-ngc-mysterious-cosmic-erupting-x-rays.html |

Abstract

A flaring X-ray source was found near the galaxy NGC 4697 (ref. 1 ). Two brief flares were seen, separated by four years. During each flare, the flux increased by a factor of 90 on a timescale of about one minute. There is no associated optical source at the position of the flares 1 , but if the source was at the distance of NGC 4697, then the luminosities of the flares were greater than 10 39 erg per second. Here we report the results of a search of archival X-ray data for 70 nearby galaxies looking for similar flares. We found two ultraluminous flaring sources in globular clusters or ultracompact dwarf companions of parent elliptical galaxies. One source flared once to a peak luminosity of 9 × 10 40 erg per second; the other flared five times to 10 40 erg per second. The rise times of all of the flares were less than one minute, and the flares then decayed over about an hour. When not flaring, the sources appear to be normal accreting neutron-star or black-hole X-ray binaries, but they are located in old stellar populations, unlike the magnetars, anomalous X-ray pulsars or soft γ repeaters that have repetitive flares of similar luminosities.

Main

One flaring source (hereafter Source 1) is located at right ascension RA = 12 h 42 min 51.4 s and declination dec. = +02° 38′ 35″ (J2000) near the Virgo elliptical galaxy NGC 4636 (distance from Milky Way d = 14.3 Mpc) 2 , 3 . A plot of the cumulative X-ray photon arrival times and a crude background-subtracted light curve for this source, derived from an approximately 76,000-s Chandra observation taken on 2003 February 14, are shown in Fig. 1. Before and after the flare, the X-ray count rate of the source was (2.1 ± 0.2) × 10 −3 counts per second (errors given here and elsewhere, unless otherwise stated, are 1 σ ). This count rate corresponds to a 0.3–10-keV luminosity of (7.9 ± 0.8) × 10 38 erg s −1 for a power-law spectral model with a best-fit photon index of Γ = 1.6 ± 0.3 (see Methods), assuming the source is at the distance of NGC 4636. About 12,000 s into the observation, the source flared dramatically, with six photons detected in a 22-s span, yielding a conservative peak flare count rate corresponding to an increase in emission by a factor of 70–200 over the persistent (non-flare) state. Assuming the same spectral model as in the persistent state, the flare peaks at approximately 9 × 10 40 erg s −1 . Following the initial 22-s burst, the source emitted at a less intense, but still elevated, rate for the next 1,400 s. In total, 25 photons were detected during the flare, giving an average X-ray luminosity of (7 ± 2) × 10 39 erg s −1 and a total flare energy of (9 ± 2) × 10 42 erg. We assess the probability of this burst being due to a random Poisson fluctuation of the persistent count rate to be about 6 × 10 −6 (see Methods). Although the photon statistics during the 25-photon burst were limited, there was no evidence that the spectrum of the source differed during the flare. There are no apparent flares in the combined 370,000 s of the other Chandra and XMM-Newton observations of NGC 4636, either before or after 2003 February 14 (see Extended Data Table 1).
Figure 1: Chandra cumulative X-ray photon arrival time and light curve for Source 1 in the NGC 4636 globular cluster. a , In total, 162 photons were detected over the approximately 76,000-s observation. The flare began after 12,000 s of observation and lasted for 1,400 s. The beginning and end of the flare are indicated by up and down arrows, respectively. b , Within the grey shaded region in a we derive the background-subtracted X-ray light curve. Each time bin contains five photons, with error bars representing the 1 σ uncertainty expected from Poisson statistics.

A previous study 4 spatially associated Source 1 with a purported globular cluster of NGC 4636 that was identified through Washington C-band and Kron–Cousins R-band system CTIO Blanco Telescope imaging 5 . Although faint ( R = 23.02), the optical source identified as CTIO ID 6444 had a C − R = 1.94 colour that is consistent with a globular cluster of near-solar metallicity 5 . Follow-up spectroscopic observations 6 of a sub-sample of the globular cluster candidates in the vicinity of the globular cluster that hosts Source 1 found that 52 out of 54 (96%) of the objects with a C − R colour and magnitude similar to CTIO ID 6444 were confirmed to be globular clusters of NGC 4636, with the two remaining objects identified as foreground Galactic stars. The hard X-ray spectrum of Source 1 during its persistent phase (see Methods) argues against it being a late-type Galactic dwarf star, for which the X-ray emission tends to be quite soft. Galactic RS CVn stars exhibit hard X-ray emission in quiescence and are known to undergo X-ray flares, but these stars have much higher optical-to-X-ray flux ratios 7 than Source 1. Therefore, it is highly likely that the optical counterpart of Source 1 is a globular cluster within NGC 4636. On the basis of its absolute R-band magnitude and ratio of mass to light ( M / L = 4.1, with M in units of solar masses M ⊙ and L in solar R-band luminosities), we estimate the globular cluster to have a mass of 3 × 10 5 M ⊙ (see Methods). A second X-ray source, located near the elliptical galaxy NGC 5128, showed similar flaring behaviour. In the 2007 March 30 Chandra observation of NGC 5128, a source at RA = 13 h 25 min 52.7 s, dec. = −43° 05′ 46″ (J2000; hereafter Source 2) began the observation emitting at a count rate of (9.5 ± 1.5) × 10 −4 counts per second. This count rate corresponds to a 0.3–10-keV luminosity of (4.4 ± 0.7) × 10 37 erg s −1 using the best-fit Γ = 1.0 ± 0.2 power-law photon index and a distance 8 of 3.8 Mpc for NGC 5128. Midway through the observation, the source flared dramatically, with 10 photons detected in a 51-s time span, corresponding to a conservative peak luminosity estimate of approximately 10 40 erg s −1 , after which the flare subsided. Following the flare, Source 2 returned to its pre-flare luminosity for the remainder of the observation. Inspection of other archival Chandra and XMM-Newton data (see Extended Data Table 1) revealed four more flares of Source 2. Three were observed with Chandra on 2007 April 17, 2007 May 30 and 2009 January 4, and the fourth flare was observed with XMM-Newton on 2014 February 9. In each instance, during the initial fast (<30 s) rise of the flare, the count rate increased by a factor of 200–300 over the persistent count rate to about 10 40 erg s −1 , after which the flare subsided. The total flare energy of each of the five flares was approximately 10 42 erg.
The light curves of the four Chandra flares look remarkably similar, as illustrated in Fig. 2. We combined these four light curves into a single background-subtracted light curve (see Methods for details), also shown in Fig. 2. Following the fast rise of the source by a factor of about 200 to a peak luminosity approaching 10 40 erg s −1 , the source remained in a roughly steady ultraluminous state for approximately 200 s before decaying over a time span of around 4,000 s (Fig. 2). Fitting a power law to the combined spectra of the four Chandra flares yielded a best-fit photon index of Γ = 1.2 ± 0.3. Therefore, much like for Source 1, the spectrum of Source 2 did not change appreciably during the flare.

Figure 2: Individual and combined background-subtracted X-ray light curves for Source 2 in the NGC 5128 globular cluster or ultracompact dwarf. a , The X-ray light curves for the four Chandra flares show similar behaviour. Each time bin contains five photons. b , The combined light curve of the four flares illustrates the fast rise and slow decay of the flares. Each time bin contains ten photons. The time is given relative to the beginning of the flare. c , Zooming in on the grey shaded region in b reveals that the luminosity during the flare rose quickly and remained steady in an ultraluminous state for approximately 200 s before decaying back to its persistent level after about 1 h. All error bars represent 1 σ uncertainties.

Source 2 has previously been identified with the object HGHH-C21 (also called GC 0320) within NGC 5128 9 , 10 , 11 . With a spectroscopically determined recessional velocity 10 (460 km s −1 ) within 110 km s −1 of that of NGC 5128, the source is clearly at the distance of NGC 5128. This implies a projected half-light radius 12 of 7 pc. With a velocity dispersion of 20 km s −1 and an inferred stellar mass 12 of 3.1 × 10 6 M ⊙ , the optical counterpart is either a massive globular cluster or, given its unusual elongated shape, more likely an ultracompact dwarf companion galaxy of NGC 5128. It is unlikely that the flaring and the steady emission in both sources are attributable to two unrelated sources in the same host. Because our flare search technique would have found these flares had they been detected by their flare photons alone, we can calculate the probability that these globular clusters would also host steady X-ray emission more luminous than the persistent emission observed in each (see Methods). The globular cluster hosting Source 1 has a <0.3% probability of containing an X-ray source with a luminosity of more than 8 × 10 38 erg s −1 ; the globular cluster or ultracompact dwarf hosting Source 2 has a <9% probability of containing an X-ray source with a luminosity of more than 4 × 10 37 erg s −1 . Multiplying these probabilities leads to only a <0.02% chance that both flares are unrelated to the steady emission. In the unlikely event that the flares are distinct from the persistent sources, the flaring sources must be flaring by more than two orders of magnitude over their true non-flare luminosities. Summing up all the available archival Chandra and XMM-Newton data (but omitting the Chandra High Resolution Camera (HRC) and transmission grating exposures, which are not sensitive enough to detect a flare of similar intensity to that seen in the Advanced CCD Imaging Spectrometer (ACIS) observations) allows us to constrain the duty cycle and recurrence rate of the flares.
Source 2 flared five times for a total combined flare time of about 20,000 s in a total observation time of 7.9 × 10 5 s, yielding roughly one flare every 1.8 days and a duty cycle of about 2.5%. Source 1 flared once for 1,400 s in a total observation time of 370,000 s. This single flare implies a recurrence timescale of >4 days and a duty cycle of <0.4%. In terms of energetics, variability and survivability, only short- and intermediate-duration soft gamma repeaters (SGRs) 13 and their cousins, the anomalous X-ray pulsars (AXPs) 14 , are comparable to the sources discussed here. However, SGRs and AXPs are believed to be very young and highly magnetized neutron stars, which are not likely to be found in an old stellar population such as a globular cluster or a red ultracompact dwarf galaxy. Our sources are also unlike SGRs and AXPs in that SGR/AXP flares of this magnitude last only a few to a few tens of seconds 15 , 16 , without the hour-long decay seen in our sources. Our sources are also unlikely to be type-II X-ray bursts from neutron stars, which are believed to result from rapid spasmodic accretion onto the neutron star. In addition to having flare-to-pre-flare luminosity ratios of only 10–20, the only type-II burster to reach 10 40 erg s −1 (GRO J1744−28, the Bursting Pulsar) exhibits several sub-minute flares per day when flaring, with total flare energies per burst that are much lower than those of our sources and with different timing properties 17 . Furthermore, the quiescent X-ray luminosity of the Bursting Pulsar is 4–5 orders of magnitude fainter than the long-term luminosities of our sources. Qualitatively, the fast rise and slower decay of Source 2 (Fig. 2) resemble those of type-I bursts from Galactic neutron stars, which typically peak near the Eddington limit of a neutron star. However, the peak luminosities of Sources 1 and 2 are 1–2 orders of magnitude greater than the type-I limit even for helium accretion, and the flares last more than an order of magnitude longer. Rare superbursts from Galactic neutron stars have been known to last for an hour 18 , 19 , but have peak luminosities well below those of our sources. Other X-ray flares of unknown origin 20 , 21 , 22 appear to be one-time transient events, indicating that they were (probably) cataclysmic events with no post-flare emission, unlike our sources. We investigated the light curves of several thousand X-ray point sources within 70 Chandra observations of nearby galaxies and found only the two examples presented here. It would appear that the Milky Way has no analogues of our sources. This is not surprising given the small number (about 40) 23 of X-ray sources in the Milky Way that are brighter than 10 37 erg s −1 , the lack of X-ray binaries more luminous than 10 38 erg s −1 in Galactic globular clusters, and the rarity of burst sources in the extragalactic sample. The nature of Sources 1 and 2 remains uncertain. The increased emission during the burst might result from a narrow cone of beamed emission that crosses our line of sight every few days. However, it is unclear how a pulsed beam would lead to the distinctly asymmetric fast-rise, slower-decay profile. Alternatively, the flare might represent a period of rapid, highly super-Eddington accretion onto a neutron star or stellar-mass black hole, perhaps during the periastron passage of a donor companion star in an eccentric orbit around a compact object.
Such an explanation has been suggested to explain observed (albeit neutron-star Eddington-limited) flares in galaxies 1 , 24 . Finally, the high X-ray luminosity during the peak of the flare might represent accretion onto an intermediate-mass black hole. If the flares are Eddington-limited, then black hole masses of 800 M ⊙ and 80 M ⊙ are implied for Sources 1 and 2, respectively, assuming a bolometric correction of 1.1 appropriate for a 2-keV disk-blackbody-temperature spectral model. The fast rise times constrain the maximum mass of a putative black hole, because the rise time cannot be shorter than the light-travel time across the innermost stable circular orbit of the black hole. For both sources, the fastest rise happened over a 22-s period, implying an upper limit on the mass of a maximally rotating black hole of 2 × 10 6 M ⊙ . A black hole in this mass range is a particularly intriguing explanation for Source 2, if indeed its host is the stripped core of a dwarf galaxy.

Methods

Flare search technique

We searched for flares from all point sources found in 70 Chandra observations of nearby luminous early-type galaxies. The evt2 files were downloaded from the Chandra archive, and the source-detection routine wavdetect in the Chandra Interactive Analysis of Observations (CIAO) package suite was run on the image files to create a list of sources detected at the >3 σ level. Our script then extracted the time-ordered photon arrival times for each source found by wavdetect . Next, our routines scanned each photon event list and searched for bursts by finding the time difference between each photon and the photon three photons forward in time from it (that is, a 4-photon burst). They then calculated the Poisson probability of detecting that many photons over that time interval, given the overall count rate of the source over the entire observation and the number of 4-photon burst trials present over the epoch. This was repeated for 5-photon bursts, 6-photon bursts, 7-photon bursts and so on, up to 20-photon bursts. If the probability of a burst of that magnitude arising from Poisson fluctuations was below our fiducial value (1 in 10 6 ) and the count rate during the N -photon burst was more than ten times the average count rate of the source over the observation, then the source was marked for further study. Note that our technique is more sophisticated than a simple Kolmogorov–Smirnov test on the photon arrival-time distribution. Our technique found several (previously known and unknown) flares from Milky Way M dwarf stars, which were removed from consideration. Among the 7,745 sources detected in the 70 observations, Source 1 and Source 2 were the only non-Galactic sources, apart from previously reported transients or one-time events 20 , 21 , 22 , for which the random Poisson fluctuation probability was less than 10 −6 and the peak-to-persistent count ratio exceeded ten (a cut applied to exclude somewhat variable, but non-flaring, sources).

Chandra and XMM-Newton data reduction

Those Chandra observations containing flares were then analysed further using CIAO 4.7 with CALDB version 4.6.9. The sources exhibited flaring only in ObsID 3926 for Source 1 and in ObsIDs 7799, 7800, 8490 and 10723 for Source 2. The remaining 3 and 36 ObsIDs for Sources 1 and 2, respectively, either showed no flaring activity, did not have the source in the field of view of the detector, or were taken with a lower-sensitivity detector (Chandra HRC/LETG/HETG) with which a flare of comparable intensity would not have been detected.
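A minimal Python sketch of the burst scan described above follows (the authors do not publish code, so the structure and names here are our own; scipy's Poisson survival function supplies the tail probability, and the 3.1-s floor on burst duration reflects the ACIS-I read-out time quoted later in the Methods):

```python
import numpy as np
from scipy.stats import poisson

def find_bursts(times, exposure, n_min=4, n_max=20,
                p_fiducial=1e-6, ratio_min=10.0):
    """Scan a time-ordered photon list for improbable N-photon bursts.

    times: photon arrival times (s) of one source; exposure: total
    observation length (s). A burst is flagged when its trials-corrected
    Poisson probability falls below p_fiducial and its count rate exceeds
    ratio_min times the source's mean rate over the observation.
    """
    t = np.sort(np.asarray(times, dtype=float))
    mean_rate = len(t) / exposure
    flagged = []
    for n in range(n_min, n_max + 1):
        spans = t[n - 1:] - t[: len(t) - n + 1]  # time spanned by n photons
        n_trials = len(spans)
        for t0, dt in zip(t, spans):
            dt = max(dt, 3.1)                    # read-out-time floor
            # chance of >= n photons in dt at the mean rate, over all trials
            p = poisson.sf(n - 1, mean_rate * dt) * n_trials
            if p < p_fiducial and n / dt > ratio_min * mean_rate:
                flagged.append((t0, dt, n, p))
    return flagged
```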
Extended Data Table 1 lists all of the searched Chandra and XMM-Newton observations of NGC 4636 and NGC 5128. The luminosities of the sources in the non-flare observations were consistent with the persistent luminosities of the sources during the flare observations. All flares occurred in ACIS-I pointings. The event lists were reprocessed using the latest calibration files at the time of analysis with the CIAO tool chandra_repro . None of the Chandra observations had background flaring time intervals significant enough to warrant their removal, given that we are interested in point sources. Energy channels below 0.3 keV and above 6.0 keV were ignored. Sources 1 and 2 were both located at least 4.8′ from the ACIS-I aim point in all of the observations, so we used the CIAO tool psfsize_srcs to determine, for each observation, the extraction radius that enclosed 90% of the source photons at an energy of 2.3 keV. All subsequent count rates and 0.3–10-keV X-ray luminosities were corrected for these point spread function (PSF) losses. Because each flare occurred at a large off-axis angle from the aim point, the photons were spread over a large PSF. Consequently, pile-up effects were negligible even at the peak of each flare. We do not believe that the flares can be an instrumental effect such as pixel flaring or cosmic ray afterglows, for which each recorded event during the flare would occur in a single detector pixel. Inspection of all of the flare observations in detector coordinates revealed that the photons were not concentrated in a single detector pixel, but were spread out in detector space in accordance with the dither pattern of Chandra, as would be expected for astrophysical photons. Furthermore, the photon energies of cosmic ray afterglows decrease with each successive photon, which is not the case for the photons recorded during the flare. Although we did not conduct a survey of galaxies observed with XMM-Newton, we did utilize archival XMM-Newton data to search for additional flares from Sources 1 and 2. The XMM-Newton observations of Source 1 did not reveal any detectable flaring behaviour, but the 2014 February 9 observation (ObsID 0724060801) of Source 2 revealed a fifth flare for this source, detected in the MOS1 and MOS2 detectors separately. To analyse the data, we used the 2014 November 4 release of the XMM-Newton Science Analysis Software (SAS), and the data were processed with the tool emproc , which filtered the data for the standard MOS event grades. Source 2 was observed only with the two MOS instruments, because it was not in the field of view of the pn camera (the observations used a restricted window owing to the high count rate of the central active galactic nucleus in NGC 5128). Periods of high background at the end of the observation were removed.

Plots of cumulative photon arrival time for Sources 1 and 2

Each X-ray photon collected by Chandra or XMM-Newton is tagged with a position, energy and time of arrival, allowing a photon-by-photon account of each X-ray source at a time resolution set by the read-out time of the detector (3.1 s for Chandra ACIS-I and 2.6 s for XMM-Newton EPIC MOS). Plots of the cumulative photon arrival time are a simple way to visualize time variability over the course of an observation.
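For readers who want to reproduce such photon lists, the sketch below (our own illustration using astropy; column names follow the standard Chandra level-2 event-file convention, and the circular-region logic is simplified) extracts time-ordered arrival times within a source extraction region:

```python
import numpy as np
from astropy.io import fits

def photon_times(evt2_file, x0, y0, r_pix, e_lo=300.0, e_hi=6000.0):
    """Return sorted photon arrival times (s) within a circular source
    region of an event file; energies are in eV, positions in sky pixels.

    r_pix could be, e.g., the 90% PSF-enclosure radius from psfsize_srcs.
    """
    with fits.open(evt2_file) as hdul:
        ev = hdul["EVENTS"].data
        in_region = (ev["x"] - x0) ** 2 + (ev["y"] - y0) ** 2 <= r_pix ** 2
        in_band = (ev["energy"] >= e_lo) & (ev["energy"] <= e_hi)
        return np.sort(ev["time"][in_region & in_band])
```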
Whereas a source with constant flux will yield a cumulative photon arrival time plot with a constant slope, a flare will appear as a nearly vertical rise in the plot as photons stream in over a short period of time. Figure 1 shows the plot of cumulative photon arrival time for Source 1, illustrating the onset of the flare around 12,000 s after the beginning of the observation. The plots of the cumulative photon arrival time for each of the five flares of Source 2 are shown in Extended Data Fig. 1. The beginning of the flare is evident in each plot. The final Chandra flare (ObsID 10723) occurred just at the end of that short observation. The persistent count rate within the 10″ source-extraction region of the XMM-Newton observation is compromised by background, but the onset of the flare about 16,000 s after the beginning of the observation is evident.

Peak flare rate and the statistical significance of the flares

We estimated the peak flare rate of Source 1 on the basis of the arrival times of the first six photons of the flare, which arrived over a 22-s period. Given the uncertainty in when the peak ended, we neglect the sixth photon of the flare to conservatively estimate a peak count rate of about 0.25 counts per second (five photons in 22 s), after correcting for the 10% of emission that is expected to be scattered out of the source extraction region owing to PSF losses. Background was negligible during the flare and accounted for only 7% of the emission inside the source-extraction region during persistent times. Because Source 1 flared only once, it is necessary to accurately determine the number of independent trials contained in the sources searched within our sample of galaxies, in order to determine the likelihood that the flare could result from a random fluctuation of the persistent count rate. The two sources discussed here were found as part of a 70-observation sample observed with Chandra, composed primarily of large elliptical galaxies at distances of <20 Mpc, with a majority of the galaxies residing in Virgo or Fornax. Within these 70 observations, 7,745 sources were detected, yielding a total of 8.5 × 10 5 photons. This is equivalent to 1.7 × 10 5 independent 5-photon groupings. Statistically, the chance of detecting five or more photons in 22 s from a source that normally emits at 1.9 × 10 −3 counts per second (the count rate in the persistent state before correcting for PSF losses) is 1.0 × 10 −9 . With 1.7 × 10 5 independent trials throughout our sample, the chance of finding a single 5-photon burst like the Source 1 flare is 1.7 × 10 −4 . Searching over multiple photon-burst scales increases the odds of finding a chance statistical fluctuation. A previous study 1 that reports a similar calculation estimates this correction factor to be approximately 2.5 using Monte Carlo simulations; applying that correction here leads to a false detection probability of 4.3 × 10 −4 . A similar exercise for the entire burst (25 photons in 1,400 s) leads to a chance fluctuation probability of 6.4 × 10 −6 . Similar calculations can be performed for each flare of Source 2 detected in Chandra observations. In the four cases, the flare at its peak was detected with 9 photons in 51 s, 6 photons in 22 s, 7 photons in 22 s and 6 photons in 37 s.
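These probabilities follow directly from the Poisson tail function; for example, for the 5-photon peak of the Source 1 flare (a short sketch using scipy, with all inputs taken from the numbers quoted above):

```python
from scipy.stats import poisson

rate = 1.9e-3                           # persistent rate (counts per second)
p_single = poisson.sf(4, rate * 22.0)   # P(>=5 photons in 22 s) ~ 1.0e-9
p_sample = p_single * 1.7e5             # 5-photon trials in survey ~ 1.7e-4
p_final = p_sample * 2.5                # multi-scale search factor ~ 4.3e-4
print(p_single, p_sample, p_final)
```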
Given the persistent count rates in each observation, and correcting both for the number of independent 9-photon, 6-photon and 7-photon trials (that is, scaling appropriately from the 1.7 × 10 5 independent 5-photon trials) and for the multi-burst search correction factor of 2.5, we calculate probabilities of 1.4 × 10 −5 , 7.1 × 10 −7 , 1.2 × 10 −6 and 9.0 × 10 −4 of a false flare detection for each of the four flares. Because we considered the XMM-Newton data only after having detected the flares in the Chandra data, the probability that the flare observed with XMM-Newton was falsely detected is 5.0 × 10 −8 , given the 113,000 s of total exposure on this source. When combined, this gives a probability of 5.4 × 10 −28 that all the flares were falsely detected.

X-ray light curves

Owing to the limited photon statistics for the flare in Source 1, only a crude X-ray light curve was obtained, by binning photons in groups of five and determining the count rate over the interval in which each group of five photons was collected (Fig. 1). The four individual 5-photon-bin Chandra light curves for Source 2 showed similar timing behaviour (Fig. 2), which gave us confidence to combine them into one light curve. For each flare, we determined the average arrival time of the first three photons of the flare and set this to ‘time zero’. Thus, photons before the flare were assigned a negative time value. The four photon lists were then combined at ‘time zero’ to provide a combined photon list. Photons were then binned in groups of ten to calculate count rates during the time period over which the ten photons were collected. The count rates were divided by four to give the average count rate per time bin per flare. Because the fourth Chandra epoch (ObsID 10723) was very short and does not extend from −40,000 s to 40,000 s around the start of the flare, we corrected the count rate accordingly to account for the temporal coverage of this epoch. The count rates were corrected for the loss of photons outside the extraction region due to the PSF, and for the expected background (although negligible during the flare, the background accounted for 14% of the emission during persistent periods). The combined light curve for Source 2 is shown in Fig. 2. A sharp rise at the beginning of the flare was followed by a flat ultraluminous state for about 200 s. The improvement in statistics afforded by combining the four light curves traces the decay in flux out to about 4,000 s. Following the flare, the count rate of the source was remarkably consistent with the pre-flare count rate.

Spectral fitting and source luminosities

For Source 1, we extracted a combined spectrum for the pre- and post-flare periods using the CIAO tool specextract . Background was collected from a source-free region surrounding our source. Using XSPEC v12.8, the background-subtracted spectrum was fitted with a power-law model absorbed by the Galactic column density in the direction of NGC 4636 ( N H = 1.8 × 10 20 cm −2 ) 25 , using the tbabs absorption model. Only energy channels in the range 0.5–6.0 keV were considered in the fit. The spectrum was grouped to contain at least one count per channel, and the C-statistic was used in the fit. A best-fit power-law photon index of 1.6 ± 0.3 (90% uncertainty) was found. This fit implies an unabsorbed luminosity of (7.9 ± 0.8) × 10 38 erg s −1 during the persistent state (all luminosities reported below have also been corrected for absorption).
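Stepping back to the light-curve construction described above, a compact sketch of the align-and-stack procedure might look as follows (Python, our own reconstruction; the PSF, background and temporal-coverage corrections applied in the paper are omitted for brevity):

```python
import numpy as np

def combine_flares(epochs, n_bin=10):
    """Align repeated flares at 'time zero' (mean arrival time of the first
    three flare photons) and form an adaptively binned combined light curve.

    epochs: list of (times, i_flare) pairs, where times are sorted photon
    arrival times (s) for one epoch and i_flare indexes the first flare
    photon in that epoch.
    """
    shifted = []
    for times, i_flare in epochs:
        t = np.asarray(times, dtype=float)
        t0 = t[i_flare:i_flare + 3].mean()   # 'time zero' for this epoch
        shifted.append(t - t0)               # pre-flare photons go negative
    t_all = np.sort(np.concatenate(shifted))
    edges = t_all[::n_bin]                   # every n_bin photons -> one bin
    rates = n_bin / np.diff(edges) / len(epochs)   # mean rate per epoch
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, rates
```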
Because the flare period contained only 25 photons, the flare spectrum was poorly constrained (Γ = 1.6 ± 0.7). This fit led to a peak luminosity during the first 22 s of the flare that was a factor of about 120 greater than during the persistent periods combined. Freeing the absorption did not substantially change the fit. Fitting the flare with a disk blackbody model gave a slightly worse fit and a luminosity 30% less than that derived from the power-law fit. For Source 2, we combined the spectra from the flare periods of the four Chandra observations into one spectrum using specextract. The same was done for the pre- and post-flare periods combined. The best-fit power-law photon indices for the persistent and flare periods, assuming a Galactic column density in the direction of NGC 5128 (8.6 × 10^20 cm^−2) 25, were 1.0 ± 0.2 and 1.2 ± 0.3 (90% uncertainty), respectively. Again, this indicates no significant change in the spectral shape during the flare. These spectral models implied a persistent luminosity of (4.4 ± 0.3) × 10^37 erg s^−1 and a peak flare luminosity about 200 times higher, an increase realized in less than a minute. When we split the flare period into the flat ultraluminous (first 200 s) and decay (200–4,000 s) times, we also found no significant spectral evolution. We allowed the Galactic column density N_H to vary in the fits and found a softer photon index (Γ = 1.6 ± 0.6 in the persistent state and Γ = 1.3 ± 0.7 during the flare), with the column density unconstrained below 5 × 10^21 cm^−2 during the flare (90% uncertainties for two interesting parameters). In both instances, freeing the absorption changed the unabsorbed X-ray luminosity by <10%. The source does not reside in the dust lane of NGC 5128, so this excess absorption, if real, might be intrinsic to the source. We also fitted the flare spectrum with a disk blackbody model with N_H fixed at the Galactic value and found a best-fit temperature with a goodness-of-fit comparable to that of the power-law model and a luminosity 20% below that derived from the power-law fit. For the XMM-Newton observation, the spectrum and response files were generated using the standard SAS tasks evselect, backscale, arfgen and rmfgen. Because the count rate during the pre- and post-flare time period is dominated by background (owing to the much larger extraction region compared to Chandra and to higher background rates), we did not extract a spectrum for the persistent period. We extracted the background-subtracted flare spectrum in a 30″ region around the source and fitted it with the absorbed power law described above for the Chandra observations. The slope of the power law was poorly constrained (Γ = 1.5 ± 0.5) owing to the low number of photons detected in the flare, but was consistent with the fit from the co-added Chandra spectrum. The peak luminosity of the flare was again consistent with the Chandra flares. Probability of the flare and persistent emission being from two different sources We have assumed that the persistent and flare emission emanate from a single source within the globular cluster hosts of Sources 1 and 2, but it is possible that two separate sources in the same cluster are responsible for the emission. The probability that a globular cluster hosts an X-ray binary of a particular X-ray luminosity depends on the luminosity of the source 26 and the properties of the globular cluster, such as its mass, concentration and metal abundance 27,28.
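For readers reproducing these fits, the workflow described above (tbabs*powerlaw, 0.5–6.0 keV, C-statistic, frozen Galactic column) maps naturally onto PyXspec. The sketch below is our own illustration under stated assumptions, not the authors' script: the file name is a placeholder, the spectrum is assumed to be background-subtracted and grouped to at least one count per channel, and obtaining the truly unabsorbed flux requires one further step (for example, zeroing nH after the fit before recomputing the flux, or adding a cflux component).

```python
# A hedged PyXspec sketch of the absorbed power-law fit described above.
import math
import xspec

xspec.Fit.statMethod = "cstat"                    # C-statistic, as in the text

spec = xspec.Spectrum("source1_persistent.pha")   # hypothetical file name
spec.ignore("**-0.5 6.0-**")                      # keep 0.5-6.0 keV only

model = xspec.Model("tbabs*powerlaw")
model.TBabs.nH = 0.018                            # 1.8e20 cm^-2 in XSPEC units of 1e22
model.TBabs.nH.frozen = True                      # Galactic column held fixed
xspec.Fit.perform()

photon_index = model.powerlaw.PhoIndex.values[0]

# Convert the fitted 0.5-6.0 keV flux to a luminosity with d = 14.3 Mpc
xspec.AllModels.calcFlux("0.5 6.0")
flux = spec.flux[0]                               # erg cm^-2 s^-1 (absorbed)
d_cm = 14.3e6 * 3.086e18                          # Mpc -> cm
luminosity = 4.0 * math.pi * d_cm**2 * flux
print(photon_index, luminosity)
```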
From previous work 27, the expected number of X-ray sources more luminous than 3.2 × 10^38 erg s^−1 in a globular cluster scales with the cluster's mass M, stellar encounter rate Γ_h, half-light radius r_h and metallicity Z. The globular cluster hosting Source 1 has photometry in Kron–Cousins R-band and Washington C-band filters; R = 23.02 and C − R = 1.94. This colour corresponds to a photometrically derived metallicity of [Z/H] = −0.08 dex (Z = 0.8 Z⊙) 29. Using a single population model 30 with a Kroupa initial mass function, a 13-Gyr age and [Z/H] = −0.08 dex, an R-band M/L = 4.1 is expected for this cluster. Given the distance to NGC 4636 (d = 14.3 Mpc), the R-band M/L referenced above, and a solar R-band magnitude M_R⊙ = 4.42, we estimate a globular cluster mass of 3.0 × 10^5 M⊙. Because we do not have a size measurement for this globular cluster, we conservatively estimate a minimum size of 1.5 pc, which is the 3σ lower limit based on a survey of globular clusters in the Virgo cluster 31. With these values, we estimate that the globular cluster is expected to have 0.017 X-ray binaries with luminosities of more than 3.2 × 10^38 erg s^−1. To determine the number of X-ray binaries that are expected above the observed persistent X-ray luminosity of Source 1, we apply the X-ray luminosity function in globular clusters found in a previous study 26, which predicts that the Source 1 persistent luminosity (8 × 10^38 erg s^−1) is ten times less likely to be found in a globular cluster than a 3.2 × 10^38 erg s^−1 source. This leads to an estimate of 0.0017 X-ray sources equal to or more luminous than Source 1. Therefore, after having found a flaring source, the probability that the persistent emission comes from a different X-ray binary in this cluster is <0.17%. If we conservatively assume that the predicted number of X-ray binaries could be 50% higher (approximately convolving all of the uncertainty sources), then the probability is <0.24%. The globular cluster or ultracompact dwarf galaxy hosting Source 2 has a spectroscopically determined metallicity of [Z/H] = −0.85 dex (Z = 0.14 Z⊙) 32. The derived stellar mass 12 of the source is 3.1 × 10^6 M⊙. Given its size 12 of 7 pc, and correcting for the luminosity function 26 (which predicts that a 4 × 10^37 erg s^−1 source is ten times more likely to be found in a globular cluster than a 3.2 × 10^38 erg s^−1 source), we estimate that the globular cluster is expected to have 0.064 X-ray binaries with luminosities of more than 4 × 10^37 erg s^−1. Therefore, after having found a flaring source, the probability that the persistent emission comes from a different X-ray binary in this cluster is <6.4%. If we conservatively assume that the predicted number of X-ray binaries could be 50% higher (approximately convolving all of the uncertainty sources), then the probability is <9.1%. This might be an overestimate given that ultracompact dwarfs appear to harbour X-ray sources at a lower rate than globular clusters 33. Even in the most conservative case, the combined probability that both flares arise from sources different from the persistently emitting sources is <1.5 × 10^−4. For both sources, we determined their positions separately during the flare phase and the persistent phase and found no statistical difference within the positional uncertainties. However, this is not highly constraining given the large PSF of Chandra at the off-axis location of the flares.
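The cluster-mass estimate quoted above follows directly from the photometry. The sketch below reproduces it using only the numbers given in the text (R = 23.02, d = 14.3 Mpc, R-band M/L = 4.1 and M_R⊙ = 4.42) and standard distance-modulus arithmetic.

```python
# Reproducing the globular-cluster mass estimate from the quoted photometry.
import math

R_apparent = 23.02       # Kron-Cousins R-band apparent magnitude
d_pc = 14.3e6            # distance to NGC 4636 in parsecs
M_R_sun = 4.42           # solar R-band absolute magnitude
mass_to_light_R = 4.1    # R-band M/L from the single population model

dist_modulus = 5.0 * math.log10(d_pc / 10.0)   # ~30.78 mag
M_R = R_apparent - dist_modulus                # absolute R magnitude
L_R = 10.0 ** (-0.4 * (M_R - M_R_sun))         # luminosity in L_sun
mass = mass_to_light_R * L_R                   # ~3.0e5 M_sun, matching the text
print(f"{mass:.1e} M_sun")
```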
Code availability The code used to find X-ray flares is available at . | This image shows the location of a remarkable source that dramatically flares in X-rays unlike any ever seen. Along with another similar source found in a different galaxy, these objects may represent an entirely new phenomenon, as reported in our latest press release. These two objects were both found in elliptical galaxies, NGC 5128 (also known as Centaurus A) shown here and NGC 4636. In this Chandra X-ray Observatory image of NGC 5128, low, medium, and high-energy X-rays are colored red, green, and blue, and the location of the flaring source is outlined in the box to the lower left. Both of these mysterious sources flare dramatically - becoming a hundred times brighter in X-rays in about a minute before steadily returning to their original X-ray levels about an hour later. At their X-ray peak, these objects qualify as ultraluminous X-ray sources (ULXs) that give off hundreds to thousands of times more X-rays than typical X-ray binary systems where a star is orbiting a black hole or neutron star. Five flares were detected from the source located near NGC 5128, which is at a distance of about 12 million light years from Earth. A movie showing the average change in X-rays for the three flares with the most complete Chandra data, covering both the rise and fall, is shown in the inset. The source associated with the elliptical galaxy NGC 4636, which is located about 47 million light years away, was observed to flare once. This Chandra X-ray Observatory image shows the galaxy NGC 5128 with its hot gas and many X-ray point sources. The circled source flared by a factor of 200 in less than a minute on multiple occasions. Credit: NASA/CXC/U.Birmingham/M.Burke et al. The only other objects known to have such rapid, bright, repeated flares involve young neutron stars such as magnetars, which have extremely powerful magnetic fields. However, these newly flaring sources are found in populations of much older stars. Unlike magnetars, the new flaring sources are likely located in dense stellar environments, one in a globular cluster and the other in a small, compact galaxy. When they are not flaring, these newly discovered sources appear to be normal binary systems where a black hole or neutron star is pulling material from a companion star similar to the Sun. This indicates that the flares do not significantly disrupt the binary system. While the nature of these flares is unknown, the team has begun to search for answers. One idea is that the flares represent episodes when matter pulled away from a companion star falls rapidly onto a black hole or neutron star. This could happen when the companion makes its closest approach to the compact object in an eccentric orbit. Another explanation could involve matter falling onto an intermediate-mass black hole, with a mass of about 800 times that of the Sun for one source and 80 times that of the Sun for the other. This result is described in a paper appearing in the journal Nature on October 20, 2016. The authors are Jimmy Irwin (University of Alabama), Peter Maksym (Harvard-Smithsonian Center for Astrophysics), Gregory Sivakoff (University of Alberta), Aaron Romanowsky (San Jose State University), Dacheng Lin (University of New Hampshire), Tyler Speegle, Ian Prado, David Mildebrath (University of Alabama), Jay Strader (Michigan State University), Jifeng Liu (Chinese Academy of Sciences), and Jon Miller (University of Michigan). | 10.1038/nature19822
Medicine | Dried samples of saliva and fingertip blood are useful in monitoring responses to coronavirus vaccines | Laura Lahdentausta et al, Blood and saliva SARS-CoV-2 antibody levels in self-collected dried spot samples, Medical Microbiology and Immunology (2022). DOI: 10.1007/s00430-022-00740-x | https://dx.doi.org/10.1007/s00430-022-00740-x | https://medicalxpress.com/news/2022-06-dried-samples-saliva-fingertip-blood.html | Abstract We examined the usefulness of dried spot blood and saliva samples in SARS-CoV-2 antibody analyses. We analyzed 1231 self-collected dried spot blood and saliva samples from healthcare workers. Participants filled in a questionnaire on their COVID-19 exposures, infections, and vaccinations. Anti-SARS-CoV-2 IgG, IgA, and IgM levels were determined from both samples using the GSP/DELFIA method. The level of exposure was the strongest determinant of all blood antibody classes and saliva IgG, increasing as follows: (1) no exposure (healthy, non-vaccinated), (2) exposed, (3) former COVID-19 infection, (4) one vaccination, (5) two vaccinations, and (6) vaccination and former infection. While the blood IgG assay had a 99.5% sensitivity and 75.3% specificity to distinguish participants with two vaccinations from all other types of exposure, the corresponding percentages for saliva IgG were 85.3% and 65.7%. Both blood and saliva IgG-seropositivity proportions followed similar trends to the exposures reported in the questionnaires. Self-collected dried blood and saliva spot samples combined with the GSP/DELFIA technique comprise a valuable tool to investigate an individual's immune response to SARS-CoV-2 exposure or vaccination. Saliva IgG has high potential for monitoring the waning of the vaccination response, since the sample is non-invasive and easy to collect. Introduction Serological assays are useful in investigating an individual's immune status and response to vaccinations as well as in obtaining epidemiological information on the spread of the infection. Especially after the implementation of vaccination programs, serological surveys of large populations are essential in evaluating the level and duration of antibody responses [1]. Most serological tests are based on the detection of the IgG and IgM antibodies that recognize the SARS-CoV-2 nucleoprotein (N), the viral spike glycoprotein S1 subunit, or its receptor-binding domain (RBD), but some applications have also been developed to detect IgA antibodies against these antigens [2, 3, 4, 5]. Most SARS-CoV-2 vaccines are designed to use the viral spike glycoprotein or part of it as the immunogen. Over 90% of subjects start to develop IgG antibodies to SARS-CoV-2 antigens 10–11 days after the onset of symptoms [4, 6, 7]. Serum IgM and IgA levels elevate synchronously with or slightly earlier than IgG, and their seroconversion occurs between days 6 and 15 [8, 9]. A stronger exposure and the severity of the COVID-19 infection are associated with higher antibody levels and a longer durability of antibodies [9, 10]. Mucosal immunity is crucial in limiting respiratory infections. In the oral cavity, IgG- and IgM-class antibodies mainly diffuse from the blood circulation into gingival crevicular fluid and further into the saliva. IgA antibodies are produced in the mucosa, and they are responsible for the early humoral immunity against SARS-CoV-2, neutralizing the virus [11, 12]. Serum and saliva anti-SARS-CoV-2 antibodies display similar temporal kinetics [8].
Both serum and saliva IgG antibodies are detected up to 9 months after COVID-19 infection, whereas IgA and IgM antibodies decline more rapidly [8, 13]. A significant decrease in serum IgG levels was observed 6 months after the second dose of vaccination, reflecting the superior long-term humoral response after natural infection compared to the vaccine-induced response [14]. Sample collection that is easily performed and does not require a laboratory setting is essential for large-scale population screening. Dried blood spot (DBS) samples have been reported to be a valid alternative to plasma/serum collection for anti-SARS-CoV-2 IgG detection, as the antibody levels measured from DBS samples correlate with the levels detected from traditional serum/plasma samples [15, 16, 17, 18]. Saliva is an easily collectable, non-invasive sample material suitable for anti-SARS-CoV-2 IgG, IgA, and IgM detection [8]. However, dried saliva spot samples have not been utilized in SARS-CoV-2 diagnostics. The aim of our study was to explore the usefulness of self-collected DBS and dried saliva spot (DSS) samples in the analysis of the immune response against SARS-CoV-2. IgG-, IgM-, and IgA-class antibodies were detected from the dried spot samples collected from 1200 healthcare professionals to find out whether DBS and DSS samples can be used to detect an antibody response caused by either natural infection or vaccines. In particular, we were interested to investigate whether saliva is applicable in the antibody analyses. Materials and methods The study population comprised healthcare and social workers, who were recruited between January and March 2021 in the Uusimaa region of Southern Finland. During the COVID-19 epidemic in Finland, 78,565 laboratory-confirmed cases (1.4% of the population) were registered from January 3, 2020 to March 31, 2021 [19]. The epidemic situation was worst in the southern part of Finland, which has the largest population and highest population density in the country. SARS-CoV-2 vaccinations started in Finland at the beginning of 2021, with critical healthcare workers and risk groups. The participants worked either in specialized care at the Helsinki University Hospital (HUS) or in primary and social care for the City of Helsinki (HEL). The inclusion criterion for participation was an age of at least 18 years. Consent for participation was given using Suomi.fi e-services with strong identification. The participants were recruited through work mailing lists and the intranet. In the case of HUS, the participants of the present study were restricted to those who had participated in the previous questionnaire study concerning COVID-19 exposure among HUS personnel (n = 866), representing a random sample [20]. The list of different occupations among the participants is presented in Supplementary Table 1. The study was conducted according to the guidelines of the Declaration of Helsinki and the study design was approved by the local ethical committees of the Helsinki University Hospital and the City of Helsinki (HUS/1450/2020, HUS/157/2020, HUS/182/2021, HEL 2020-007596T 13 02 01). Sampling sets for self-collection of blood and saliva samples were delivered to the participants either at their home address or at their workplace. Each sampling set included a PerkinElmer 226 Sample Collection Card designed for dried blood spot (DBS) collection, and equipment and instructions for self-administered saliva and blood sampling.
Participants filled in the electronic background questionnaire on their exposures, COVID-19 infections, and vaccinations, and returned the sample card either by mail or to their workplace to be delivered further to the laboratory. Vaccines reported by the participants included Comirnaty (Pfizer-BioNTech), COVID-19 Vaccine AstraZeneca/Vaxzevria (Oxford-AstraZeneca), and Spikevax (Moderna). Altogether, 816 sample cards from HEL and 415 sample cards from HUS were analyzed; 51 sample cards (3.9%) were excluded from further analyses due to insufficient sample material, technical problems, an incompletely filled study number or background questionnaire, or cancelation of study consent by the participant. Self-collection of saliva and blood samples Participants followed detailed written and illustrated instructions for self-collection. Participants were advised to perform both saliva and blood sampling before 10 am and were asked not to eat, drink, or brush their teeth for 1 h prior to salivary sampling. Collection cards comprised five equal circles printed in a row, and the middle circle was left empty to prevent sample mixing. Two circles printed on the sample collection card were filled with drops of blood drawn from a fingertip with a lancet. Non-stimulated saliva was collected by passive drooling into a plain 15 ml Falcon tube. Drops of saliva were applied on two circles of the collection card using a transfer pipette. After the cards were dried (3–4 h) at room temperature, they were sealed in envelopes and delivered to the laboratory, where they were stored at −20 °C prior to analysis. Antibody analysis Samples were analyzed at PerkinElmer Wallac Oy, Turku, Finland with a fully automated solid-phase DELFIA (time-resolved fluorescence) immunoassay. Punches of 3.2 mm diameter containing approximately 3 μl of blood or saliva were cut from the collection cards into the wells of an assay plate with a DBS Puncher (PerkinElmer Wallac Oy). The sample plate was analyzed with a GSP™ instrument (PerkinElmer Wallac Oy). IgG antibodies against the SARS-CoV-2 spike S1 protein were detected using commercial GSP/DELFIA SARS-CoV-2 IgG kits (PerkinElmer Wallac Oy). IgA and IgM antibodies against the SARS-CoV-2 spike S1 protein were detected using custom-made secondary antibodies and the same kit and GSP protocol as for IgG. The GSP protocol determines fluorescence as counts that are proportional to the amount of human anti-SARS-CoV-2 IgG/IgA/IgM in the sample. Anti-SARS-CoV-2 IgG results were also reported as ratios, which were calculated by dividing the sample signal by the average signal of the calibrator samples provided in the kit. The cut-off value for anti-SARS-CoV-2 IgG in DBS determined by the manufacturer is 1.4. Cut-off values for the other antibody classes and DSS samples have not been determined. Comparison of DBS, DSS, and wet saliva samples in a pilot sample Six volunteers with either a previous COVID-19 infection or one or two vaccinations collected DBS and DSS samples as described above to test the performance of dried saliva spot samples compared to wet saliva samples. In addition, they stored the rest of the non-stimulated saliva (i.e., wet saliva samples) at −20 °C for further analysis. All samples were collected within 1 week. Anti-SARS-CoV-2 IgG, IgM, and IgA levels were measured from DBS, DSS, and thawed saliva samples with the GSP instrument.
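The ratio reported above is simply the sample signal normalized by the mean calibrator signal. A minimal sketch follows; the counts are hypothetical, and only the DBS IgG cut-off of 1.4 comes from the text (the DSS cut-off of 0.14 is derived later in the Results).

```python
# Sketch of the IgG ratio and seropositivity call described above.
import numpy as np

def igg_ratio(sample_counts, calibrator_counts):
    """Sample fluorescence counts divided by the mean calibrator counts."""
    return sample_counts / np.mean(calibrator_counts)

calibrators = np.array([5200.0, 4900.0, 5100.0])   # hypothetical calibrator signals
sample = 9800.0                                     # hypothetical DBS sample signal

ratio = igg_ratio(sample, calibrators)
seropositive = ratio > 1.4                          # manufacturer's DBS IgG cut-off
print(f"ratio = {ratio:.2f}, seropositive = {seropositive}")
```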
For wet saliva analysis, 20 µl of saliva was applied to the wells of a sample plate of the GSP/DELFIA SARS-CoV-2 IgG kit (PerkinElmer Wallac Oy) instead of a paper punch. In addition, one volunteer collected a time series of DBS, DSS, and wet saliva samples after receiving the first dose of vaccine. Statistical analyses The antibody levels exhibited a skewed distribution, and they were logarithmically transformed before the statistical analyses. Differences between the groups were analyzed using ANOVA combined with the LSD post hoc test or the t test. Linear trends were examined using weighted linear terms of ANOVA. Correlations were examined either by Pearson or Spearman analysis, depending on the number of observations. The associations were analyzed with linear regression models, using the logarithmically transformed antibody levels as dependent variables, age, sex, BMI, and smoking as confounding factors, and the level of exposure as the independent variable. Each category of the level of exposure was binary coded using the healthy, non-exposed group as the reference. Receiver-operating characteristic (ROC) analysis was used to examine the clinical performance of the assays. Area-under-curve (AUC) values, and sensitivities and specificities using specified cut-off levels, are reported. All statistical analyses were performed with SPSS statistical software (IBM). Results The performance of dried saliva spot (DSS) and wet saliva samples compared to dried blood spot (DBS) samples was tested in a pilot study with six volunteers (Supplementary Fig. 1). The levels of blood anti-SARS-CoV-2 IgG were approximately 10 to even 100 times higher than those of DSS and wet saliva samples, while the differences were more moderate for IgM and IgA. IgG (r = 0.71) and IgM (r = 0.94) of dry and wet saliva samples exhibited a strong correlation (Spearman's rho), whereas IgA (r = −0.086) did not. The levels of blood IgG, IgM, and IgA started to increase 7 days after the vaccination. Saliva IgG levels increased only moderately after vaccination, whereas IgM and IgA counts remained at the baseline level (Supplementary Fig. 2). A total of 1231 persons participated in the study; 816 (66.3%) worked in HEL and 415 (33.7%) in HUS. Their characteristics are presented in Table 1. Mean age did not differ significantly between the groups, but the HUS population included more middle-aged (40–59 years) participants. The gender distribution was similar in both groups, with approximately 10% male participants. The occupational groups differed between the populations; this was one of the reasons to include both city and hospital district workers in the study. In total, nurses were the largest group (60.7%), followed by physicians (9.0%), therapists (8.9%), social workers (7.8%), dental care professionals (7.5%), and those in administrative or maintenance work (6.2%). The different occupations under each of the six groups are listed in Supplementary Table 1. Table 1 Characteristics of the population In the study group of healthcare workers, IgG and IgM of DBS and DSS samples correlated relatively well, with r = 0.673 (p < 0.001) and r = 0.293 (p < 0.001), respectively, whereas IgA in DBS and DSS samples correlated only weakly (r = 0.067, p = 0.025) (Fig. 1). The antibody levels of the study group are presented in Supplementary Table 2. Fig. 1 Correlation between saliva and blood anti-SARS-CoV-2 antibody levels. The logarithmically transformed antibody counts are presented.
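As an illustration of the regression design described above (log-transformed antibody levels regressed on binary-coded exposure categories with the healthy group as reference, adjusting for age, sex, BMI and smoking), the sketch below uses statsmodels on a synthetic data frame. All column names and values are hypothetical, and the original analyses were run in SPSS.

```python
# A hedged sketch of the linear-regression analysis described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "blood_igg": rng.lognormal(mean=8.0, sigma=1.0, size=n),  # fluorescence counts
    "age": rng.integers(20, 65, size=n),
    "sex": rng.choice(["F", "M"], size=n, p=[0.9, 0.1]),
    "bmi": rng.normal(25, 4, size=n),
    "smoking": rng.choice([0, 1], size=n),
    # exposure coded as a category with 'healthy' as the reference level
    "exposure": pd.Categorical(
        rng.choice(["healthy", "exposed", "infected", "vacc1", "vacc2"], size=n),
        categories=["healthy", "exposed", "infected", "vacc1", "vacc2"]),
})
df["log_igg"] = np.log10(df["blood_igg"])   # antibody levels are log-transformed

model = smf.ols("log_igg ~ C(exposure) + age + C(sex) + bmi + smoking",
                data=df).fit()
print(model.summary())                      # betas per exposure group vs 'healthy'
```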
Each figure shows one antibody class with corresponding antibodies measured from blood (DBS) and saliva (DSS). The correlation coefficient (r) and p value from Pearson correlation analyses are shown. The numbers of participants are: IgG, n = 1186; IgM, n = 1077; IgA, n = 1079. The exposure level was determined according to the questionnaire data collected from participants and divided into the following groups: (1) healthy = non-infected, non-exposed, non-vaccinated (n = 350, 31.0%); (2) exposed = registered as ‘negative qPCR result’, ‘quarantine due to exposure’, or ‘exposure in the family’, non-vaccinated (n = 381, 33.9%); (3) former COVID-19 infection (n = 57, 5.1%); (4) vaccinated once (n = 115, 10.2%); (5) vaccinated twice (n = 204, 18.1%); and (6) former COVID-19 infection and vaccinated once or twice (n = 19, 1.7%). The characteristics did not differ between the exposure groups (Supplementary Table 3). The median antibody concentrations in the different exposure groups are presented in Fig. 2. Blood antibody levels and saliva IgG displayed significant (p < 0.001) increasing trends among the participants with different exposure levels, whereas saliva IgM and IgA did not. Blood IgG and IgM as well as saliva IgG levels were significantly higher in all exposure groups compared to the “healthy” participants. Compared to the “healthy” group, blood IgA levels were significantly higher in all groups other than “exposed”. The associations of the antibody levels with the different levels of exposure were examined using linear regression models (Table 2). For blood IgG (R² = 0.146), IgM (R² = 0.065), and IgA (R² = 0.014), as well as saliva IgG (R² = 0.057), the exposure level was the main determinant of the antibody concentration. Age was inversely associated with blood IgG and IgM as well as saliva IgM and IgA, whereas sex was not associated with any of the antibody concentrations. Fig. 2 SARS-CoV-2 antibody levels in the groups with various levels of exposure. IgG-, IgM-, and IgA-class antibody levels were determined from the dried spot blood and saliva samples. Median levels with IQR are shown. The groups are: (1) healthy = non-infected, non-exposed, without vaccination; (2) exposed = registered as ‘negative qPCR result’, ‘quarantine due to exposure’ or ‘exposure in the family’; (3) former COVID-19 infection; (4) vaccinated once; (5) vaccinated twice; (6) former COVID-19 infection and vaccinated once or twice. The levels were logarithmically transformed for statistical testing. The p values shown are from the ANOVA test for the significance of the differences between the groups. The stars depict the level of significance compared to the healthy group as produced by the LSD test, *p < 0.05; **p < 0.01; ***p < 0.001. Table 2 Associations of the antibodies with the level of exposure In participants with a former COVID-19 infection, the median time from diagnosis to sampling was 38.6 weeks (IQR 32.6, range 4.0–54.4 weeks) (Supplementary Fig. 3). None of the antibody levels had a significant correlation with the time since infection (data not shown). Next, we analyzed the effect of time since vaccination on antibody levels (Fig. 3). Blood IgG levels increased with a significant linear trend (p < 0.001) until 10 weeks after the first vaccination, whereas no significant changes were observed after the second vaccination.
No significant trends were observed in blood IgM and IgA levels after the first vaccination, whereas both exhibited linearly decreasing trends after the second vaccination (p < 0.001 and p = 0.020, respectively). Saliva IgA (p = 0.031) and IgM (p = 0.025) decreased linearly after the first vaccination, and all saliva antibody levels decreased after the second vaccination (p for weighted trend: IgG < 0.001, IgM 0.005, and IgA 0.041). Fig. 3 Effect of time after vaccination on the antibody levels. A Blood antibody levels and B saliva antibody levels. Mean values with SE are shown, and the p values are produced by the LSD test. Lines on the left side represent the mean values of healthy, unexposed participants for reference. The performance of the assays in distinguishing the exposure groups was further investigated using ROC analyses (Supplementary Table 4). All blood antibody assays and saliva IgG successfully differentiated participants with vaccination and/or former COVID-19 infection from healthy participants (p < 0.001). These assays also presented highly significant (p < 0.001) AUC values when vaccinated participants were detected among the entire study group with different levels of exposure (Fig. 4). Fig. 4 Performance of DBS and DSS determinations to detect vaccinated participants. ROC analyses were performed for DBS-IgG, IgM, and IgA, and DSS-IgG. The comparisons were made between A participants who were vaccinated vs. others, and B participants who were vaccinated twice vs. others. Using the seropositivity cut-off value of the blood IgG ratio defined by the manufacturer, we determined the cut-off level for saliva IgG: the best performance of the saliva assay (DSS) was obtained with a saliva IgG ratio of 0.14, resulting in a 70.0% sensitivity and 75.5% specificity in detecting seropositive participants. The true-positive rates of DBS and DSS seropositivities were calculated to detect participants with former COVID-19 infection, vaccination, and two doses of vaccines (Table 3). The DBS assay had a 99.5% sensitivity and 75.3% specificity in finding participants with two vaccinations, and the corresponding percentages for DSS were 85.3% and 65.7%. Table 3 Calculation of true-positive rates for DBS- and DSS-IgG seropositive values The proportion of DBS-seropositive subjects increased (p < 0.001) among the healthy, exposed, infected, vaccinated once, and vaccinated twice groups as follows: 10.6%, 18.6%, 75.4%, 76.3%, and 99.0% (Fig. 5A). The corresponding proportions of DSS-seropositive subjects were 31.7%, 34.4%, 41.8%, 39.8%, and 85.3% (Fig. 5B). The exposure differed between occupational groups according to both the questionnaire (Fig. 5C) and the proportions of seropositivities (Fig. 5D and E). Fig. 5 Reported exposures and seropositivity in DBS and DSS measurements. Exposure is defined as healthy, exposed, vaccinated once or twice, and confirmed COVID-19 infection. DBS-seropositivity is determined as a blood IgG ratio exceeding the cut-off value of 1.4. DSS-seropositivity is determined as a saliva IgG ratio exceeding the cut-off value of 0.14. The proportion of seropositivity in DBS and DSS determinations is presented in A–B different exposure groups, and D–E different occupational groups. The p value is produced by the Chi-square test. Discussion We demonstrated that self-administered dried spot blood (DBS) and saliva (DSS) samples analyzed with the GSP/DELFIA system can be utilized in analyzing individual immune responses to SARS-CoV-2.
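The saliva cut-off of 0.14 above was chosen from ROC analysis. One standard way to pick such a cut-off is to maximize Youden's J (sensitivity + specificity − 1), sketched below on synthetic scores; we note this criterion as an assumption, since the paper does not state which one was used.

```python
# Sketch of cut-off selection from a ROC curve via Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=500)                   # 1 = vaccinated twice (hypothetical)
y_score = np.where(y_true == 1,
                   rng.lognormal(-1.2, 0.6, size=500),  # higher saliva IgG ratios
                   rng.lognormal(-2.2, 0.6, size=500))

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                           # Youden's J statistic
best = np.argmax(j)
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
print(f"cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```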
All three antibody classes in DBS samples, IgG, IgM and IgA, and IgG in DSS samples were able to distinguish infected and/or vaccinated individuals from the healthy, non-exposed, non-vaccinated group. Importantly, all blood assays and saliva IgG identified vaccinated participants from the whole population: using the DBS-IgG seropositivity cut-off level determined by the manufacturer, the assay had a sensitivity of 99.5% in differentiating twice-vaccinated participants, whereas the corresponding cut-off level for DSS-IgG determined in this study had an 85.3% sensitivity. Thus, DBS samples would be highly useful in following immune responses after SARS-CoV-2 vaccination. Since saliva collection is easy and non-invasive, DSS also offers good potential in the follow-up of immune responses to determine, e.g., the need for additional doses of vaccinations. All antibody classes could be measured from DBS samples. IgG-class antibodies comprise 70–75% of all immunoglobulins found in the blood, and as expected, SARS-CoV-2 IgG analyses gave the highest counts in the analyses. In addition, IgA (10–15% of all Igs) and IgM (5% of all Igs) could be measured from DBS samples. IgM and IgA usually disappear from the circulation with viral clearance and are not recommended for follow-up after infection, but they may help to elucidate the clinical picture in patients [21]. The performance of a serological assay may improve if different immunoglobulin classes are combined [5]. Thus, further studies using DBS samples are warranted among acute- or convalescent-phase patients. Saliva contains IgG- and IgM-class antibodies diffused from blood and secretory IgA produced on the oral mucosa. In general, the measured antibody levels in saliva are approximately 10–100 times lower than in the blood [22, 23], which was also shown in our pilot sample comparing wet and dry saliva samples to dry blood samples. The low antibody levels in saliva create a challenge with DSS samples, since only a small sample volume is used in the measurements. Nevertheless, our results showed a relatively good correlation between blood and saliva anti-SARS-CoV-2 IgG levels. In addition, saliva IgM levels correlated significantly with blood IgM levels, but the correlation between blood and saliva IgA levels was only modest. This most likely reflects the different origins of blood and saliva IgA. The usefulness of DSS-IgA was not fully clarified here due to the study design, but its performance would be interesting to test among acute-phase COVID-19 patients or subjects who received oral/mucosal vaccines. Only 1% of twice-vaccinated participants remained IgG seronegative in the analyses, whereas 25% of infected persons were seronegative. This phenomenon may be explained by the long time between infection and sample collection: the median time was 39 weeks, but ranged between 4 and 54 weeks. Typically, a good immune response against the SARS-CoV-2 spike protein is seen 10–14 days after the onset [9], but there are also individuals who do not seroconvert after SARS-CoV-2 infection [22, 23]. As neutralizing antibody responses and specific memory B cells have been described as remaining in the circulation for ≥8 months (32 weeks), previously infected individuals may harbor antibodies at this time point [21]. The immune response, however, declines by 12 months (48 weeks) after infection at the latest, and by 10 months (40 weeks) post-infection 13% of individuals had lost detectable IgG titers [24, 25].
We did not observe a declining trend in the antibody levels after COVID-19, and the seronegative subjects were also distributed randomly along the time axis. At the beginning of the COVID-19 pandemic, qPCR tests were restricted to patients admitted to hospital, and therefore not all assumed SARS-CoV-2 infections were qPCR verified. However, the antibody levels did not correlate with the time since infection even when only the infected participants with a positive qPCR test result were included. Thus, the sensitivity of DBS-IgG seropositivity in identifying participants with a former infection was only 75.4%, whereas the specificity of 85.2% was acceptable. IgG levels measured from DSS samples distinguished individuals with one or two doses of vaccines from the healthy, unexposed, and unvaccinated participants with significant AUCs, but importantly, the assay provided good and excellent performance in distinguishing vaccinated persons from the whole population (AUC 0.72 and 0.82, respectively). In the present study, we were able to perform follow-up only up to 8 weeks after the second dose of vaccination, and longer monitoring will be feasible in the future. mRNA vaccine-induced antibodies have been detected more than 6 months after vaccination [26]. Nevertheless, measuring the antibodies from easily collectable saliva samples to estimate the optimal time for a booster vaccination would save health care costs and prevent later breakthrough infections. The measured SARS-CoV-2 antibodies correlated well with the level of exposure. As expected, two doses of vaccine and natural infection combined with vaccination induced the highest antibody levels. With the BioNTech-Pfizer mRNA vaccine, it has been reported that antibody levels, especially IgA, were higher in individuals who had a positive COVID-19 history compared to those with a negative one [27, 28]. This reflects the secondary immune response and was also observed in the present study. Healthcare workers have been at higher risk of COVID-19 infection than the general population [29]. In our study, seropositivity was most frequent among nurses, physicians, and therapists, indicating that work-related transmission may have occurred to some extent. At the beginning of the COVID-19 epidemic, the Finnish government declared a state of emergency that continued until June 2020. During that time, a large proportion of non-urgent care was postponed, and remote patient contact was preferred. Especially in dentistry, only acute dental care was given at that time in the southern part of Finland. Studying the effects of recent exposures to COVID-19 was not possible due to the study design. Thus, it remains to be investigated whether natural infection is reflected short term in the saliva IgA and IgM levels. We recruited the participants through work mailing lists, and the occupational groups could not be selected beforehand. Therefore, the group sizes differ, and relatively few participants representing dental professionals, social workers, and administrative staff could be recruited, which may bias the results. Additionally, in this study, we did not have samples collected at different time points from the same person, and thus we can estimate the long-term antibody response only at the group level. In Finland, the second dose of SARS-CoV-2 vaccination was first given 3 weeks after the first dose. The vaccination policy changed quite soon, and the second dose was given 3 months after the first dose.
The changes in vaccination policy may be reflected in the results. The strength of our study is a relatively large sample size containing participants with different exposure levels: our data include information from healthy, exposed, infected, and once or twice vaccinated participants, enabling the comparison of antibody responses in different groups. Conclusions Our results indicate that self-collected dried blood and saliva spot samples can be used reliably in SARS-CoV-2 antibody analyses to measure the immune response to SARS-CoV-2 vaccination and monitor waning humoral immune response after vaccinations and natural infection. Both blood and saliva assays displayed excellent accuracy in differentiating high IgG levels after two doses of vaccination. Dried spot samples are easily collected at home, thus enabling large sample collection without requiring specialized personnel for sample taking. | In a study by the University of Helsinki and HUS Helsinki University Hospital, the levels of antibodies associated with the SARS-CoV-2 virus were analyzed in more than 1,200 employees in the social welfare and health care sector to determine whether there were differences in different antibody classes according to viral exposure. Dried saliva and blood samples collected between January and March 2021 were utilized in the study. The exposure and background data were collected using a questionnaire. Based on the results, immunoglobulin G (IgG) has a 99.5% sensitivity and 75.3% specificity to distinguish people with two vaccinations from non-exposed and exposed individuals, individuals with previous COVID-19 infection, and those with one vaccination. IgG measured from saliva also had an 85.3% sensitivity and 65.7% specificity in distinguishing people with two vaccinations from the other groups. The results of the study have been published in the Medical Microbiology and Immunology journal. Dentists had the lowest amount of antibodies against the virus The study attested to the exposure of social welfare and health care employees to coronavirus, visible in the results as elevated antibody levels. A total of 47.5% of nurses and 47.7% of doctors were seropositive due to either a previous infection, vaccination or exposure, whereas only 8.7% of dentists had been exposed to the virus. In addition to dentists, the lowest antibody levels were observed in administrative staff and social workers. The highest antibody levels in both the blood and saliva were found in those who had both had COVID-19 and had been vaccinated. The lowest antibody levels were seen in individuals who had not been exposed to the virus, who had not had a previous infection and who had not received a vaccination. "Against our expectations, there have been fewer coronavirus infections and instances of exposure among dental care staff compared to, for example, hospital doctors, even though a lot of aerosol-producing procedures are carried out close to the patient in dental care," researcher Laura Lahdentausta says. At the time of sample collection, the dental care staff had also received the lowest number of coronavirus vaccines, which was reflected in their antibody levels. New information on using saliva in antibody studies Another goal of the study was to develop research methods. In fact, the study provides important information on the use of saliva in the determination of antibody levels. "Based on the results, dried samples reliably reveal antibodies associated with the virus," says Professor Pirkko Pussinen. 
The benefit of dried saliva and fingertip blood samples is that they are easy to collect. Samples can be taken at home outside laboratory conditions. Their collection is inexpensive and, in the case of saliva samples, non-invasive. "In the future, this assay technique based on dried spot samples could be effectively utilized to monitor both the immune response produced by vaccination and the need for vaccines in large patient populations," Pussinen adds. | 10.1007/s00430-022-00740-x |
Biology | Researchers sequence first bedbug genome | Jeffrey A. Rosenfeld et al. Genome assembly and geospatial phylogenomics of the bed bug Cimex lectularius, Nature Communications (2016). DOI: 10.1038/ncomms10164 Journal information: Nature Communications | http://dx.doi.org/10.1038/ncomms10164 | https://phys.org/news/2016-02-sequence-bedbug-genome.html | Abstract The common bed bug (Cimex lectularius) has been a persistent pest of humans for thousands of years, yet the genetic basis of the bed bug's basic biology and adaptation to dense human environments is largely unknown. Here we report the assembly, annotation and phylogenetic mapping of the 697.9-Mb Cimex lectularius genome, with an N50 of 971 kb, using both long and short read technologies. An RNA-seq time course across all five developmental stages and male and female adults generated 36,985 coding and noncoding gene models. The most pronounced change in gene expression during the life cycle occurs after feeding on human blood and included genes from the Wolbachia endosymbiont, which shows a simultaneous and coordinated host/commensal response to haematophagous activity. These data provide a rich genetic resource for mapping activity and density of C. lectularius across human hosts and cities, which can help track, manage and control bed bug infestations. Introduction The common bed bug, Cimex lectularius, has been associated with humans for thousands of years 1. There are over 90 described species classified in the family Cimicidae (Order Hemiptera) 2,3, of which three cimicid species are known to be intimate associates of humans: the common bed bug of the temperate regions (C. lectularius Linnaeus, 1758), the tropical bed bug (C. hemipterus Fabricius, 1803) and the West African species that feeds on humans and bats (Leptocimex boueti Brumpt, 1910) 4. The temperate species analysed in this study, C. lectularius, is the most predominant cimicid in densely inhabited human environments such as indoor dwellings and cities. Several theories exist with respect to how these species made the transition to humans as their primary host 5. A leading hypothesis posits that the transition occurred when humans lived in caves, where they were exposed to bugs that fed on bats and on other cave-dwelling mammals 1,6. On the establishment of human settlements and dwellings outside caves, human-adapted cimicids accompanied their new host. Given the time that has elapsed since this transition, present-day bed bug populations feeding on humans and bats have undergone genetic differentiation in sympatry 7. Early hunter–gatherer and herder populations migrated over long distances; however, as small towns and villages (and later cities) became established, bed bug infestations grew 8. Commerce and human travel then contributed to expanded distributions of bed bugs throughout Europe, Asia and the Americas. European bed bug infestations were recorded in Germany and France by the eleventh and thirteenth centuries, respectively 9. Bed bugs were also reported in England in 1583; however, infestations were uncommon until the seventeenth and eighteenth centuries 1,10. The emergence of heated homes and air travel since the late 1900s has accelerated bed bug infestations in cities globally 11, as bed bugs could then thrive throughout the year in indoor environments with constant access to blood meals as well as migrate opportunistically and rapidly.
Finally, public and commercial locales have been reported to be potentially infested 12,13, as bed bugs are easily transported between homes and these places by unsuspecting citizens 1,5. A lull in bed bug infestations began with the introduction of insecticides in the mid-1900s (refs 9,14); however, a resurgence since the late 1990s and the evolution of insecticide resistance have prompted research to understand the basis of this resistance 15,16,17. There is a limited molecular understanding of the biology of the bed bug before, during and after feeding on human blood 18, which is essential to its life cycle: bed bugs are temporary ectoparasites, which access their hosts for blood feeding and then seek the refuge of the indoor environment for digestion, waste production and mating 9. To address these questions, we produced the first genome sequence draft of the bed bug, an RNA-sequencing (RNA-seq) functional annotation profile of gene expression across all life stages, and then a full gene annotation and phylogenetic analysis of this important urban pest. In combination with a parallel effort on environmental metagenomics of the highly urbanized and populated landscape of New York City (NYC), we also show that such a draft genome sequence can be used to map and characterize citywide phylogeographic relationships of bed bugs as they interact with their human hosts. Results Sample collection To build the genome and transcriptome profiles, we extracted DNA and RNA from the standard laboratory Har-73 insecticide-susceptible strain of C. lectularius. The bed bug has the typical developmental stages of an insect that exhibits hemimetabolous metamorphosis (Fig. 1). Each of the immature stages is nymphal, and there is no pupal stage between the final immature stage and the adult. Blood feeding begins in the first instar nymph stage after hatching from the egg and continues through the next four nymphal instars, followed by the adult male and female. We collected RNA from animals at each of these stages and then sequenced RNA from each collection at each time point in triplicate (Supplementary Table 1). Notably, we collected RNA from unfed bed bugs and recently fed bed bugs to examine the functional genomic profile of the bed bug in relation to blood meals. To avoid human or haemophilic bacterial contamination of the DNA used in genome sequencing and assembly, we used first instar nymphs that had recently hatched but had not taken any blood meals. Figure 1: The bed bug life cycle and developmental gene expression profile. The seven stages of the life cycle for C. lectularius are shown, starting from an egg and proceeding through five nymphal instar stages, with final differentiation into adult male and female. We used the annotation and RNA-seq data (five individuals per time point in triplicate) to calculate the number of DEGs between all developmental stages and sexes. The DEGs from both adult male and female comprise the arrow from eggs to first instar. The width of the arrows and their colour are proportional to the number of statistically significant DEGs (false discovery rate <0.05, log(fold-change) ≥1.5 and RPKM ≥1). Genome assembly To build the assembly, we used a combination of long and short read technologies. We first created a set of four Illumina TruSeq libraries with insert sizes of 185, 367, 3,000 and 6,000 bp (Supplementary Table 2; see Methods for library details).
The combined 252 million paired-end (PE) reads (100 × 100 bp) represented 73× coverage of the genome and enabled a resolution of small and large genome fragments, with coverages of 34×, 12×, 7× and 20× for the four libraries, respectively. A first assembly of the genome was performed using ABySS to calculate accurate insert sizes for our paired reads that could then be used by the more thorough ALLPATHS-LG assembler (Supplementary Fig. 1). The resulting assembly contained 77,082 contigs with an N50 contig length of 12.6 kb. After scaffolding, the assembly consisted of 13,151 sequences with an N50 scaffold length of 945 and 947 kb without and with gaps, respectively. The total estimated length of the first genome assembly was 713.6 million bp (Mb). The full set of assembly statistics from these four insert libraries is listed in Supplementary Table 2. We then used the Illumina Moleculo kit, which utilizes a unique barcoding and dilution protocol to produce long synthetic reads. After Moleculo software processing, long contiguous reads were created and used in the assembly. The Moleculo sequencing provided 571,913 reads with an average length of ∼3,500 bp (Supplementary Table 3 and Supplementary Fig. 2), with the reads showing a median of >99% (Q30) accuracy. In order to leverage genomic information both from the ALLPATHS-LG and from the Moleculo assemblies, the Metassembler pipeline was used, resulting in a decrease in the number of scaffolds and an increase in the N50 length compared to the ALLPATHS-LG assembly; this integrated assembly was then used as the final assembly 19,20. The integrated assembly using the long and short read technologies totalled 697,867,761 bp from 12,259 scaffolds, with an N50 of 971 kb and an N95 of 9.7 kb (Supplementary Table 4). We then validated this assembly using the BioNano Genomics Irys system to create 65× coverage (56 Gb) across single-molecule genome maps from the same strain (Supplementary Fig. 2; see Methods), whereby we observed that 87.44% of the bases in the sequence assembly (P value threshold = 1e−7) showed accurate orientation and assembly by BioNano molecules. Finally, we used the CEGMA (Core Eukaryotic Genes Mapping Approach) algorithm to establish our coverage of core eukaryotic genes (CEGs; see Methods) 21. Out of 248 CEGs, the ALLPATHS-LG assembly included 218 completely assembled genes, with an additional 21 CEGs partially assembled, giving us an estimated gene completeness of 96% (239/248). Transcriptome assembly and annotation To annotate the genome with the RNA-seq data, we used the MAKER2 package 22, which utilized both the DNA and RNA sequence data to produce full gene and protein sets. As a reference annotation, we used that of the pea aphid (Acyrthosiphon pisum), a closely related, well-annotated genome sequence 23. Our genome annotation contains 36,985 unique genes (Supplementary Figs 3–8), which is slightly higher (8–17%) than similar annotated insect genomes such as the pea aphid's (Supplementary Fig. 9). While these numbers may indicate a complex transcriptome, it is worth noting that our annotation included RNA-seq data from all life stages of the bed bug and used comparisons to a large set of gene models in A. pisum, both of which would create an expansive set of gene models.
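The headline assembly statistics above are easy to sanity-check: the read coverage follows from the read counts and genome size, and N50 is the length such that contigs or scaffolds at least that long contain half of all assembled bases. A minimal sketch follows; apart from the quoted read and genome totals, the scaffold lengths shown are illustrative.

```python
# Sanity-checking the assembly statistics quoted above.
import numpy as np

# Coverage: 252 million 100+100-bp read pairs over a ~697.9-Mb genome
coverage = 252e6 * 200 / 697.9e6          # ~72x, matching the quoted ~73x

def n50(lengths):
    """Smallest length L such that sequences >= L contain >=50% of all bases."""
    sizes = np.sort(np.asarray(lengths))[::-1]   # longest first
    csum = np.cumsum(sizes)
    return int(sizes[np.searchsorted(csum, csum[-1] / 2.0)])

scaffold_lengths = [2_500_000, 971_000, 600_000, 120_000, 9_700]  # illustrative only
print(f"coverage ~ {coverage:.0f}x, N50 = {n50(scaffold_lengths)}")
```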
To gauge which of these genes were developmentally regulated, we first determined the number of genes that were expressed uniquely at one stage, and we found that the first instar has a much greater number of unique genes than any other stage (Fig. 1), indicating a burst of specific transcriptional activity on emerging from the embryo. Next, we determined the total number of genes expressed per stage and found fairly consistent gene activity across the stages, with a range of 14,752–20,673 genes detected at levels above one read per kilobase per million reads (RPKM; Fig. 2a,b). However, there is a consistent decrease in overall gene expression and transcriptional activity (number of genes) from the first instar stage through to the adult. Figure 2: Total and unique genes active in developmental stages. (a) Overall transcriptional output and complexity are similar between all the stages of development, yet the number of DEGs is highly variable between different life stages. (b) A volcano plot showing the genes with significant differential expression (log fold-change of >2, false discovery rate 0.05 and RPKM of at least 1.0), with the −log10 of the P value on the y axis and the fold-change on the x axis. (c) A Sankey diagram shows the total number of DEGs for all comparisons (n = 8,198) between different life stages, and the proportion of each comparison (middle), as well as the DEGs that are unique to each comparison (middle) and those unique across all stages (right, n = 4,912). We then compared gene expression from different parts of the life cycle to detail the stage-regulated changes that occur throughout development (Supplementary Table 5). We used DESeq2 (ref. 24) to discern significant changes in gene expression (fold-change >1.5 and Benjamini–Hochberg correction for false discovery rate <0.05). We hypothesized that, after a blood meal, a large molecular shift would occur because of the ingestion of copious amounts of foreign material. Indeed, we observed a striking increase in the number of significantly differentially expressed genes (DEGs, n = 4,262 genes) after the first blood meal (between the first and second nymph stages, Fig. 2c). Notably, this rapid change in the expression dynamics of the bed bug is the largest change in its entire life cycle, representing 20% of all stage-regulated genes (Fig. 2c), and is even larger than the catalogue of sex-regulated genes that distinguish males and females (n = 2,828 genes). To address concerns that human DNA or RNA might dominate sequences in the assemblies or DEGs, both transcriptome and genome assemblies were aligned to the human reference genome. Out of 135,489 putative transcripts, only five mapped to the human genome (see Methods). The alignment lengths of these five sequences ranged from 203 to 342 bp, with a per cent identity range of 81–88%. Out of 12,259 genomic scaffold sequences, none mapped to the human genome when requiring an alignment length ≥100 bp and a per cent identity ≥80%. These results indicate that the first instar DNA and RNA collections were indeed taken before the first human blood meal. Finally, to assess the repeat content of the genome, we used RepeatMasker 25 and the Repbase 26 sets of standard genomic repeats and found that 2.63% of the genome consists of such known repeats.
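As an illustration of the DEG filtering described above, the sketch below applies the stated thresholds (Benjamini–Hochberg adjusted P < 0.05, fold-change > 1.5, RPKM ≥ 1) to a DESeq2-style results table. The table, its column names and the use of pandas are our assumptions, since the original filtering was done within the DESeq2 workflow in R.

```python
# Sketch of DEG filtering on a hypothetical DESeq2-style results table.
import numpy as np
import pandas as pd

results = pd.DataFrame({
    "gene": ["g1", "g2", "g3", "g4"],
    "log2FoldChange": [2.1, 0.3, -1.8, 4.0],
    "padj": [0.001, 0.40, 0.01, 0.2],    # Benjamini-Hochberg adjusted p values
    "rpkm": [5.2, 12.0, 0.4, 3.3],
})

deg = results[
    (results["padj"] < 0.05)
    & (np.abs(results["log2FoldChange"]) >= np.log2(1.5))   # fold-change > 1.5
    & (results["rpkm"] >= 1.0)
]
print(deg["gene"].tolist())              # g1 passes; g3 fails on the RPKM floor
```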
Since RepeatMasker only identifies repeats that have been previously identified in a small number of model systems, we used the RepeatModeler software 27 to detect and model bed bug-specific repeats and annotated an additional 29% of the genome as repetitive, indicating that many of these newly discovered repetitive elements are relatively understudied and under-represented in current databases. However, trends similar to those in other complex metazoans were observed; for example, the most prominent repeats are long-interspersed repetitive elements covering 11% of the genome, with another 2.5% stemming from short-interspersed repetitive elements. A full catalogue of the repeat content of the genome can be found in Supplementary Table 6. The functional bed bug microbiome To investigate the C. lectularius microbiome, we performed reciprocal TBLASTX of all C. lectularius genes against the full set of bacterial genomes from GenBank (Supplementary Table 7A). The most frequent matching organisms were Wolbachia, followed by Clostridium. The high prevalence of Wolbachia was expected, as they are known to be one of the most prevalent and important endosymbionts of insects 28,29. To create a more conservative set of genes of microbial origin for these sequences, we used a cross-kingdom analytical tool called alien_index 30 that is designed to determine whether the top BLAST hit for a gene is eukaryotic or microbial 31. Using the default cutoffs, we found 114 genes that are still strongly predicted to be microbial in origin (Supplementary Table 7B), and the majority were still from Wolbachia, indicating a potential function for these bacterial genes. We next examined the genome of the C. lectularius Wolbachia endosymbiont (wCle), which was recently sequenced 32 and found to be essential for C. lectularius growth and reproduction by supplying B vitamins 33. We manually examined the locations of the wCle genes in our assembly and found no evidence of horizontal gene transfer, since bacterial and eukaryotic genes were always grouped on separate contigs. Yet, these wCle genes only appeared as DEGs at two distinct stages: after feeding on blood (1st versus 2nd instar, n = 70 genes) and immediately after (2nd versus 3rd instar, n = 11 genes). Consequently, these data suggest a dual host/microbiome response to a blood meal and provide an annotated set of genes linked to the wCle that functions as a putative endosymbiotic aid to the blood meal. When we examined the sequence composition of these genes, we observed that the genes with the highest numbers of single-nucleotide polymorphisms (SNPs) distinct from the Wolbachia reference are part of pathways that may have different fitness significance for intracellular symbionts. For instance, the gene ddl, coding for D-alanine–D-alanine ligase, which is involved in peptidoglycan (lipid II) biosynthesis and is required for cell wall synthesis, exhibited the highest number of nucleotide differences. The gltA gene—a locus frequently used in bacterial and Wolbachia phylogenetics and a member of the Wolbachia MLST scheme 34—may be subject to higher rates of recombination 35,36. We utilized protein structural homology modelling to investigate the variations in DDL based on the high-resolution crystal structure of Escherichia coli DDLB (PDB ID: 1IOV; 53% similarity), which is the most closely related sequence with similarity to the Wolbachia DDL 37.
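For orientation, one common formulation of an alien-index-style score contrasts the best metazoan and best non-metazoan BLAST E-values on a log scale. The sketch below is an assumption-laden illustration: the exact formula and default cutoffs of the alien_index tool cited above may differ, and the E-values shown are hypothetical.

```python
# One common alien-index formulation (our assumption; the cited tool's exact
# definition and defaults may differ).
import math

def alien_index(best_metazoan_evalue, best_nonmetazoan_evalue, floor=1e-200):
    """Positive scores favour a non-metazoan (possibly microbial) origin."""
    return (math.log(best_metazoan_evalue + floor)
            - math.log(best_nonmetazoan_evalue + floor))

# A gene whose best bacterial hit is far stronger than its best animal hit:
ai = alien_index(best_metazoan_evalue=1e-5, best_nonmetazoan_evalue=1e-80)
print(f"AI = {ai:.1f}")   # ~173; large positive scores suggest foreign origin
```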
Of the 15 nucleotide differences between the Wolbachia ddl gene sequences, two nonsynonymous substitutions are likely to affect protein function (A58D and T84R). Several interesting structural changes in the ATP-binding pocket between the wild-type and mutant proteins were observed after molecular dynamics (MD) simulations ( Fig. 3 ). Specifically, in the predicted structure of the mutant protein, strand 95–98 was shifted in such a way that potentially allows for more ATP-binding flexibility. As a result, the ATP position was also modified, with the formation of an additional hydrogen bond contact with N265, which could lead to an even stronger interaction with the ligand ( Supplementary Fig. 10a ). Interestingly, this change is facilitated by the replacement of alanine by threonine in position 98 (A98T), thus creating a hydrogen bond network between T98, K168 and D96 that commonly occurs in DDL structures of other species ( Supplementary Fig. 10b and Supplementary Table 8 ). Figure 3: A closeup view of the ATP-binding pocket of DDL. ( a ) Two structure models of the DDL protein, the reference C. lectularius Wolbachia DDL (magenta) and the DDL mined from our C. lectularius genome sequence (green), are superposed for visual comparison. ATP from the reference model is shown in orange sticks and that from our protein in yellow sticks. Other amino-acid residues are shown as sticks and colour-coded by chemical element. Arrows indicate a shifted position of the strand 95–98 and a new hydrogen bond between ATP and Asn265 in our protein. ( b ) A new hydrogen bond network in our protein between Thr98, Lys168 and Asp96 is highlighted by a red circle. Full size image To further investigate the variability and evolutionary history of ddl , we examined multiple Wolbachia species ( Supplementary Fig. 10 ). While the two bed bug w Cle sequences showed no signs of diversifying selection within the broader taxonomic context, they did harbour eight amino-acid differences. The only codon found to have evolved under diversifying selection was codon 174 (posterior probability=0.92), as evidenced by the consensus of multiple selection detection methods (Supplementary Methods). Over 100 codons in ddl were found to be evolving under purifying selection, while the rest were found to evolve under neutrality, thus indicating the evolutionary tendency for this gene to accumulate synonymous substitutions over time. Phylogenetic context and activity of anticoagulation genes Using comparisons with other haematophagous species, we investigated the anticoagulation repertoire of the bed bug. We first aggregated known anticoagulants from 14 other species (see Methods) and used reciprocal BLAST to identify similar genes in the bed bug genome. High-scoring matches for predicted gene products with complete signal peptide secretory sequences were found for three classes of anticoagulant genes and their related proteins, including the serine protease inhibitor infestin, the antihaemostatic (antiplatelet aggregation factor) apyrase and the vasodilator or antihistamine lipocalin; all three are known biological adaptations to blood feeding ( Supplementary Table 9 ). Infestin is a Kazal-type thrombin inhibitor (binding in a slow, tight-binding, competitive process) that is utilized as a structural scaffold template for exogenous anticoagulants 38 . Apyrase, a salivary enzyme that hydrolyses ATP and ADP to AMP and orthophosphate, prevents the effect of ADP on haemostasis and thereby promotes the formation of haematomas 38 .
The thrombin and intrinsic tenase complex inhibitor lipocalin has a characteristic eight-stranded antiparallel β-barrel structure that the kissing bug Triatoma pallidipennis uses as a scaffold for anticoagulants 39 . Lipocalin is also found in the kissing bug Rhodnius prolixus . Our gene annotation set also contained several other characterized proteins with some association to a blood-feeding lifestyle. First, we found orthologues for venom metalloproteases, which are most intensively studied in the context of crotaline and viperine snake envenomations, wherein their haemorrhagic activity relates to endothelial pathology, fibrinogenolysis and their ability to act as disintegrins that inhibit platelet aggregation 39 . In addition, we discovered orthologues for zinc-binding metalloproteases that are also present in the sialomic profiles of a wide range of sanguivores, including ticks 40 , hookworms 41 and cimicomorphs related to bed bugs, for example, the reduviids 42 . Serine protease inhibitors are more commonly associated with a blood-feeding habit than are serine proteases 43 . Nonetheless, a variety of these proteases and other trypsin-like plasminogen activators have been characterized from the salivary transcriptomic profiles of the relatively closely related Triatoma matogrossensis 42 and T. infestans 44 . Next, we investigated the alignment of our top-matched open reading frames with triatomine bug infestins and their homologues, which exist in a tandem array of seven paralogues with varying functionalities. A phylogenetic analysis of the infestin proteins ( Supplementary Fig. 11 ) suggests that this Cimex protein is indeed a member of the large infestin protein family, which includes dipetalogastin, brasiliensin and infestins. The phylogenetic analysis indicates an evolutionary affinity of the Cimex protein with the infestin 1 proteins in other insects. To create a bed bug-specific set of blood-feeding genes, we filtered predicted gene products against the full non-redundant database of annotated proteins (nr, see Methods). This yielded 161 predicted genes; however, after discarding those with only weak signal peptide predictions ( D -score<0.45, see Methods), 28 predicted gene products remained ( Supplementary Table 10 ). Among those predicted to be targeted for extracellular functionality were protein sequences with matches to apyrase, the antithrombin infestin, lipocalin, salivary lysozyme and trypsin, a metalloprotease, a carboxylesterase, a high-scoring match to a Gryllus gland protein and eight serine proteases. Furthermore, there were four matches to unannotated salivary proteins from T. infestans , each with N-terminal signal peptide sequences. Two additional predicted genes of interest to sanguivory are a match to an unannotated 50-kDa midgut protein from blood-feeding sandflies, and an intriguing match to a pig lung surfactant protein. These represent the likely secreted proteins that assist the bed bug's haematophagous lifestyle. Insecticide resistance We identified C. lectularius orthologues to insect genes known to confer partial or full resistance to insecticides. First, pyrethroids are synthetic organic compounds, found in most commercial household insecticides 46 , that can delay the closing of the voltage-gated sodium channel, resulting in prolonged nerve impulse transmission and, eventually, paralysis and death.
Nonsynonymous substitutions in the para-type voltage-gated sodium channel gene, termed knockdown resistance ( kdr ), were first identified in the house fly, Musca domestica , and substitutions specifically in transmembrane domain II of the sodium channel have been linked to pyrethroid resistance. This mechanism has also been identified in cockroaches, aphids, mosquitos, cotton bollworm, thrips and various other insects, involving different amino-acid substitutions. Bed bugs also harbour kdr substitutions that appear to be widely distributed across the United States of America. We identified one voltage-gated sodium channel orthologue in the bed bug genome that exhibited a high degree of sequence identity (86%) and a close phylogenetic match ( Supplementary Fig. 12 ) to a sodium channel protein of the assassin bug ( T. infestans ; Uniprot ID: A0A023F5Z6). We also found a match to the insect γ-amino butyric acid (GABA)-gated chloride ion channel receptor, which is involved in learning and memory and is encoded by the resistance to dieldrin ( Rdl ) gene, a target of insecticides belonging to the cyclodiene (for example, dieldrin) and phenylpyrazole (for example, fipronil) chemical families. A single, nonsynonymous substitution responsible for an Ala-Ser replacement in the second transmembrane domain has been found to confer increased levels of resistance to cyclodienes in Drosophila species and other insects, while additional nonsynonymous substitutions have been identified in the same gene in anopheline malaria vectors. Rdl gene duplicates have been found in Drosophila species and in the green peach aphid to increase gene expression of the resistance-conferring locus, while maintaining the original gene function. Notably, we identified two clusters of GABA receptors in the bed bug genome comprising three and seven proteins, respectively, each with high similarity (>95% amino-acid sequence match) to other insect homologues. A third type of putative resistance to insecticides is based on increased metabolic detoxification through the action of cytochrome P450 monooxygenases (P450s), glutathione- S -transferases (GSTs) and carboxylesterases (CEs). GSTs are encoded by a gene superfamily in both arthropod and vertebrate clades, suggesting basic roles in metabolism 45 , and contribute to insecticide resistance by relieving oxidative stress induced by organophosphate compounds. Our reciprocal BLAST analysis (see Methods) detected 16 different bed bug GSTs assigned to various cytosolic classes, with a higher representation of sigma class members. Esterases E4 and FE4 capture and hydrolyse organophosphorus insecticides, and their genes can be found in tandem—likely following gene duplication—while being controlled by different regulatory mechanisms 46 . Three homologues (CLG00050, CLG13404 and CLG00055) were found in the bed bug with partial identity to other blood-feeding hemipteran insects (the kissing bugs R. prolixus and T. infestans ), nested within the main cimicomorph esterase clade ( Supplementary Fig. 13 ), and we found higher expression of these genes in the last instar and adult stages. These genes represent novel candidates for the molecular basis of insecticide resistance in the bed bug, and it is notable that CE genes are predicted to co-evolve with cuticle-thickening genes, which may serve as a first line of defence against insecticides in mosquitos 48 and bed bugs 49 . Phylogenetic contextualization We next investigated C. lectularius phylogenetics in two ways. First, we constructed a gene content framework for C. lectularius in the context of 20 other fully sequenced arthropod genomes. We analysed this presence/absence matrix to investigate the phylogenetic relationships of C. lectularius 47 , 48 , 49 . Second, we used orthologous gene sequences in phylogenetic analysis to generate an overall hypothesis of relationships based on sequences. The gene content matrices were analysed using equally weighted parsimony, Dollo parsimony and maximum likelihood using the BINGAMMA model in RAxML 50 . The proteome sequence matrix was analysed using maximum likelihood in RAxML implementing the general time-reversible substitution model estimated from our arthropod sequence data (see Methods). A total of 11,919 orthologous protein sequences were established among the fully sequenced arthropod genomes. The analyses of both the gene content data ( Supplementary Fig. 14 ) and the sequence data ( Fig. 4 ) yielded broadly congruent results, with the protein sequence analysis being entirely in agreement with the accepted topology of insect relationships while also being fully robust (bootstrap support 100% at each node). These phylogenetic relationships place the bed bug closest to R. prolixus , and then to P. humanus , both of which are blood-feeding insects known to associate with humans. Figure 4: Arthropod phylogenomic relationships. Maximum likelihood estimation of the phylogenetic relationships among genome-enabled arthropods with the blacklegged tick ( Ixodes scapularis ) set as outgroup. All nodes were robust at 100% bootstrap support. The scale bar denotes substitutions per site. Full size image Urban phylogeography of bed bugs To investigate the diversity of bed bugs across NYC, we utilized the PathoMap resource 51 , which performed metagenomic sampling of 1,447 locations across NYC, including 465 subway stations of the NYC Metropolitan Transit Authority. This project swabbed each location and then performed shotgun sequencing on the resulting DNA. The project was primarily intended to look at microbial diversity across NYC. However, because of the nature of the sample collection, DNA from any taxa present in the subway, or carried on individuals' shoes into the subway, would be found. We aligned all of the reads for all locations to our reference bed bug genome using the Burrows-Wheeler Aligner 52 and called variants using freebayes 53 . We then filtered the data to include only variants having calls for 90% of the sites. The bed bug SNP matrix was used to construct phylogenetic trees for the different subway lines and for divisions based on location (aboveground and belowground) and borough of NYC where the sample was obtained. We first sought to determine whether there was any biologically meaningful phylogenetic structure in the SNP data sets. Accordingly, we examined the trees for structure relating to several variables: above/belowground, borough, object swabbed (for example, benches and turnstiles) and material swabbed (for example, metal and plastic). Following visualization, the retention index (RI) of each variable was calculated for each tree. A randomization test was then conducted for each variable, testing whether or not the actual RI was greater than the RIs of randomized data (a toy sketch of this test is given below). The only variables showing significant structure are borough ( Fig. 5a ) and above/belowground ( Fig. 5b ).
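The retention-index randomization test just described can be sketched in a few dozen lines of Python. Everything below is a toy illustration: the eight-tip tree, the borough labels and the number of permutations are invented for demonstration, and the published analysis of course ran on the full bed bug SNP trees rather than on this miniature example.

import random

# RI = (g - s) / (g - m): s = observed parsimony steps (Fitch algorithm),
# g = maximum possible steps for the character, m = minimum possible steps.
tree = ((("s1", "s2"), ("s3", "s4")), (("s5", "s6"), ("s7", "s8")))
borough = {"s1": "A", "s2": "A", "s3": "B", "s4": "A",
           "s5": "B", "s6": "B", "s7": "A", "s8": "B"}

def fitch(node, states):
    """Return (possible state set, parsimony steps) for an unordered character."""
    if isinstance(node, str):                        # leaf
        return {states[node]}, 0
    (lset, lsteps), (rset, rsteps) = fitch(node[0], states), fitch(node[1], states)
    if lset & rset:
        return lset & rset, lsteps + rsteps          # intersection: no extra step
    return lset | rset, lsteps + rsteps + 1          # union: one extra step

def retention_index(tree, states):
    s = fitch(tree, states)[1]
    counts = [list(states.values()).count(v) for v in set(states.values())]
    g = len(states) - max(counts)                    # max steps (star-tree bound)
    m = len(counts) - 1                              # min steps
    return (g - s) / (g - m) if g != m else 0.0

def randomization_test(tree, states, n=10000, seed=1):
    """P value: fraction of label-permuted RIs at least as large as observed."""
    rng = random.Random(seed)
    observed = retention_index(tree, states)
    tips, labels = list(states), list(states.values())
    hits = 0
    for _ in range(n):
        rng.shuffle(labels)
        if retention_index(tree, dict(zip(tips, labels))) >= observed:
            hits += 1
    return observed, hits / n

print(randomization_test(tree, borough))             # observed RI and its P value

A small P value would indicate that the character tracks the tree better than random relabelling, which is the criterion applied to the borough and above/belowground variables in the text.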
Although significantly different from randomized null distributions, the borough (RI=0.1969; P =0.0212) and above/belowground (RI=0.1870; P =0.0024) characters are highly variable on the tree, showing numerous small patches of structuring rather than a few gains and losses of each character. These results likely relate to the largely panmictic (random mating) nature of these populations and to limitations of the sequence data, including potential cross-mapping of reads from a related species. We next visually examined whether the subway lines themselves exhibited phylogenetic structure. The two east–west lines of the NYC subway system ( Fig. 5c–d ) showed a similar phylogeographic structure, with the same split recurring in both lines and subsets of the variants staying within one borough. This structure suggests that areas of the city in close proximity to each other show bed bug populations that are related to each other, and one borough's population can be distinct from others. Figure 5: Phylogeographic distribution of bed bug DNA across New York City. ( a ) A comparison of the DNA found on subway benches across different boroughs. ( b ) A comparison of the DNA found aboveground and belowground. ( c ) The 7 subway line and ( d ) the L subway line bed bug relationships are overlaid on a map of New York. The phylogenetic subgroups (colours) show the same branch point for these two lines, both exhibiting an early split between the red and yellow subgroups across the different areas of the city. Full size image Discussion These data represent the first genome and transcriptome assembly, mapping and functional characterization for C. lectularius (the bed bug). The gene expression profile demonstrated that the first blood meal is the most dynamic period of the bed bug's transcriptional activity, which has broad implications for control methods that may target these haematophagic pathways and mechanisms. Indeed, the discovery of a secreted prolylcarboxypeptidase is intriguing in light of the association with angiogenesis 54 and the ability to activate prekallikrein 55 . In contrast, other insect prolylcarboxypeptidases are more active in the midgut of insects than in salivary secretions 56 . Predicted protein sequences similar to the accessory gland protein of Gryllus veletis , a cuticular protein potentially associated with salivary ducts, are believed to be involved in the prevention of microbial putrefaction of cimicomorph blood meals 42 , 52 . Similar functions are anticipated for high-copy transcripts in other sanguivorous invertebrates 57 . Although we observed 8,198 genes with stage-specific regulation, there was also some overlap between different stage-by-stage comparisons ( Fig. 2c ), such that we annotated a total of 4,912 unique, significant DEGs across all life stages (this counting is illustrated in the sketch below). This means that, although the majority of the 38,615 annotated genes were active at some point during development, only 13% of genes demonstrated significant differential expression between stages. This matches results from other studies, which have shown that much of the differential gene regulation is cell- or tissue-specific, or occurs at finer temporal resolution 60 ; for example, Drosophila melanogaster expression can change at an hourly rate during development and changes dramatically by cell type 61 , 62 .
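The bookkeeping behind the 8,198 and 4,912 DEG counts above is simple set arithmetic, as the following toy Python sketch makes explicit; the gene identifiers and comparison names are invented, and only the final arithmetic check uses the counts quoted in the text.

# Summing DEGs over all stage-by-stage comparisons counts recurring genes
# several times, whereas the unique tally is the union of the per-comparison
# sets. Gene identifiers and comparison names below are invented.
degs_by_comparison = {
    "instar1_vs_instar2": {"g1", "g2", "g3"},
    "instar2_vs_instar3": {"g2", "g4"},
    "male_vs_female":     {"g3", "g5"},
}

total_with_multiplicity = sum(len(s) for s in degs_by_comparison.values())
unique_degs = set().union(*degs_by_comparison.values())

print(total_with_multiplicity)   # 7 for the toy data (cf. the study's 8,198)
print(len(unique_degs))          # 5 for the toy data (cf. the study's 4,912)
print(round(4912 / 38615, 3))    # 0.127, the ~13% of annotated genes quoted above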
While our genome annotation is estimated to be very complete at the gene-count level, as gauged by CEGMA's count of core eukaryotic genes (CEGs) (96%), and in comparison with other species, undoubtedly additional tissues, cell types and substages can now be characterized to classify other genes' spatiotemporal activity and putative function. The C. lectularius genome's orthologous gene content, repetitive element structure and overall gene number further contextualize the organism within insect phylogenetic history, and include a comparable or larger annotation set than most insect species ( Supplementary Fig. 8 ). With an analysis of 20 fully sequenced arthropod genomes, leveraging both gene content and sequence data, we estimate that our phylogenetic assignments are congruent and accurate ( Fig. 4 and Supplementary Fig. 14 ). In this context, all of the major orders of insects that could be included in the analysis, that is, Diptera, Coleoptera, Lepidoptera, Hemiptera and Hymenoptera, are robustly monophyletic. The internal relationships within orders with more than two taxa (Hymenoptera, Diptera and Hemiptera) matched existing phylogenetic assessments, and the placement of Hemiptera as the sister group to the rest of the sampled insect orders was also in accord with the accepted understanding of insect order systematics 63 . These closely related species provide further clues about the evolutionary history and phylogenetic relationships of the bed bug, which can help direct future studies and genome assembly prioritization. Interestingly, we were able to show that metagenomic data created for one purpose can have broad uses and can be repurposed on the completion of a new genome. The bed bug genetic diversity we found along the NYC subway metagenome indicates that geography can shape the distribution of genetic variants in the landscape and may serve as a 'molecular echo' of the species in the city and of the movement of their DNA by human hosts 53 . Multiple ongoing efforts for environmental sampling and shotgun-based DNA sequencing exist, such as those from the MetaSUB project, which are advancing our understanding of species diversity and complexity in rural, remote and urban settings. Yet, an ongoing challenge for data from such metagenomics profiling is the 'missing genomes' problem: databases are incomplete, so it may not always be possible to match genomic DNA data to the correct genome. As more reference genomes become available, such as this one for C. lectularius , we have shown here that previously unassignable, orphaned reads from metagenomics projects can be successfully 'rescued' and mapped to a new reference genome. Furthermore, such alignments can be used to discern a potential phylogenetic relationship between strains in a city, which can then aid in an understanding of the differences between strains for pest control and characterization. Historically, various mechanisms of resistance to insecticides have been documented, including sodium channel mutations, enzyme detoxification pathways and thicker chitin layers that prevent insecticide penetration of the outer exocuticle. Notably, we see evidence of many of these same mechanisms present and active in the bed bug, with increased activation after feeding on a blood meal. The information presented here provides an essential biomolecular resource that can aid in understanding the origin, development and genetic basis of resistance to insecticides, as well as in providing a baseline for population-level comparisons.
In turn, this knowledge will help control bed bug infestations, improve management of species diversity and lead to a greater understanding of the fundamental biology of this ancient, eukaryotic 'companion' of humans. Methods Raw sequence data The genome assembly was validated by the NCBI, where it was checked for adaptors, primers, gaps and low-complexity regions. The genome assembly has been approved and given the accession number JRLE00000000 and BioProject PRJNA259363. All genome-sequencing data have been deposited in the SRA with accession number SRS749263. RNA-seq data are available as FASTQ files and were quality-checked and deposited in the SRA with accession number SRR1790655. Bed bug samples The bed bugs were taken from a Harlan strain colony maintained by Louis Sorkin (American Museum of Natural History). The Har-73 strain was originally collected by Harold Harlan in 1973 from an infestation at the US Army barracks in Fort Dix, NJ, USA, and has been raised as a laboratory pesticide-susceptible strain since that time. Bed bugs were reared in ∼ 236.6-ml (8-oz) glass canning jars whose metal covers had 250–350-μm mesh screening heat-glued on the inside. Heat glue was applied to the outer circumference of the screen surface to leave a 3-cm-diameter central circle of exposed screen. Folded cardboard was used as substrate. Jars were inverted on a human arm for feeding for 30 min on a monthly basis. Jars were kept in a plastic box with an open lid and left at room temperature. Specimens used for nucleic acid extraction were 1st instar nymphs that had recently hatched but had not taken any blood meals ( ∼ 1 mm in length, pale to white in colour). Transcriptome assembly The bed bug transcriptome was produced using the Trinity assembler r2012-10-05 (ref. 59 ). To reduce the amount of redundant information fed to Trinity, duplicate sequences among the 631,227,170 50-bp single-end reads were removed using the fastq-mcf programme from the ea-utils library. This was achieved using the command-line options -0 -D 50 n/a (a minimal sketch of this deduplication step is given below). Before assembly, the adapter sequences were trimmed from all reads using SeqPrep v1.0 (ref. 21 ) with the following adapter: 5′-AAGATCGGAAGAGCACACGTCTGAACTCCAGTCACBAGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA-3′. Basecall quality trimming was then performed using SolexaQA 60 with a Phred score cutoff of 20 (-h 20) in DynamicTrim.pl and a minimum trimmed read length of 23 (-l 23) in LengthSort.pl. Trinity was run with the following parameters: --seqType fq --JM 200G --CPU 32. The assembly statistics are shown in Supplementary Table 9 . CEGMA and sequence data validation CEGMA v2.4.010312 (ref. 20 ) was used to check for the existence of core eukaryotic genes (CEGs) in both the genome and transcriptome assemblies. Default parameters were used for the genome assembly, while --max_intron 0 was used for the transcriptome assembly. To assess the validity of the final assembly, CEGMA 20 was used to establish our coverage of CEGs. Out of 248 CEGs, the ALLPATHS assembly included 218 completely assembled genes, with an additional 21 CEGs partially assembled, giving us an estimated gene completeness of 96% (239/248). We also had the genome assembly validated by the NCBI, where it was checked for adaptors, primers and low-complexity regions; the assembly was approved under accession number JRLE00000000 and BioProject PRJNA259363, and all the RNA-seq data have been deposited in the SRA (ID: 264998).
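The duplicate-removal step performed with fastq-mcf can be illustrated with the following minimal Python sketch, which collapses single-end reads whose first 50 bases are identical, mirroring the intent of the -D 50 option; the file names are placeholders and the real preprocessing used fastq-mcf itself, so this is an explanatory re-implementation rather than the study's code.

# Collapse reads that are identical over their first 50 bases, keeping the
# first occurrence. FASTQ stores each read as four consecutive lines.
def dedup_fastq(in_path, out_path, prefix_len=50):
    seen = set()
    kept = dropped = 0
    with open(in_path) as fin, open(out_path, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]
            if not record[0]:                    # end of file
                break
            key = record[1][:prefix_len]         # first 50 bases of the sequence
            if key in seen:
                dropped += 1
                continue
            seen.add(key)
            kept += 1
            fout.writelines(record)
    return kept, dropped

# Example usage with placeholder file names:
# kept, dropped = dedup_fastq("reads.fastq", "reads.dedup.fastq")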
Functional annotation We performed functional annotation of bed bug sequences based on the gene ontology (GO) vocabulary using the Blast2GO v2.5.0 pipeline with the following parameters: java -Xmx50G -cp *:ext/*: es.blast2go.prog.B2GAnnotPipe -in bedbug.allBBgeneMatches.txt -out bedbug_out_50G.annot -prop b2gPipe.properties.local -annot, where b2gPipe.properties.local points to a local Blast2GO database. We also used InterProScan v5.5-48.0 (ref. 61 ) with the following parameters: -dp -f TSV,XML,GFF3 -goterms -iprlookup -i Cimex_lectularius. Human contamination of RNA-seq data Unaligned reads retained when producing the previously described RNA-seq alignments to the Metassembler genome assembly were aligned to the human genome (hg19) using STAR. The samtools view command was used to count aligned reads with the -S -c -F 4 options. Active gene discovery Sorted bam files for each developmental stage and sex, as described previously, were used as input to the rpkmforgenes.py programme 62 . Each replicate bam file was processed separately. The resulting RPKM values were filtered at three different RPKM thresholds: 0.1, 1 and 10. A gene model was considered active only if the RPKM values of all three replicates surpassed the threshold (a minimal sketch of this criterion is given at the end of this section). The counts for genes considered active were plotted using Python's matplotlib. Analysis of genes related to blood-feeding activity Several suites of amino-acid sequences from anticoagulants and other bioactive proteins involved in blood feeding, known from other sanguivorous taxa, were prepared as target databases for BLASTP searches using unannotated predicted gene products from the combined Qmolecula/allpaths hybrid assembly. Those targeted were antithrombins, factor Xa inhibitors, platelet aggregation and activation inhibitors, hyaluronidases and plasminogen activators. In addition, the full set of predicted gene products was compared with ToxProt, a compilation of all toxin proteins produced by venomous animals, as well as with a third query database comprising all salivary protein sequences already annotated for Cimicomorpha at NCBI. The latter consists primarily of those sequences available for the sialome of Triatoma infestans . High-scoring matches ( e -value<1e−60) were then sorted and evaluated for relevance to salivary and blood-feeding-related functionality. Premised on the notion that, to be biologically active in the context of sanguivory, these proteins would be expected to be targeted to the extracellular environment, amino-acid sequences were subjected to prediction of N-terminal signal peptide regions ( D -cutoff=0.50), leveraging the artificial neural network systems of SignalP 4.1. Predicted gene products were then compiled and compared with BLASTP against the full suite of available annotated sequences (NR in GenBank) to determine whether another non-target functionality was a better match; sequences for which a better E -value was found were removed. We mined the set of bed bug protein sequences via BLASTP by using as queries a multitude of proteins from other species known to confer partial or full resistance to insecticidal compounds when they (1) contain one or more amino-acid replacements, (2) have duplicated genes or (3) have genes associated with transposable elements. The bed bug hits were themselves queried against the UniProt protein knowledgebase using BLASTP, and the results were manually inspected for similarity to candidates of known function.
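A minimal sketch of the all-replicates criterion described under 'Active gene discovery' above follows; the RPKM matrix is invented toy data (rows are genes, columns are the three replicates), and only the thresholds and the all-replicates rule are taken from the text.

import numpy as np

rpkm = np.array([
    [12.0, 15.5, 9.8],   # active at 0.1 and 1, but not at 10 (one replicate below)
    [0.5,  0.9,  0.7],   # active at 0.1 only
    [0.0,  3.2,  2.1],   # never active: one replicate is zero
])

for threshold in (0.1, 1.0, 10.0):
    active = (rpkm > threshold).all(axis=1)               # every replicate must pass
    print(threshold, int(active.sum()), "active genes")   # 2, 1 and 0, respectively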
Bacterial genetic traces We downloaded all of the complete bacterial genomes that were listed in Ensembl release 24 ( ftp://ftp.ensemblgenomes.org/pub/release-24/bacteria/fasta ). In total, this sample included 20,030 bacterial strains. We ran reciprocal TBLASTX searches between the bacterial genomes and both the C. lectularius gene set and the full genome sequence using a cutoff E -value of <1e−5 and requiring a 30-bp overlap match. For the SNP calling, we ran MUMmer 63 to compare the gene calls from the bed bug genome against the reference C. lectularius Wolbachia endosymbiont ( w Cle) genome 32 . Protein modelling Protein structural modelling was carried out with SWISS-MODEL, producing a high-quality structure with a model-template C-α root mean square deviation of 2.3 Å. The models were further refined with MD simulations using the Amber14 MD suite 64 . The proteins and ATP molecules were placed in a water box and, after initial minimization and equilibration for 1 ns, a production run totalling 100 ns with the canonical (NVT) ensemble and a Langevin thermostat was conducted on a high-performance Linux cluster with NVIDIA Tesla GPU nodes. MD trajectory files were collected, and an average structure over all 100-ns timeframes was calculated for each model with the VMD programme 65 , followed by a brief minimization. Post-MD simulation analysis and visual representations were conducted in the MOE programme 66 . All 39 available X-ray crystal structures of DDL proteins were downloaded from the Protein Data Bank. After aligning protein sequences, we searched for the residues that were located in the same positions as in the reported network, and indeed found substantial supporting evidence for such network occurrence. Among these 39 structures, 24 have lysine in the position corresponding to K168 of Wolbachia . Aspartic acid in position 96 is conserved among 38 available crystal structures. There are some variations in position 98, where we also observed the mutation A98T. Aspartic acid is the most common amino acid in this position (occurring 15 times), followed by leucine (also 15 times). There is no available crystal structure of DDL with threonine in position 98 ( Supplementary Table 8 ). Interestingly, three-member networks similar to the D96-T98-K168 hydrogen-bonding network observed after MD simulations in the mutant form of the Wolbachia protein were present in all D96-D98-K168 and D96-L98-K168 X-ray crystal structures. However, if K168 is replaced with E, as happens in 10 crystal structures, then such a network is not observed. This is especially evident for sequences where position 98 is occupied by amino acids with aliphatic side chains, for example, leucine. We found it very intriguing that such a hydrogen bond network occurred only in the mutant protein, despite the fact that our template structures, 1IOV and 4C5B, lack this network. As mentioned in the main text, the replacement of alanine with the larger threonine side chain, which can serve as a hydrogen bond donor, may help in the formation of this three-member network T98-D96-K168 and facilitate the shift of T98 towards K168 in the mutant protein, resulting in the 95–98 strand shift and creating more space for ATP binding in the mutant DDL versus the wild-type A98 DDL. On the basis of the computational model, we concluded that, among the eight observed mutations, A58D, I60V, T84R, I93V, A98T, L104F, G108D and I109V, none was directly involved in the binding of ATP.
However, it is worth noting that, in the wild-type protein, the residues in positions 58, 60 and 84 are in close proximity and form a hydrogen-bonding network that stabilizes loop formation in this region. It was expected that a change from a small neutral residue to a larger charged residue (for example, A58D and T84R) might cause reorganization of the loops. The comparison of the wild-type and mutant DDL models suggests that a replacement by oppositely charged amino acids might lead to stronger interactions within this network. In addition to hydrogen bonds, strong ionic interactions occur between D58 and R84 in the mutant protein. This in turn leads to partial changes in adjacent flexible regions, as seen in Supplementary Fig. 10 , and may cause some alterations in ligase activity. Additional information Accession codes : The genome assembly has been deposited in NCBI under the accession code JRLE00000000 and BioProject PRJNA259363 . All genome-sequencing data have been deposited in the SRA with accession code SRS749263 . RNA-seq data have been deposited as FASTQ files in the SRA under accession code SRR1790655 . How to cite this article : Rosenfeld, J. A. et al. Genome assembly and geospatial phylogenomics of the bed bug Cimex lectularius . Nat. Commun. 7:10164 doi: 10.1038/ncomms10164 (2016). Disclaimer The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIGMS or the National Institutes of Health. | Scientists have assembled the first complete genome of one of humanity's oldest and least-loved companions: the bedbug. The new work, led by researchers at the American Museum of Natural History and Weill Cornell Medicine, and published Feb. 2 in Nature Communications, could help combat pesticide resistance in the unwelcome parasite. The data also provides a rich genetic resource for mapping bedbug activity in human hosts and in cities, including subways. "Bedbugs are one of New York City's most iconic living fossils, along with cockroaches, meaning that their outward appearance has hardly changed throughout their long lineage," said corresponding author George Amato, director of the museum's Sackler Institute for Comparative Genomics. "But despite their static look, we know that they continue to evolve, mostly in ways that make it harder for humans to dissociate with them. This work gives us the genetic basis to explore the bedbug's basic biology and its adaptation to dense human environments." The common bedbug ( Cimex lectularius ) has been coupled with humans for thousands of years. This species is found in temperate regions and prefers to feed on human blood. In recent decades, the prevalence of heated homes and global air travel has accelerated infestations in urban areas, where bedbugs have constant access to blood meals and opportunities to migrate to new hosts. A resurgence in bedbug infestations since the late 1990s is largely associated with the evolution of the insects' resistance to known pesticides, many of which are not suitable for indoor application. "Bedbugs all but vanished from human lives in the 1940s because of the widespread use of DDT, but unfortunately, overuse contributed to resistance issues quite soon after that in bedbugs and other insect pests," said Louis Sorkin, an author on the paper and a senior scientific assistant in the museum's Division of Invertebrate Zoology.
"Today, a very high percentage of bedbugs have genetic mutations that make them resistant to the insecticides that were commonly used to battle these urban pests. This makes the control of bedbugs extremely labor-intensive." The researchers extracted DNA and RNA from preserved and living collections, including samples from a population that was first collected in 1973 and has been maintained by museum staff members since then. RNA was sampled from males and females representing each of the bug's six life stages, before and after blood meals, to paint a full picture of the bedbug genome. "It's not enough to just sequence a genome, because by itself it does not tell the full story," said corresponding author Mark Siddall, a curator in the museum's Division of Invertebrate Zoology and Sackler Institute for Comparative Genomics. "In addition to the DNA, you want to get the RNA, or the expressed genes, and you want that not just from a single bedbug, but from both males and females at each part of the life cycle. Then you can really start asking questions about how certain genes relate to blood-feeding, insecticide resistance and other vital functions." The researchers found that the number of genes was fairly consistent throughout the bedbug life cycle, but they observed notable changes in gene expression, especially after the first blood meal. Some genes, expressed only after the bedbug first drinks blood, are linked to insecticide resistance, including mechanisms that result in better detoxification and thicker chitin, or skin. This suggests that bedbugs are likely most vulnerable during the first nymph stage, potentially making it a good target for future insecticides. The scientists also identified three types of anticoagulant genes and their related proteins – characteristics consistent with a highly specialized blood feeder. When compared with 20 other arthropod genomes, the genome of the common bedbug shows close relationships to the kissing bug (Rhodnius prolixus), one of several vectors for Chagas disease, and the body louse (Pediculus humanus), which both have tight associations with humans. The bedbug microbiome also contains more than 1,500 genes that map to more than 400 different species of bacteria, indicating that bedbugs harbor a rich suite of endosymbionts that are likely essential for their growth and reproduction. This indicates that antibiotics that attack bacteria beneficial to bed bugs (but that are non-essential to humans) could complement control of the insects via pesticides. In addition, the study incorporated DNA data collected concurrently from more than 1,400 locations across New York City, including every subway station, to look at microbial diversity. For this work, the researchers used the new genomic data to focus exclusively on the diversity of bedbugs. They found differences in the genetic makeup of bedbugs that reside in different parts of the city, as measured by traces of DNA on east-west versus north-south subway lines, as well as between borough locations and among surfaces (e.g., benches vs. turnstiles). The findings suggest that areas of the city in close proximity to each other have bedbug populations that are related, and that bedbugs from one borough can be distinct from those in another borough. This kind of information can be used to map the pathways of migration of bedbug infestations in established and new urban environments. 
"A great feature of metagenomics and microbiome data is its power to reveal new biology, since you can map previously unknown sequences to a new genome as soon as it is finished," said Christopher Mason, an associate professor of computational genomics in the Department of Physiology and Biophysics and the HRH Prince Alwaleed Bin Talal Bin Abdulaziz Al-Saud Institute for Computational Biomedicine at Weill Cornell Medicine and a senior author. "With every genome that is sequenced and annotated, the genetic understanding of the world around us becomes more in focus." | 10.1038/ncomms10164 |
Nano | Twisted van der Waals materials as a new platform to realize exotic matter | Dante M. Kennes et al. Moiré heterostructures as a condensed-matter quantum simulator, Nature Physics (2021). DOI: 10.1038/s41567-020-01154-3 Journal information: Nature Physics | http://dx.doi.org/10.1038/s41567-020-01154-3 | https://phys.org/news/2021-02-van-der-waals-materials-platform.html | Abstract Twisted van der Waals heterostructures have latterly received prominent attention for their many remarkable experimental properties and the promise that they hold for realizing elusive states of matter in the laboratory. We propose that these systems can, in fact, be used as a robust quantum simulation platform that enables the study of strongly correlated physics and topology in quantum materials. Among the features that make these materials a versatile toolbox are the tunability of their properties through readily accessible external parameters such as gating, straining, packing and twist angle; the feasibility to realize and control a large number of fundamental many-body quantum models relevant in the field of condensed-matter physics; and finally, the availability of experimental readout protocols that directly map their rich phase diagrams in and out of equilibrium. This general framework makes it possible to robustly realize and functionalize new phases of matter in a modular fashion, thus broadening the landscape of accessible physics and holding promise for future technological applications. Main The inseparability of groups of particles into products of their individual states underpins the most counterintuitive predictions of quantum mechanics. In condensed-matter physics, recent focus has been placed on finding and understanding quantum materials — systems in which a delicate balance between the crystal lattice and strong electronic interactions leads to new inseparable, collective behaviours with exotic properties. Encompassing rich phenomenologies ranging from superconductivity at high temperatures to topologically ordered states of matter with fractionalized excitations, numerous materials have recently come under intense scrutiny for the realization of quantum phases on demand 1 . A major impediment to the systematic study of correlated electron physics is the comparative lack of tunability of conventional chemical compounds. The main means of control are pressure, strain and doping. This restricted range of control knobs severely limits experimental exploration of the wider phase diagram, essential to guide the discovery of novel quantum phases and to controlled studies of quantum criticality, unconventional phase transitions, or realizations of exotic topological states of matter that are of fundamental interest and could have applications in emerging quantum technologies such as quantum computing and simulation. Another impediment lies in the difficulty of solving these paradigmatic quantum Hamiltonians via present computational approaches. As an alternative, in the early 1980s, Feynman proposed quantum simulation as a paradigm to turn the problem upside down 2 : implementing clean models using real physical systems, to 'simulate' the ground state, thermodynamic behaviour or non-equilibrium dynamics of such models.
If the implementation is well controlled, the results can be used to disentangle the more complex materials phenomena for which the original model was set up, as well as to guide the way towards stabilizing and controlling new and exotic phases of matter in real quantum materials. Much progress has been achieved via studying ultracold gases of bosonic and fermionic atoms, which can be confined in optical lattices to realize lattice models of condensed-matter physics in a controlled manner 3 . While these systems are highly controllable and can faithfully realize certain quantum Hamiltonians, it remains an experimental challenge to engineer Hamiltonians with tunable long-range interactions and especially to access low-temperature emergent long-range ordered phases, such as the intriguing d -wave state of the t – t ′ repulsive Hubbard model 4 . This Review identifies moiré heterostructures of van der Waals materials as an alternative and complementary condensed-matter approach to realize a large set of highly controllable quantum Hamiltonians. While such systems do not typically afford the high level of isolation and precise tunability that cold atomic systems do, the large degree of control available through readily accessible experimental knobs allows for the exploration of phase diagrams of vast and novel sets of many-body Hamiltonians in and out of equilibrium and at very low temperatures. In particular, we review how different choices of two-dimensional (2D) heterostructure compositions at different twist angles realize a wide range of effective low-energy electronic tight-binding Hamiltonians with various geometries, dimensionalities, frustration and spin–orbit coupling, as well as interactions beyond the Hubbard limit, including partially screened and unscreened Coulomb interactions. This allows for flexible interaction engineering as well as control of band structure and topology. Realizing model quantum Hamiltonians in van der Waals heterostructures Moiré heterostructures of van der Waals materials exploit quantum interference to quench the effective kinetic energy scales, which permits both driving the system at low energies to an interaction-dominated regime, and drastically enhancing anisotropies or reducing symmetries of the monolayer. Conceptually, the physical mechanism can be understood straightforwardly in analogy to classical moiré interference patterns (Fig. 1 , centre): if two identical patterns with a discrete translation symmetry in two dimensions are superimposed at an angle, the combined motif retains a periodicity at much longer wavelengths for an infinite set of small rational angles. Fig. 1: Moiré quantum simulator. Stacking sheets of van der Waals heterostructures with a twist gives rise to a plethora of effective low-energy Hamiltonians. Realizations that have been studied in the literature are given in the inner circle of the figure with the corresponding lattice, band structure and potential phases of matter that can be realized. For the red band structure, the dashed lines indicate the flat bands additionally present in the case of twisted bilayer MoS 2 compared with TBG. In the outer circle, we outline a perspective on possible future directions of moiré quantum simulators with many intriguing developments to be expected. k , k x , k y and k z denote momentum and momentum in the x , y and z directions, respectively. E and E k refer to band energy.
Full size image Similarly, a heterostructure of two or more monolayers with commensurate lattice constants stacked at a twist retains spatial periodicity for small commensurate angles, albeit with a much larger moiré unit cell. This moiré superlattice can span many unit cells and defines a new crystal structure with a mini Brillouin zone. Within this moiré Brillouin zone, the folded bands lead to a multitude of crossings, which are subsequently split via interlayer hybridization 5 . Crucially, lattice relaxation enhances these avoided crossings and typically segregates sets of almost-dispersionless bands on millielectronvolt energy scales 6 , which can be addressed individually via gate-tuning the chemical potential. A complementary view of the physics of a moiré band follows from considering its real-space description. A localized electron in a moiré band can be viewed as occupying a virtual 'moiré orbital' that can extend over hundreds of atoms, effectively spreading over the entire moiré unit cell. Importantly, the shape of moiré orbitals changes with the size of the moiré unit cell, so that the electronic interactions become a function of the twist angle as well, decreasing at a slower rate than the kinetic energy scales for decreasing angle. This permits tuning a priori weakly interacting electronic systems into regimes dominated by electronic correlations in a controlled fashion. The above phenomenology was first demonstrated in materials with hexagonal structure and multiple atomic orbitals. The moiré band width for twisted bilayers of graphene does not behave monotonically as a function of twist angle but exhibits a series of 'magic' angles 5 at which the moiré bands near charge neutrality become almost dispersionless over a large fraction of the moiré Brillouin zone (the twist-angle dependence of the underlying moiré period is quantified in the sketch below). More broadly, as reductions of the electronic band width equivalently enhance the role of competing energy scales, this microscopic structural knob to selectively quench the kinetic energy scales in 2D materials opens up possibilities to engineer heterostructures with properties that are dominated by many-body electronic interactions, ultrastrong spin–orbit interactions from heavier elements, electron–lattice interactions or electron–photon interactions, permitting realization of a wide range of novel correlated or topological phenomena. Here we provide a perspective on how different universal classes of quantum many-body Hamiltonians can be engineered and controlled using moiré heterostructures. The moiré quantum simulator Table 1 summarizes some of the realizable lattice structures, associated model Hamiltonians and featured quantum phases that can be achieved within a 2D twisted van der Waals heterostructure framework. In our discussion, we will concentrate first on those systems realized or proposed in the literature, summarized in the inner circle of Fig. 1 . Then, we will provide a perspective on future potential control and simulation possibilities, shown in the outer circle of Fig. 1 . Table 1 Overview of possible quantum Hamiltonians, materials realizations and phases in twisted moiré heterostructures Full size table Honeycomb lattice So far, most experimental and theoretical research has concentrated on twisted bilayer graphene (TBG) (see refs. 7 , 8 , and a review 9 for an overview on TBG), with extensions to triple- and quadruple-layer generalizations 10 , 11 , 12 , 13 , 14 , 15 , 16 .
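The slow growth of the moiré unit cell as the twist angle decreases can be made quantitative with a few lines of Python. The small-angle relation for two identical lattices, lambda(theta) = a / (2 sin(theta/2)), is standard; the graphene lattice constant and the sampled angles below are illustrative choices rather than parameters taken from any one of the cited studies.

import numpy as np

a = 0.246  # graphene lattice constant in nm

def moire_period(theta_deg, a=a):
    """Moire period of two identical lattices twisted by theta_deg degrees."""
    return a / (2.0 * np.sin(np.radians(theta_deg) / 2.0))

for theta in (5.0, 2.0, 1.1, 0.5):
    print(f"theta = {theta:4.1f} deg -> moire period ~ {moire_period(theta):5.1f} nm")

# Near the first magic angle (~1.1 deg) the period is ~13 nm, so a single
# moire cell already contains on the order of 10^4 carbon atoms.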
TBG realizes a moiré superstructure with the relevant low-energy degrees of freedom effectively again forming a honeycomb lattice, although at millielectronvolt energy scales and imbued with an additional orbital degree of freedom reflecting the two layers of the original system 17 , 18 . With superconductivity 7 , 19 , 20 , 21 , correlated insulators 8 , 20 , 21 and the quantum anomalous Hall (QAH) effect 22 , 23 already realized experimentally, an intriguing possibility indicated by recent theoretical analyses is that repulsive interactions favour topological d + i d instead of nodal d -wave pairing 24 , 25 , 26 . Such topological superconducting states are potentially relevant to topological quantum computing 27 , and can be harnessed and controlled via tailored laser pulses 28 . Expanding the catalogue of engineered lattice structures, we next consider twisting two monolayers of MoS 2 (refs. 29 , 30 , 31 ). In these structures, families of flat bands emerge at the band edges, the first of which realizes a single-orbital Hubbard model while the second realizes a more exotic, strongly asymmetric p x – p y Hubbard model 29 , 30 , both on the honeycomb lattice (see inner circle of Fig. 1 ). The latter emerges at small twist angle, where the overall band width of these families of flat bands is tuned into the millielectronvolt regime. Due to destructive interference within the strongly asymmetric p x – p y Hubbard model itself (meaning that the two orbitals have very different hopping amplitudes), one finds bands with even lower dispersion attached at the bottom and top of these flat bands. The asymmetric p x – p y Hubbard model is therefore another lattice Hamiltonian that can effectively be engineered in this solid-state framework, and also controlled by external parameters, such as strain or fields. This could provide interesting insights into highly degenerate systems and the interplay of magnetism with these types of band structure. Similar phenomena are expected to emerge in other twisted transition metal dichalcogenide (TMD) homobilayers, such as twisted bilayer MoSe 2 and WS 2 . Triangular lattices A natural extension of TBG involves two twisted sheets of boron nitride (BN), which is often called white graphene for its structural similarity and large bandgap. When twisting two BN sheets, the consequences are even more dramatic: while in graphene the quenching of the kinetic energy scales relies on being close to a set of magic angles, in BN three families of flat bands emerge, where the flattening of the bands is a monotonic function of twist angle. The lattice structure that is engineered at low enough twist angle is effectively triangular 32 , 33 . A similar engineered lattice structure is obtained by twisting two sheets of tungsten diselenide (WSe 2 ) (ref. 34 ) or other TMD heterostructures 35 , 36 , 37 . In all these cases, a superstructure emerges that effectively confines the low-energy physics onto a triangular lattice. In addition, the heavier transition metal elements in such TMD systems impose substantial spin–orbit coupling onto the low-energy degrees of freedom, which can in turn induce interesting topological properties. Fundamentally, triangular lattices are prototypical model systems to study the role of geometric frustration on electronic and magnetic orders.
Using twisted materials one might thus effectively simulate the properties of these theoretically challenging models, relevant also for understanding many other condensed-matter systems, such as bis(ethylenedithio)tetrathiafulvalene organic compounds and their rich phase diagrams 38 , 39 , 40 . Debated correlated states of matter that can be realized this way range from 120° Néel ordered states to exotic, topological forms of superconductivity and quantum chiral spin liquids 41 . Indeed, in the case of TMD homobilayers, features consistent with superconducting phases (with further experimental evidence for a truly superconducting state still highly desirable) have already been seen in close proximity to the insulating phase 34 . An advantage of these materials is that the angular precision needed for the emergence of new quantum phases is not as stringent as in graphene, so samples are easier to produce. However, the quality of the constituent materials tends to be poorer, and might need to be improved to allow a well-controlled realization of the model Hamiltonians and phenomena described in Table 1 and Fig. 1 . Rectangular lattices While all of the examples above build on stacking monolayers with a three-fold (120°) rotationally symmetric lattice structure, this is certainly not the only interesting scenario that one can consider 42 . To illustrate this, we next turn our attention to twisted bilayer germanium selenide (GeSe) 43 or, similarly, germanium sulfide (GeS). Monolayer GeSe has a rectangular unit cell with a 180° rotational symmetry. When two sheets of GeSe are twisted, a similar quenching of the kinetic energy scales can be found as in the cases discussed above; however, the effective lattice structure realized by such a system is a square lattice at angles of 8°–15°, while at lower angles the hopping in one of the principal directions is reduced much more strongly than the corresponding hopping along the second principal direction. At about 6°, the effective lattice system realized is entirely dispersionless along one direction, while it has a residual (though small, due to the band narrowing at small angles) dispersion along the second one (see inner circle of Fig. 1 ). Thus, twist angle provides a control knob to tune the system from effectively 2D to effectively 1D low-energy physics, providing new insights into this fundamental and poorly understood dimensional crossover. One-dimensional quantum systems exhibit strong collective effects. Bilayer GeSe at a small twist angle should thus provide an experimental example to study this emergent quantum regime, where Luttinger liquid, Mott insulating and bond-ordered wave states dominate the stage. Furthermore, sizable spin–orbit coupling of the low-energy bands emerges at larger twist angles, suggesting that interesting spin-momentum locked states with possibly nematic properties could be engineered 44 , 45 . Experimental realizations and readout of the moiré quantum simulator Next we outline some experimental progress on the realization and readout of controlled moiré structures. Experimental realizations Moiré structures on layered materials have been observed by microscopy for at least the past three decades. In 3D crystals, they can appear naturally during growth due to the presence of screw dislocations, or can be produced by mechanical damage of the crystal structure.
These structures remained mostly curiosities until the advent of graphene, when it was realized that moiré structures are an effective way of tuning band structure (strong peaks in the local density of states were observed when twisting graphene on bulk graphite) 46 . Such moiré patterns are commonly seen in chemical vapour deposition-grown multilayer graphene and TMD films, and many experiments to understand the properties of these patterns were performed in the past decade 47 , 48 . In these samples, the twist angles generated were largely uncontrolled and often showed disorder on submicrometre lengthscales, limiting the general applicability of these samples to other experiments. The first steps on the road to making moiré heterostructures were taken by mechanically stacking two monolayers 49 of graphene and hexagonal BN (hBN) to make high-quality graphene layers for transport measurements. Due to the lattice mismatch between graphene and hBN, such structures naturally featured moiré textures 47 . The most successful technique to realize twisted bilayer systems is the 'tear and stack' method and its variants 50 . This technique essentially separates a single monolayer into two pieces, which are then naturally aligned with each other. One of the pieces is picked up, rotated by the desired angle and stacked with the second piece to create the desired moiré structure. This simple and elegant technique has so far proved to be the most productive method, but has its limitations in the reproducibility and uniformity of the moiré patterns produced (other techniques, such as pushing with an atomic force microscopy tip 51 or introducing heterostrain 52 , have been developed). The development of techniques to measure the wavelength and uniformity of the moiré patterns in situ is a promising step in the search to create the high-quality and precise moiré structures that we are discussing here. The in-plane bonding in 2D materials can be dominantly covalent or alternatively ionic. For the former, exfoliating and twisting sheets of materials should work well, while for the latter it will be difficult to do the same. In addition, there is a large class of materials whose bonding is intermediate between ionic and covalent. For those, exfoliation and twisting techniques need to be tested experimentally. Readout of the ground-state properties Electrical transport has been the technique of choice to probe the ground-state quantum properties of moiré structures. The first experiments in this direction were the measurements of quantum Hall effects in graphene on hBN, which revealed the influence of the moiré potential via the formation of Hofstadter butterfly patterns in the magnetotransport 53 . A dramatic development was the discovery of collective quantum phases in TBG near the so-called magic angle 5 . After the original discovery of insulating 8 and superconducting phases 7 , new orbitally magnetized phases with a quantized Hall effect, indicating the development of topological phases that spontaneously break time-reversal symmetry, have also been observed 22 , 20 . Similar collective and topological phases have been seen for ABC graphene on hBN 11 , where the moiré pattern between the ABC graphene and hBN provides additional flattening of a band structure that already displays a peak in the density of states at charge neutrality. More recently, the methods applied to graphene have also been applied to TMD-based heterostructures.
The first experiments indicating the development of interaction-driven insulating phases have been performed in both TMD heterobilayers and homobilayers 34 , 36 . Spectroscopic readout Moiré structures present two great advantages for experiments compared with traditional solid-state materials. The first is that the size of the ‘lattice’ is expanded from the atomic scale to several nanometres, allowing optical and near-field techniques to directly probe the local spectroscopic properties of the system, in close analogy with the quantum gas microscope invented for cold atomic gases. The second advantage is their intrinsic 2D nature, which makes it possible to measure the properties of the entire sample with surface-sensitive spectroscopic probes such as scanning tunnelling microscopy or angle-resolved photoemission. This does come with disadvantages, however, as many traditional spectroscopic probes applied to solid-state systems (neutron spectroscopy, for example) produce signals that are proportional to the volume of the sample being measured, and are thus not easily applicable to these materials. Scanning tunnelling microscopy has been applied extensively to study the band structure in twisted graphene structures 46 , 47 , 54 , 55 , 56 , to visualize internal edge states that exist at domain boundaries in small-angle twisted bilayer and four-layer graphene systems, and to relate these states to the topological properties of the individual domains 57 , 58 . Touching on many-body correlation effects, prominent results obtained so far are a measurement of the insulating gap at half-filling, evidence that this insulating state does not break translational symmetry but does break rotational symmetry near half-filling 46 , 54 , 56 and the discovery of an insulating phase at charge neutrality in four-layer rhombohedral graphene produced by twisting two bilayers 58 . Furthermore, van der Waals heterostructures display nearly all optical phenomena found in solids, including plasmonic oscillations of free electrons characteristic of metals, light emission/lasing and excitons encountered in semiconductors, and intense phonon resonances typical of insulators 59 , 60 . Therefore, optical spectroscopies allow one to reconstruct the role of moiré superlattice potentials in the electronic structure of twisted multilayers as well as in their electron and lattice dynamics. For example, infrared spectroscopy on a graphene/BN heterostructure shows that the moiré superlattice potential is dominated by a pseudospin-mixing component analogous to a spatially varying pseudomagnetic field 61 , and photoluminescence has uncovered the systematic evolution of light-emitting properties associated with interlayer excitons in TMD heterobilayers 62 . Conventional diffraction-limited optics suffers from a spatial resolution restricted by the wavelength of light and therefore provides only area-averaged information on the electromagnetic response of van der Waals heterostructures. Modern nano-optical methods 63 , 64 allow one to overcome this limitation and enable optical enquiry at lengthscales commensurate with moiré periodicities. In addition, nano-optical methods give access to spectroscopy and imaging of hybrid light–matter modes known as polaritons 59 . Moiré structures also enable highly accurate moiré metrology of energy landscapes in 2D van der Waals structures 65 . 
We conclude this subsection by noting that functionalized atomic force microscope tips enable multiple nanoscale contrasts in addition to nano-infrared studies, including nano-Raman, magnetic force microscopy, Kelvin probe and piezo force microscopy 66 . All these different methods provide complementary insight carrying separate messages about the studied phenomena 65 . Co-located visualization of contrasts obtained with multiple functionalized scanning probes offers an opportunity for multimessenger imaging of the interplay between electronic, magnetic and lattice effects at the length scale of moiré domains 67 . What comes next? An important question is which lattice structures might be realizable using moiré physics in the future. Some promising future avenues of research are illustrated in Table 1 and in the outer circle of Fig. 1 — while extensive, these lists are not exhaustive and we are confident that other low-energy model realizations will emerge as more heterostructure ‘building blocks’ become experimentally viable. We concentrate on a few highly sought-after candidates next. Quantum LEGO with correlated or topological monolayers Beyond utilizing a priori weakly correlated monolayers as overviewed above, a promising approach concerns constructing twisted heterostructures out of monolayers that by themselves already exhibit correlation-driven quantum phases. We present a short perspective of prototype 2D materials that should be studied in the future. (1) In niobium diselenide (NbSe 2 ), superconducting behaviour can be found even when exfoliated down to the monolayer (at temperatures of the order of 3 K) 68 . Two such sheets might provide a solid-state-based inroad into the question of how the superconducting state emerges within a conventional Bardeen–Cooper–Schrieffer picture and how this state crosses over to a strong coupling regime. (2) In the 2D van der Waals material chromium( iii ) iodide (CrI 3 ), quantum magnetic properties can be realized that depend on the number of layers and relative stacking order 69 . Twisted heterostructures made of CrI 3 could permit accessing antiferromagnetically and ferromagnetically aligned domains in a flexible fashion using moiré lattice engineering 70 , 71 . (3) RuCl 3 realizes a Kitaev–Heisenberg magnet with stripe order 72 . Layering on a substrate at a twist, or twisting layers of RuCl 3 , can provide a method to tune the magnetic interactions and putatively tune the system towards a quantum spin liquid, as well as study the effects of doping a proximal spin liquid in a controlled manner. (4) TaSe 2 realizes a correlation-driven insulating state that is characterized by a star-of-David charge density wave (CDW) configuration at low temperatures that yields a rather large unit cell (Fig. 1 , outer circle) 73 . Again, this correlation-driven phase is present even in the untwisted case. Twisting two of these structures atop each other raises the interesting questions of whether and how superstructures made of these larger star-of-David CDWs can be realized and how they manifest in the electronic properties of the material. (5) Beyond van der Waals materials, the moiré approach to modifying electronic structures is also at play in correlated oxides. These are epitaxial structures and therefore cannot be rotated easily. However, a moiré pattern in films of the prototypical magnetoresistive oxide La 0.67 Sr 0.33 MnO 3 can be epitaxially grown on LaAlO 3 substrates. 
The net effect is that both the electronic conductivity and the ferromagnetism are modulated by this moiré engineering over mesoscopic scales 67 . This opens up yet another route in the combinatory space of chemical compositions to use in moiré systems. (6) 1T′ WTe 2 monolayers realize a 2D topological insulator due to a spin–orbit-driven topological band inversion 74 . While layering two WTe 2 monolayers would naively lead to a Z 2 trivial insulating state, the superlattice potential at finite twist angles can in principle induce a series of stacking-controlled topological transitions. A remarkable consequence of such a device would be the possibility to study the combination of non-trivial band topology and electronic correlations in moiré bands with quenched dispersion, potentially giving rise to experimentally realizable manifestations of fractional Chern insulators or fractional topological insulator phases. Highly controllable geometrically frustrated lattices One long-sought research goal concerns the controlled realization of kagome or Kitaev quantum magnets 75 . In conventional candidate systems such as herbertsmithite, RuCl 3 , iridates or metal–organic complexes, disorder-free realizations of the low-energy magnetic Hamiltonian or suppression of unwanted and longer-ranged spin exchange interactions are major challenges. A moiré realization of the kagome lattice or of strongly spin–orbit-coupled multi-orbital moiré models in the strong-interaction limit would permit tuning magnetic interactions in a controlled manner, possibly allowing a realization of a quantum spin liquid state. This would elevate moiré heterostructures to a platform for simulating an experimentally elusive phase of matter in a highly controllable setting, with a potentially worthwhile starting candidate being twisted layers of RuCl 3 . Another direction recently put forward for the realization of an effective kagome lattice is twisted MoS 2 , which was discussed above in the context of the asymmetric p x – p y Hubbard model. Beyond this, at very small angles, the next family of bands, which can be accessed by further doping the system, would effectively realize a multi-orbital generalization of a kagome lattice 29 , 30 . Future studies need to address whether such small angles and large degrees of doping can be realized experimentally. Proximity effects and spin–orbit coupling One key advantage of 2D systems is the possibility to induce effects via proximity to an engineered substrate. To avoid chemical modification of the moiré system by the interface through strong chemical bonds, which would destroy the low-energy moiré bands of the original heterostructure, one can employ 2D heterostacking. Many interesting effects are expected in flat-band systems when superconducting or strongly spin–orbit-coupled substrates are used to imprint some of their properties on the 2D system under scrutiny. For instance, large induced spin–orbit coupling 76 might reveal novel topological superconducting states with applications to quantum computing. Similarly, proximitization with superconducting substrates could permit a controlled study of Majorana wires and lattices 77 , 78 , 79 , for example, by using a quasi-1D moiré material such as twisted GeSe. An important corollary of exploiting a moiré superstructure would be the delocalization of bound Majorana states on scales larger than the moiré unit cell, promoting their length scale from ångströms to nanometres. 
Potential topological phenomena include the demonstration of a high-temperature quantum anomalous Hall (QAH) effect and Majorana modes as well as antiferromagnetic topological insulators, which open a route towards topological spintronics. From 2D to 3D Another intriguing future research direction concerns extending the idea of simulating quantum models from the 1D or 2D (discussed so far) to the 3D realm using moiré systems. To this end, a new exfoliation technique was recently demonstrated that yields large-area atomically thin layers that can be stacked in any desired order and orientation to generate a whole new class of artificial materials, including stacking thick twisted materials 80 . We next discuss two examples explicitly, with obvious intriguing extensions possible; both stacking protocols are also summarized compactly below. (1) Alternating twist. When stacking many layers atop each other, we define the twist angle of each layer with respect to the bottom one. Stacking 2D materials at alternating twist then means that the twist angle alternates between two values: zero and α . If α is small, in-plane localized sites will emerge from the moiré interference physics, and these sites will lie directly atop each other in the out-of-plane direction. The in-plane flat-band states will become dispersive along the out-of-plane direction. On slight doping, the system thus becomes dominantly a 1D metal with small residual in-plane coupling from the residual dispersion of the flat bands, similar to what has been discussed for bulk TaS 2 (ref. 81 ). Such a 1D metal is susceptible to instabilities (such as CDW, spin density wave (SDW) or excitonic instabilities) as a 1D system is always perfectly nested. This will gap out the system along the out-of-plane direction and elevate the relevance of the small dispersion along the in-plane direction. A similar mechanism was recently observed to give rise to the 3D quantum Hall effect in bulk ZrTe 5 (ref. 82 ) and similar fascinating surprises might be expected in the alternating twist configuration we propose here. (2) Continuous twist. Another intriguing idea is to stack layer by layer with a constant twist angle α 0 between adjacent layers, such that the above-defined twist angle with respect to the bottom layer increases linearly, \(\alpha = \alpha_{0}\, i \bmod 360^{\circ}\) , where i is the layer index. Calculations to characterize such a system are challenging, as the unit cell with respect to the in-plane direction grows very large, or a quasi-crystal structure with no unit cell emerges when more and more layers are added. Here the generalized Bloch band theory developed in ref. 83 might be useful. However, the general physics of emerging flat bands should transfer to the third direction, providing an inroad to realize 3D quantum models with a tunable ratio of potential to kinetic energy scales, as discussed above for the 2D case. Moiré heterostructures out of equilibrium So far, we have addressed equilibrium properties of twisted systems and we have illustrated the rich phenomena that one can realize via engineering a variety of low-energy quantum many-body Hamiltonians. With increasingly sensitive pump–probe experiments available, another enticing possibility is to drive such systems out of equilibrium via short laser pulses or via embedding the material in a quantum cavity 84 , 85 , 86 , 87 , 88 . 
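As an aside before turning to driven systems: the two stacking protocols described under 'From 2D to 3D' above can be restated in compact form (our notation, not taken from the source, with i = 0, 1, …, N − 1 labelling the layers from the bottom):

\[ \alpha_{i}^{\mathrm{alt}} = \frac{\alpha}{2}\left[1-(-1)^{i}\right] \in \{0,\alpha\}, \qquad \alpha_{i}^{\mathrm{cont}} = \alpha_{0}\, i \bmod 360^{\circ}. \]

For example, with \(\alpha_{0} = 1.1^{\circ}\) the continuous protocol gives the angle sequence \(0^{\circ}, 1.1^{\circ}, 2.2^{\circ}, 3.3^{\circ}, \dots\), while the alternating protocol simply toggles between \(0\) and \(\alpha\) from layer to layer.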
As the moiré potential lowers the relevant kinetic energy scales to millielectronvolts, these lie within reach of the typical energy scales of light–matter interactions, permitting outsized transient modifications of the electronic dynamics. Combining control of the twist angle with non-equilibrium perturbations thus holds great promise to offer a new platform with opportunities to realize novel correlated non-equilibrium states of matter. (1) Floquet engineering. In TBG, the sensitivity of correlated phases to millielectronvolt changes of the dispersion of the moiré bands as a function of deviation from the magic angle suggests that rich effects can already be achieved via dressing the electronic bands with weak light fields. In the presence of an optical pump, a photon-mediated renormalization of the hopping matrix elements between moiré unit cells can selectively modify the low-energy band structure and flatten its dispersion. (2) Cavity engineering. Analogously, cavity engineering has recently attracted great interest as a means to exploit strong light–matter coupling to change electronic phases. The comparatively low energy scales of moiré bands and correlated phases in TBG suggest that coupling to a cavity photon mode can induce an outsized effect on the interacting electronic state. One such application is in photo-induced and cavity-induced superconductivity. New predictions for quantum phenomena in these cavities include the enhancement of superconductivity using the coupling between the vacuum fluctuations of the cavity and plasmonic modes of the environment 87 , 88 , 89 , 90 , and the study of strongly correlated Bose–Fermi mixtures of degenerate electrons and dipolar excitons 91 , 92 , 93 , 94 , 95 . (3) Optically generated synthetic magnetic fields. The strong spin–orbit coupling in TMDs offers new possibilities to generate opto-magnetic quantum properties using dynamic fields. For instance, one can optically generate real magnetic fields from spin–valley polarization in TMD heterobilayers. Under circularly polarized excitation at the bandgap of one of the TMD monolayers in a heterobilayer, the initially large spin–orbit splitting can be further increased, leading to a high degree of spin–valley polarization of the hole. This, along with the lack of spin–valley polarization of the electron, will result in a net spin polarization and thus a real magnetic moment with the potential to induce novel non-equilibrium quantum phases. Excitons at will Moiré platforms also provide a basic infrastructure for achieving different interlayer exciton phases and excitonic lattices (Fig. 1 ). In particular, bilayers of TMDs were identified (even without twist) as intriguing candidates because the long lifetime of these charge-separated interlayer excitons 96 , 97 , 98 should facilitate their condensation 99 . Adding a twist angle allows the realization of moiré exciton lattices 62 , 100 , 101 , 102 , 103 , which can feature topological exciton bands that support chiral excitonic edge states 104 . The high degree of control permits study of the localized–delocalized crossover of excitons in moiré lattices as well as a strain-induced crossover from 2D to 1D exciton physics 105 . Minimizing the kinetic energy to increase interactions is key to achieving robust condensation. For example, twisted WSe 2 homobilayers have tunable flat electronic bands over a range of twist angles (see above), with alternatives including experimentally accessible heterobilayers as well as strain and pressure engineering. 
Two such band-engineered TMDs separated by a uniform spacer layer (hBN) 101 will probably provide the best avenue to achieve exciton condensation in zero applied field and to then tune the properties of the condensate by gating, field, pressure and strain. Future strategies to enhance the tendency of excitons to condense, beyond the use of twist angle to achieve flat bands and thus more prominent interactions, could also include the use of cavity structures to enhance the light–matter interaction (Fig. 1 ). Here, pump–probe nano-optical methods can both address and interrogate distinct regions in moiré heterostructures 67 . Skyrmions Skyrmions — vortex-like magnetic configurations — can arise as a spontaneously formed hexagonal lattice in the ground state of chiral magnets, or as quasiparticle excitations on top of a ferromagnetic ground state. The energetic stability of skyrmions is a consequence of an anti-symmetric exchange (also called Dzyaloshinskii–Moriya) interaction, and they are interesting due to their non-trivial topological structure, which protects them against perturbations and renders them highly attractive as potential memory devices. One candidate system for which skyrmions have been predicted is monolayer CrI 3 (ref. 106 ). In the realm of twisted magnetic van der Waals heterostructures, moiré skyrmion lattices are favoured due to the long-period moiré pattern, which can be used to tune the periodicity and shape of the magnetization texture. In addition, an external magnetic field can be used to switch the vorticity and location of skyrmions. Conclusion Here we have identified future directions of research in the blossoming field of twisted van der Waals heterostructures, with a particular emphasis on the high degree of control achievable in these materials. Controlled engineering of quantum Hamiltonians as tunable low-energy theories of these structures is at the forefront of this fascinating research field. This idea might in the future be viewed as a condensed-matter realization of a quantum simulator, which allows direct access to fundamental properties of quantum many-body systems, such as the delicate emergent physics of novel collective phases of matter, which have been experimentally elusive so far. In this sense, twisted van der Waals heterostructures might be the long-sought-after remedy to access the wider phase space, which should include all kinds of fundamentally and technologically relevant phases of matter. One of the major experimental challenges to be overcome next is to devise readout techniques for the magnetic properties of these systems, as well as to push the precision of angle-resolved photoemission spectroscopy into the regime where it can reliably resolve the physics within the small Brillouin zone of twisted heterostructures. Both of these advances would provide cornerstones allowing a more complete interrogation of the low-energy physics of the system with regard to its interacting properties. Considering the tremendous combinatorial space of van der Waals materials to choose from, extensions to higher dimensions and the vibrant field of non-equilibrium control, including Floquet engineering and twistronics applied to these novel platforms, it is likely that only a small fraction of the potential phenomena achievable in twisted van der Waals heterostructures has been experimentally realized so far. This newly emerging research field is likely to yield many more exciting developments. 
| Researchers from the MPSD, the RWTH Aachen University and the Flatiron Institute, Columbia University (both in the U.S.) and part of the Max Planck—New York City Center for Non-equilibrium Quantum Phenomena have provided a fresh perspective on the potential of twisted van der Waals materials for realizing novel and elusive states of matter and providing a unique materials-based quantum simulation platform. The team has laid out an exciting road map for the vibrant field of twisted van der Waals materials, reviewing the rapid progress made recently and adding their unique vision of intriguing avenues of future research to be explored. In their work, now published in Nature Physics, the team demonstrates that the future of twisted van der Waals materials is bright—both in terms of fundamental science as well as their potential applications in materials science and quantum information technologies. Twisted van der Waals materials consist of stacked layers of two-dimensional systems at a relative rotation angle. They are shown to provide a versatile toolbox to realize many highly sought-after quantum model systems, which exhibit exotic and—so far—elusive phases of matter of potential relevance to materials science and quantum technologies. Hence those twisted van der Waals materials define a materials-based quantum simulator. This is particularly exciting as twisted van der Waals materials offer tantalizing opportunities to provide clean systems which are extremely well controlled by the twist angle, stacking sequence, substrate or gating techniques. However, this article demonstrates that the potential of twisted van der Waals materials multiplies even further when brought together with other blossoming fields of condensed matter and quantum technology research. The potential for extensions is vast, with even richer physics to be uncovered, for example, by investigating the interaction of non-equilibrium states or cavities with the already promising van der Waals materials. "One of the strengths of these novel materials is that they provide an unprecedented level of tunability," says lead author Dante Kennes. "This allows us to effectively realize many of the different lattice quantum models, which have taken center stage in the last few decades in the field of condensed matter research." Thus, exotic phases of matter, such as the elusive spin-liquid phase or systems with topological properties favorable to quantum technologies, might be on the horizon. "We are really at the beginning of an intriguing journey to explore the vast amount of chemical and physical combinations in the field," co-author Lede Xian reports about his recent work. "Many more exciting discoveries, some of which we outline in our work, will follow," adds fellow author Martin Claassen. Twisted van der Waals materials offer a wonderful and formidable experimental challenge due to their complexity. They also open up completely new routes of manipulating and controlling quantum materials, according to Ángel Rubio, the director of the MPSD's Theory Department: "These materials let us unravel new phases of matter and promise a wide range of exciting applications in quantum technologies. We have just touched the surface of those possibilities in this perspective article and we can expect many more surprises in this exciting research journey." 
As this work illustrates, the ratio between questions answered and interesting questions to be addressed in the future makes this a particularly promising and vibrant research field, adds Ángel Rubio: "This plethora of exciting new phenomena can be enlarged further when considering driving those systems out of equilibrium or embedding them in optical cavities. Many interesting phenomena are waiting to be unraveled ahead of us!" | 10.1038/s41567-020-01154-3 |
Space | Earth's atmosphere may be source of some lunar water | Gunther Kletetschka et al, Distribution of water phase near the poles of the Moon from gravity aspects, Scientific Reports (2022). DOI: 10.1038/s41598-022-08305-x Journal information: Scientific Reports | https://dx.doi.org/10.1038/s41598-022-08305-x | https://phys.org/news/2022-04-earth-atmosphere-source-lunar.html | Abstract Our Moon periodically moves through the magnetic tail of the Earth, which contains terrestrial ions of hydrogen and oxygen. We report a possible density contrast that could be consistent with the presence of a water phase of potentially terrestrial origin. Using novel gravity aspects (descriptors) derived from harmonic potential coefficients of the gravity field of the Moon, we discovered gravity strike angle anomalies that point to water phase locations in the polar regions of the Moon. Our analysis suggests that impact cratering processes were responsible for specific pore-space networks that were subsequently filled with the water phase, filling volumes of permafrost in the lunar subsurface. In this work, we suggest the accumulation of up to ~ 3000 km 3 of terrestrial water phase (Earth's atmospheric escape) now filling the pore space of the regolith, a portion of which is distributed along impact zones of the polar regions of the Moon. These unique locations serve as potential resource utilization sites for future landing exploration and habitats (e.g., NASA Artemis Plan objectives). Introduction NASA's return to the lunar surface (i.e. Artemis Plan) requires mission planning for near-surface water phase resources 1 . The lunar surface regolith is coupled with ion-magnetohydrodynamical processes that may have contributed to the deposition of the water phase on its surface. This is because the lunar environment is exposed, for five days of each lunar orbit around the Earth, to a magnetic field tail extending all the way from the Earth's geomagnetic field 2 . Recent measurements from the Kaguya lunar orbiter (JAXA) have revealed significant numbers of oxygen ions during the time when the Moon was inside the geomagnetic field 2 . This provided the necessary evidence that the oxygen ions were not coming from the intrinsic solar wind. This is because the high temperature of the solar corona allows for only multi-charged oxygen ions (O5+, O6+, O7+ and O8+), as observed by the Advanced Composition Explorer (ACE) satellite, with O+ in negligible amounts 3 . However, 1–10 keV O+ ions were observed to populate the Moon's environment during the transition through the plasma sheet that originated from the Earth's ionosphere 2 , 4 . The terrestrial ion flux density during the Moon's passage through the magnetotail was estimated at between 2.1 × 10 4 cm −2 s −1 and 2.6 × 10 4 cm −2 s −1 2 , 5 . This process of Earth–lunar geoionic O+ accumulation fluctuated over the history of the Earth for several reasons. First (1), when the geomagnetic field may not have been well developed, or was even absent, in ancient times 6 , the O+ accumulation was more intense. Applying this possibility from the Earth to Mars, where a global magnetic field is absent 7 , 8 , an ionospheric plasma sheet develops there in the absence of the global magnetic field and transfers ions, mostly oxygen, down the plasma sheet, as observed by the Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft 9 . The second source (2) of ion enhancement has to do with the increasing distance between the Earth and the Moon over their history 10 . 
The third source (3) of ion variability is an episodic increase of solar activity observed by the MAVEN mission 11 . Evidence of such ion transfer mechanisms 9 supports a hypothesis that a part of the terrestrial atmosphere that was lost in the past is now likely preserved within the surface of the lunar polar regolith. This hypothesis is supported by observations of nitrogen and noble-gas isotopes 12 . The recent advancements in understanding Earth's atmospheric escape warrant a new analysis of water phase deposits on the Moon. We apply our novel gravity aspects (descriptors) derived from harmonic potential coefficients of the gravity field of the Moon, considering the influence of these novel ionic transfer mechanisms on the depositional history of water phase formation, from which gravity strike anomalies appear. While the loss of ions from the atmospheres of terrestrial planets depends on processes at the atmosphere–surface interface, there are significant loss mechanisms occurring in the upper atmosphere. For example, the ionosphere's loss of ions due to space plasma acceleration can dynamically control the evolution of the atmosphere 5 . The geomagnetic field creates an obstacle to the solar wind, preventing a direct abrasion of terrestrial neutral ions (oxygen, hydrogen) via thermal and non-thermal activities 13 . The four main escape pathways of terrestrial ions are (i) magnetopause escape, (ii) magnetopause ring current dayside escape, (iii) anti-sunward flow escape, and (iv) lobe/mantle escape (Fig. 1 ). When ions are escaping via these pathways 5 , 14 , 15 , they can be returned towards the Earth and be added back to the atmosphere 5 . This occurs when the collision-less path distance becomes small enough that plasma on this length scale dissipates and the geomagnetic field lines and plasma field lines become reconnected 16 . Independently, ion outflow (consisting mostly of H+ and O+) from the uppermost terrestrial polar ionosphere has time-dependent typical thermal energies of 0.3 eV 16 . This terrestrial polar wind outflow is on the order of 10 25 ions s −1 during solar maximum activity, with average flow rates across the solar minimum and maximum of ~ 5 × 10 24 ions s −1 , caused by an electric field disturbance due to the charge difference between the ions and the faster-moving electrons 16 . Prior work has shown, during the dark-sky conditions of lunar eclipses, that these energetic H+/O+ ionic species strike the Moon's surface when it is engulfed during magnetotail transit. This observation suggested that omnipresent exospheric sources are augmented by these variable plasma impact sources in the solar wind reconnection with Earth's magnetotail 17 . There is one order of magnitude difference between the estimates of the polar outflow and the four ion escape routes 5 . Here we consider that this unaccounted-for loss of ions may account for the volume of ions deposited on the Moon. Figure 1 Sketch showing a three-dimensional cutaway of Earth's magnetosphere. The blue and white arrows are motion pathways of ions (for details see Seki et al. 5 ) illustrating the mechanism for oxygen/hydrogen ion transfer to the Moon. The red dotted line with the arrow shows the motion of the Moon into the magnetospheric tail. Escape locations into interplanetary space are marked i, ii, iii and iv. The image was drawn using Microsoft PowerPoint for Mac Version 16.55. 
We note that solar wind plasma separation of the electrons from the heavy ions is possible when the neutral plasma is obscured by an airless obstacle (asteroids, or the whole Moon in this case) 18 . Plasma expands into the void behind the obstacle and creates an electron-rich (ion-free) volume behind it 19 . This mechanism has been considered for lunar dust levitation 20 . The electrons are lighter and therefore diffuse more efficiently into the shadow behind the Moon, while the heavier positive ions continue over greater distances along the Moon's shadow boundary (an analogy with the electrostatic signature present at permanently shadowed craters 18 ). Note that laboratory experiments have tried to model electrostatic accumulation due to shadow obstacle processes 21 . They considered shadows as a cause of electrostatic lofting of dust on airless bodies, a process that plays a role in surface evolution. This may well relate to unexplained observations of dust ponding on asteroids. These dust ponds are accumulations of dust formed in craters on 433 Eros 22 , 23 , 24 . Similar observations were made on comet 67P 25 . Even at Saturn's icy moon Atlas, the unusually smooth surface may have been modified by electrostatic fields. Thus, there must be a distance at which the Moon's electrostatically charged tail may contribute to the reconnection events in the Earth's geomagnetic tail (Fig. S10 ). Each time the electrostatically charged tail from the Moon enters the geomagnetic plasma sheet, it may interfere with any ions present in the plasma sheet and modify their trajectories. Such a disturbance in particle motion may result in the collision-less path distance of the plasma becoming significantly smaller, and this could significantly increase the probability of magnetic flux lines' reconnection. Earth's atmospheric escape warrants important consideration as a potential life support pathway on the Moon 26 , 27 , 28 . We calculate a rough estimate of the volumetric water phase that has likely been deposited and transformed the Moon's regolith over millennia: during the intersecting 5-day interaction of Earth's magnetosphere with the Moon each month, if we assume that only 1% of the average ion flow per second (5 × 10 24 ions s −1 ) of the O+/water molecules is deposited into the Moon's regolith, this volumetric time transfer equates to 1 million years × 24 h × 5 days × 12 months × 5 × 10 22 water molecules × 2.7 × 10 −29 m 3 , that is, ~ 1 km 3 per million years (MY). (This calculation assumed a 0.3 nm size of the water molecule, thus 0.027 nm 3 = 2.7 × 10 −29 m 3 of water phase volume per molecule.) If we assume this process has been occurring since the period of the Late Heavy Bombardment ~ 3.5 billion years (BY) ago, based on the above calculations, we estimate the accumulation of ~ 3500 km 3 of terrestrial water phase, filling the pore space of the regolith, for which novel gravity strike signals would appear. This amount may differ by a factor of two to three, due to the smaller Moon–Earth distance in the past 10 , but not by more than an order of magnitude. For example, this volume of ~ 3500 km 3 would be similar to the ~ 5400 km 3 volume of Lake Vostok in Antarctica 29 . Ionic flow is oriented both away from and towards the Earth, due to energetic escape processes of Earth's atmosphere. When the Moon enters and is exposed to Earth's ionic plasma sheet, it may capture ions and account for the missing portion of the ionic budget 5 . Impact gardening would then distribute these deposits across the whole Moon's surface. 
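For clarity, the bookkeeping behind this order-of-magnitude estimate can be written out in dimensionally consistent form (a restatement using only the constants quoted above; \(\Phi\) denotes the assumed deposited flux):

\[ V_{\mathrm{mol}} \approx (0.3\ \mathrm{nm})^{3} = 0.027\ \mathrm{nm}^{3} = 2.7 \times 10^{-29}\ \mathrm{m}^{3}, \]

\[ t_{\mathrm{exp}} \approx 5\ \tfrac{\mathrm{days}}{\mathrm{month}} \times 12\ \tfrac{\mathrm{months}}{\mathrm{yr}} \times 86{,}400\ \tfrac{\mathrm{s}}{\mathrm{day}} \approx 5.2 \times 10^{6}\ \mathrm{s\ yr^{-1}}, \]

\[ V_{\mathrm{MY}} \approx \Phi \times V_{\mathrm{mol}} \times t_{\mathrm{exp}} \times 10^{6}\ \mathrm{yr}, \qquad \Phi = 5 \times 10^{22}\ \mathrm{molecules\ s^{-1}}, \]

with the text evaluating this accumulation at \(\sim\!1\ \mathrm{km}^{3}\) per MY and hence \(\sim\!3500\ \mathrm{km}^{3}\) over the \(\sim\!3.5 \times 10^{3}\) MY elapsed since the Late Heavy Bombardment.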
Most primitive basalts from the lunar surface contain considerable amounts of H 2 O 30 , 31 . However, solar radiation would evaporate the surface deposits and redistribute them towards the polar regions 32 , 33 . Larger amounts of such deposits would form permafrost in the near-surface polar regions of the Moon, filling the pore space of the lunar regolith and, over time, compressing into the liquid phase boundary at depth 34 . Based on the pressure variation with depth on the Moon, the regolith overpressure reaches 1 atmosphere at a depth of ~ 30 m 35 . The temperature near the Moon's poles is about 100 K and the regolith there has an increasing thermal gradient with depth of about 0.1–0.5 K/m 36 . From this gradient we estimate a depth between 100 and 2000 m where the pressure and temperature would allow water in pores to exist in the liquid state (warming from ~ 100 K to the ~ 273 K melting point requires a temperature increase of roughly 170 K, that is, roughly 350 m at 0.5 K/m and roughly 1700 m at 0.1 K/m). We have prior experience with detecting subsurface water phase deposits on Earth. For that detection, we used gravity aspects to estimate potential locations of underground deposits of water phase and gas in Sahara Desert regions 37 . Here we apply these methods and locate potential deposits of water phase in the polar regions of the Moon. Methodological theory on data We use a novel method for detecting underground density anomalies via their anomalous gravity signal. This method was developed for the study of various geological structures on the Earth: impact craters, subglacial volcanoes, lake basins, paleolakes or petroleum deposition sites globally. Notably, it has also been extended to the impact craters, maria and catenae on the Moon 38 , 39 . Typical gravity investigations employ the traditional gravity anomalies or second radial derivatives of the disturbing gravitational potential. This work uses a wider set of functions of the disturbing gravitational potential, which we call " gravity aspects ". These comprise the gravity anomalies Δg , the Marussi tensor ( Γ ) of the second derivatives of the disturbing potential ( T ij ), with the second radial component T zz , two of the three gravity invariants ( I j ), their specific ratio ( I ), the strike angles ( θ ) and the virtual deformations ( vd ). Our prior use revealed their diverse sensitivity to the underground density contrasts due to causative bodies; these aspects are computed to a high degree and order with sufficient numerical stability. It appears that such an application extracts finer and more complete detail from satellite gravity measurements. The theory of this approach was outlined in the book of Klokočník et al. 39 . Further examples and the specific application of this method to the Moon are in the Supplementary material . From Eq. ( 1 ) (see Supplementary material ) we compute and plot the strike angles θ at the location of interest (here the Moon's polar regions). Alignment indicates aligned porosity, filled with material of contrasting density (water phase/vacuum). The aligned θ regions suggest water phase deposits. The first step of this detection method is a transformation: we use the difference in the gravity anomalies between the assumed deepest and shallowest locations of a body, and then the difference in the vertical direction, allowing a maximum estimate of where the object can be located and how large/deep it might be. Several iterations are required to achieve this step. 
The second step uses topography data and the geographic positions of topographical sites, leading eventually to a fine-tuning of the level, extent and shape of the water-phase-enriched objects. Gravity data The input data here are harmonic potential coefficients of the spherical harmonic expansion to degree and order d/o of the perturbational gravitational potential (Stokes parameters). A set of these coefficients defines a global static gravitational field. We use the best models available based on satellite records 40 , 41 . This defines the limits of d/o = 1200 and 1500 for the models GRGM1200A 40 and GL1500E 41 , respectively, with a practically useful limit of d/o = 600 (recommended by the authors of these models themselves). Application of these models allows for a theoretical ground resolution of ~ 10 km (consistent with the half-wavelength rule πR/(d/o) ≈ π × 1737 km/600 ≈ 9 km). The precision is about 10 mGal. For this paper we chose the GRGM1200A model (after tests concerning the degradation of gravity aspects for different harmonic degree and order, and/or the appearance of any artifacts). Surface topography data These are taken from a new lunar digital elevation model from measurements of LOLA (Lunar Orbiter Laser Altimeter), an instrument on the payload of the Lunar Reconnaissance Orbiter (LRO) spacecraft 42 , 43 . The height is given relative to the Moon's reference radius of 1737.4 km. The nominal precision of the LOLA altimeter is ~ 10 cm. Results We computed and plotted the gravity aspects, namely the strike angles θ and the second radial derivatives T zz near the lunar poles—see Fig. 2 A,B and Supplementary material . In Fig. 2 A, we used three color modes to express the degree of alignment: yellow and green as misaligned and red as highly aligned. The threshold for contrasting alignments (i.e., aligned vs non-aligned) was chosen to be the most conservative, so that only areas with high Comb Factor (CF) values (0.99–1.00) are shown in Fig. 2 A (see Supplement for the CF definition). To demonstrate the robustness, we show more than one way of plotting these strike angle parameters, represented by the CF (Figs. 3 and 4 , and Supplementary material ). This shows how we outline areas of weaker alignment of strike angles (CF < 0.97) for each respective hemisphere. Figures S1 – S8 show variations of strike angles for both polar regions of the Moon and for ratio I < 0.3 (representing 2D-like structures) and I < 0.9 (3D-like structures). The calculations resulted in areas of high-degree alignment of the CF. Hence, we outlined these areas by red vs green and yellow symbols in Fig. 2 A. Note the areas of significant alignment of the strike angles near the north and south poles of the Moon (Fig. 2 A). Figure 2 Geophysics, topography and geological subunits of the Moon's polar regions. ( A ) Gravity comb factor (CF) plotted for ratio I < 0.9 (see Eq. ( 2 ) and definition of CF in Supplementary information ). The color legend identifies the degree of alignment of strike angles (also Figs. 3 , 4 ); ( B ) Gravity second derivative T zz [E] along with topography [m]. 
Three areas outlined by red lines and labelled with NSR S1, NSR S3, and NSR S4 are regions identified with potential water-rich permafrost based on neutron suppression observations 34 ; ( C ) Geological map units, where, in the north pole panel, significant craters are labelled by yellow letters as: A—Rozhdestvenskiy U, B—Nansen F, C—Hermite A, D—Porges A, E—Porges B, F—Nansen C, G—Peary, H—Rozhdestvenskiy W, I—Porges C, J—Porges D, K—Porges E, L—Porges F, M—McCoy A, and craters in the south pole panel as: A—Haworth, B—Faustini, C—Wiechert J, D—Idel’son L, E—DeGerlache, F—Shackleton, G—Sverdrup, H—Slater, I—Wiechert P, J—Kocher, K—Wiechert U, and L—Nobile; Data in plots ( A ), ( B ) were produced by a combination of MATLAB, Surfer7.0 and Microsoft PowerPoint. ( C ) is a PowerPoint-modified Unified Geology Map of the Moon 44 . Figure 3 Stability of the comb factor (CF) within the area of the north pole of the Moon. Dimensions are in meters. ( A ) Left panel corresponds to the strike angle plot and its CF for the north pole in Fig. 2 A. The CF between 0.99 and 1.00 is in red color while the lower CF is in blue color. ( B ) Right panel shows CF between 0.97 and 1.00 in red color while the lower CF is in blue (the blue symbol is larger for contrast clarity). Both plots are strike angles for ratio I < 0.9 (see Eq. 2 ), sensitive to weakness directions of the rocks in subsurface structures near the north pole of the Moon. Data were plotted using MATLAB software. Figure 4 Stability of the comb factor (CF) within the area of the south pole of the Moon. Dimensions are in meters. ( A ) Left panel corresponds to the strike angle plot and its CF for the south pole in Fig. 2 A. The CF between 0.99 and 1.00 is in red color while the lower CF is in blue color. ( B ) Right panel shows CF between 0.97 and 1.00 in red color while the lower CF is in blue (the blue symbol is larger for contrast clarity). The black letter P shows the significant extent of CF-detected pores in the area of the Aitken basin. Both plots are strike angles for ratio I < 0.9 (see Eq. 2 ), sensitive to weakness directions of the rocks in subsurface structures near the south pole of the Moon. Data were plotted using MATLAB software. Note that Fig. 2 B shows how T zz , the second derivative of the disturbing gravitational potential, is distributed near the polar regions of the Moon. The values are spread between − 300 E and 300 E near the north pole and from − 600 E to 1100 E near the south pole. In the north pole region, the low values are indicative of a compressional regime; thus, near-surface rocks are denser and spread near the inner ring of the two large impact structures in the upper left corner of Fig. 2 B (upper north pole panel). Note that the minimum values of T zz residing inside smaller impact craters are expressed both in the topographic and the geological unit mapping (Fig. 2 B,C, upper panel). Observing three such craters (Rozhdestvenskiy, Hermite and Byrd), we find the topographic and geological unit maps to share similarities; the T zz parameter shows that the Byrd crater has missing low values within its inner rim structure (see Fig. S9 for delta g). This may relate to a larger difference in delta g, indicating a variation in the compression force inside the craters. The Byrd crater is more gravity-equilibrated than the Rozhdestvenskiy and Hermite craters. The large topographical relief also seems to generate low values in T zz in three other smaller craters, labelled A, B and C in the north pole geology map. 
To check the deeper extent of these impact structures we compare T zz with ∆g (Fig. 2 and Fig. S9 ) and see a much larger contrast in ∆g for the Rozhdestvenskiy and B craters, followed by the Hermite crater. Positive topographic relief consistently shows larger ∆g values. Similarly, we obtained values for T zz and ∆g near the south pole (Fig. 2 B, lower panel and Fig. S9 ). Note the larger span of T zz values and the association of T zz minima with the interiors of impact structures, and of topographic heights with positive T zz values (Fig. 2 B). Discussion The strike angles θ derived in this work show sensitivity to rock anisotropy 17 , 18 , 19 , 20 . The weakness of these fractured rocks and the corresponding anisotropy point to the directions of the strike angle θ and thus towards likely locations of volatile phase accumulations, including the water phase . Our results in Fig. 2 show patchiness of the locations where the water phase may have accumulated, and this is consistent with the recent molecular water detection by SOFIA 45 , where a 6 μm emission feature was observed at high lunar latitudes, interpreted as patchy water phase enrichment of as much as 100–400 μg/g of regolith 45 . The Moon's surface has been significantly modified by impact cratering, an energetic process triggering structural extensions during the conversion of high impact kinetic energy into heat. The resulting explosive fractures splinter the regolith rock units and create topographical indentations, producing a gravitational instability. The rocks in the crater vicinity were dominantly compressed in a direction away from the crater center 46 . Such compression creates elongation and fracturing in the perpendicular direction (parallel to the crater perimeter) 47 , 48 , 49 . Many thrust faults are positioned in the subsurface near the perimeter of the crater, due to the impact energy forcing the removal of a significant volume of material from the inside of the crater 50 . With age, post-crater collapse removes or diminishes this topographical and gravitational relief 51 . The post-crater collapse orientation (towards the middle) creates and magnifies the network of faults that are parallel to the craters' perimeters 51 . Thus, both impact and post-impact processes enhance the anisotropy of rocks along the perimeter of the crater, by forming networks of fault systems containing planar weaknesses that include planar pores oriented along the perimeter direction. On Earth, these pore spaces (i.e., porosity) often become filled with fluids such as water and/or oil. Similarly, we use the gravity expression of the planar weaknesses of the Moon's impact craters. For this goal, we apply a method of gravity detection of the planar networks of weaknesses above preexisting water-filled basins, which has allowed the identification of paleolakes on Earth in what are now arid regions 37 , 52 . It appears that paleolakes hidden under thick layers in the Great Sand Sea of Western and Southern Egypt generate a special gravity aspect signature that we interpret to be related to the structural anisotropy of the sediment basin 37 , 53 . Here we apply the same approach with the hypothesis that the structural weaknesses of impact craters can be recognized in the gravity aspects. In addition, the gravity aspects, namely the strike angle(s), can determine where the pore space is likely to be filled with significant amounts of the water phase. 
Once the fractures are filled with the water phase, which is more mobile than the host rock system, they become subject to significant pore forces and a subsequent anisotropy of the stress field detectable from the gravity potential aspects 53 . The Moon's polar regions contain significant amounts of water phase 33 , 34 . Our estimation (in the Introduction), considering the aforementioned reasoning, allows for theoretical volumes exceeding several thousand cubic kilometers of water phase. Such volume estimates require a water phase enclosed in the pore-containing rock units in the polar regions, which may cause structural extensions and fracturing. The network of pore fractures surrounding these impact craters is likely to develop due to regular impact crater structural degradation processes 54 and thus would be the most reasonable location for water phase deposits in the polar regions of the Moon. The aligned θ regions tend to be near the impact craters and the angles θ are parallel to the craters' perimeters. For example, near the north pole we identified several highly aligned regions along the perimeter of the Rozhdestvenskiy crater and several such areas around the Hermite crater. This is a significant indication that these two craters contain significant pore space structures that weaken the rock underneath the surface and create gravity strike anomalies. The identified regions of highly aligned strike angles (Fig. 2 A,B) are thus likely to contain a significant amount of pore-filling water phase at subsurface pressure depths, and a solid phase near the surface (e.g., permafrost). We observe analogous aligned strike angle detections of water-phase-filled pore space in rocks near the south pole of the Moon. Our newly introduced Comb Factor (CF) parameter has anomalous values around the crater perimeters (Fig. 2 A), pinpointing a significant potential for the presence of large volumes of pore space filled with the water phase, thus generating anisotropy in the rocks' stress field. The porosity filled with the water phase is the structural weakness that is being sensed by strike angle detection near the impact crater boundaries. This is consistent with the formation of circular fault systems around the impact craters, where the porosity would form preferentially along the faults formed by the impact process 54 . Note that, near the south pole, there is a large region of aligned CF values (labelled "P" in Fig. 4 ), away from the conspicuously visible major impact craters or topographical relief areas. This "P" region is, however, on the boundary of a much larger impact structure, the South Pole–Aitken basin, which significantly modified the early Moon's crust 55 ; such a mega-impact event may have significantly modified the fracture and related pore space network that could subsequently be filled with the water phase, weakening the already impact-modified rocks even further. The water phase has been shown to exist in the polar regions of the Moon 45 , 50 , 56 . The polar water phase has been proposed to come from the Sun 57 . However, it was most recently shown that the oxygen ions entering the Moon's vicinity must originate from Earth 58 when it is exposed to the magnetotail of Earth 2 , 4 , 5 , 17 . Earth's atmospheric escape would provide the supply of the water molecules to the Moon. The hydrogen cations and oxygen anions are free to react with each other, due to their electronegativity differences, when they get close. 
Then, the chemical bonds re-form to make water molecules while additional energy is released, which propagates the exothermic reaction further. Earth's atmospheric escape effect 14 , 26 serves as a potential source of the unaccounted-for ions 5 escaping from the Earth into the plasma sheet, and when the Moon passes through this sheet, a certain number of oxygen and hydrogen ions are trapped on the Moon. These proposed regions, both in the southern and northern hemispheres, may contain significant subsurface water phase deposits. While these regions were detected from the gravity field aspects, their detection is partially supported by the epithermal neutron emission from LEND observations onboard LRO, where two out of three Neutron Suppression Regions in this area (NSR S1 and NSR S4) partly overlap with the regions of potentially volumetrically significant water phase detected (Mitrofanov et al., 2012, e.g., their Figure 5) 34 . Detection of porosity through strike angles (Fig. 2 ) would reach depths of tens of kilometers and would not detect porosity distributed only a few meters under the surface, whereas the neutron detection is sensitive only to the very top surface of the Moon (0.5 m) and thereby does not detect any water in the deeper subsurface. Despite this difference in detection depth, it is remarkable that there is an overlap between these two contrasting water phase detection approaches. At this point, our results coupled with those from prior observations of neutron suppression regions (NSRs) both point to areas with potentially significant volumes of water phase deposits. While NSR locations are not directly tied to the lunar impact structures, the alignment of strike angles clusters near the rims of large craters, and this suggests that strike angle alignment analysis is a straightforward way to remotely detect significant amounts of water phase on planets. Our gravity aspect method revealed that the strike angles are related to the T zz gravity aspect, the second radial derivative of the disturbing gravitational potential, and are sensitive to the gravity signature of craters and faulting. However, we note that the gravity field can also be analyzed for horizontal gravity gradients 59 , which can detect potential tabular dike features hidden under a surface damaged by impact cratering. Such features provide evidence of extensional lithospheric processes of pre-Nectarian to Nectarian age 59 . While our strike angle method was applied in areas very close to the poles (latitudes 85°–90°), our vertical T zz gravity aspect was most sensitive to vertical gradients, and we believe our method was not sufficiently sensitive to the linear tabular dikes detected, for example, by Andrews-Hanna et al. 59 . In addition, the tabular dikes mentioned were mostly distributed further from the poles, whereas our analysis was restricted to latitudes of 85°–90°. Conclusion The origin of the water phase on the Moon has not yet been uniquely identified. In this work, we apply recent observations that part of the Earth's atmosphere may have been transported to the Moon via the novel hydro-magnetospheric plasma tail, exposing the Moon's surface to terrestrial H 2 O. We proposed that the Moon's interaction with the geomagnetic tail allows the capture of terrestrial ions that combine into water molecules and form water phase deposits on the Moon. Crater impacts, forming structural extensions and fractures, provide suitable pore space networks for hosting large subsurface liquid water reservoirs. 
A back-of-the-envelope calculation suggested that several thousand cubic kilometers of water phase may have accumulated this way in the subsurface of the Moon over the past 3.5 billion years. We applied a new method for presenting the gravity aspects signature on the Moon, involving functions (descriptors) derived from the gravitational potential and modeling of the gravity field. This new method is sensitive to the structural anisotropy of the Moon's regolith, especially near impact craters. The gravity aspects method detected specific regions near the north and south poles, which point to the likelihood of a significant volume of water-phase-filled pore space. It seems likely that the regions identified in this work hold significant amounts of water phase, suitable for the resource utilization plans of future planned missions (Artemis 1 ). | Hydrogen and oxygen ions escaping from Earth's upper atmosphere and combining on the moon could be one of the sources of the known lunar water and ice, according to new research by University of Alaska Fairbanks Geophysical Institute scientists. The work led by UAF Geophysical Institute associate research professor Gunther Kletetschka adds to a growing body of research about water at the moon's north and south poles. Finding water is key to NASA's Artemis project, the planned long-term human presence on the moon. NASA plans to send humans back to the moon this decade. "As NASA's Artemis team plans to build a base camp on the moon's south pole, the water ions that originated many eons ago on Earth can be used in the astronauts' life support system," Kletetschka said. The new research estimates the moon's polar regions could hold up to 3,500 cubic kilometers—840 cubic miles—or more of surface permafrost or subsurface liquid water created from ions that escaped Earth's atmosphere. That's a volume comparable to North America's Lake Huron, the world's eighth-largest lake. Researchers based that total on the lowest volume model calculation—1% of Earth's atmospheric escape reaching the moon. A majority of the lunar water is generally believed to have been deposited by asteroids and comets that collided with the moon. Most of it arrived during a period known as the Late Heavy Bombardment. In that period, about 3.5 billion years ago, when the solar system was about 1 billion years old, it is argued that the early inner planets and Earth's moon sustained unusually heavy impacts from asteroids. Scientists also hypothesize that the solar wind is a source. The solar wind carries oxygen and hydrogen ions, which may have combined and been deposited on the moon as water molecules. Now there's an additional way to explain how water accumulates on the moon. The research was published March 16 in the journal Scientific Reports in a paper authored by Kletetschka and co-authored by Ph.D. student Nicholas Hasson of the Geophysical Institute and UAF Water and Environmental Research Center at the Institute for Northern Engineering. Several colleagues from the Czech Republic are also among the co-authors. This diagram from the research paper authored by Gunther Kletetschka shows the moon approaching Earth's magnetotail. Credit: Gunther Kletetschka Kletetschka and his colleagues suggest hydrogen and oxygen ions are driven into the moon when it passes through the tail of the Earth's magnetosphere, which it does on five days of the moon's monthly trip around the planet. 
The magnetosphere is the teardrop-shaped bubble created by Earth's magnetic field that shields the planet from much of the continual stream of charged solar particles. Recent measurements from multiple space agencies—NASA, European Space Agency, Japan Aerospace Exploration Agency and Indian Space Research Organization—revealed significant numbers of water-forming ions present during the moon's transit through this part of the magnetosphere. These ions have slowly accumulated since the Late Heavy Bombardment. The presence of the moon in the magnetosphere's tail, called the magnetotail, temporarily affects some of Earth's magnetic field lines—those that are broken and which simply trail off into space for many thousands of miles. Not all of Earth's field lines are attached to the planet at both ends; some have only one attachment point. Think of each of these as a thread tethered to a pole on a windy day. The moon's presence in the magnetotail causes some of these broken field lines to reconnect with their opposing broken counterpart. When that happens, hydrogen and oxygen ions that had escaped Earth rush to those reconnected field lines and are accelerated back toward Earth. The paper's authors suggest many of those returning ions hit the passing moon, which has no magnetosphere of its own to repel them. "It is like the moon is in the shower—a shower of water ions coming back to Earth, falling on the moon's surface," Kletetschka said. The ions then combine to form the lunar permafrost. Some of that, through geologic and other processes such as asteroid impacts, is driven below the surface, where it can become liquid water. The research team used gravitational data from NASA's Lunar Reconnaissance Orbiter to study polar regions along with several major lunar craters. Anomalies in underground measurements at impact craters indicate locations of fractured rock conducive to containing liquid water or ice. Gravity measurements at those subsurface locations suggest the presence of ice or liquid water, the research paper reads. The latest research builds on work published in December 2020 by four of the new paper's authors, including Kletetschka. | 10.1038/s41598-022-08305-x |
Medicine | New study reveals shared genetic markers underlying substance use disorders | Nora Volkow et al, Multivariate genome-wide association meta-analysis of over 1 million subjects identifies loci underlying multiple substance use disorders, Nature Mental Health (2023). DOI: 10.1038/s44220-023-00034-y Journal information: Nature Mental Health | https://dx.doi.org/10.1038/s44220-023-00034-y | https://medicalxpress.com/news/2023-03-reveals-genetic-markers-underlying-substance.html | Abstract Genetic liability to substance use disorders can be parsed into loci that confer general or substance-specific addiction risk. We report a multivariate genome-wide association meta-analysis that disaggregates general and substance-specific loci from published summary statistics of problematic alcohol use, problematic tobacco use, cannabis use disorder and opioid use disorder in a sample of 1,025,550 individuals of European descent and 92,630 individuals of African descent. Nineteen independent single-nucleotide polymorphisms were genome-wide significant ( P < 5 × 10 –8 ) for the general addiction risk factor (addiction-rf), which showed high polygenicity. Across ancestries, PDE4B was significant (among other genes), suggesting dopamine regulation as a cross-substance vulnerability. An addiction-rf polygenic risk score was associated with substance use disorders, psychopathologies, somatic conditions and environments associated with the onset of addictions. Substance-specific loci (9 for alcohol, 32 for tobacco, 5 for cannabis and 1 for opioids) included metabolic and receptor genes. These findings provide insight into genetic risk loci for substance use disorders that could be leveraged as treatment targets. Main The lives lost, impacts on individuals and families, and socioeconomic costs attributable to substance use reflect a growing public health crisis 1 . For example, in the United States, 13.5% of deaths among young adults 2 are attributable to alcohol, smoking is the leading risk factor for mortality in males 3 , and the odds of dying by opioid overdose are greater than those of dying in a motor vehicle crash 4 . Despite the large impact of substance use and substance use disorders 5 , there is limited knowledge of the molecular genetic underpinnings of addiction broadly. Individual substance use disorders (SUDs) are heritable ( h 2 , ~50–60%) and highly polygenic 6 , 7 . Recent large-scale genome-wide association studies (GWASs) have identified loci associated with problematic drinking 8 , 9 , alcohol use disorder (AUD) 10 , 11 , cigarettes smoked per day 12 , nicotine dependence 13 , 14 , cannabis use disorder (CUD) 15 and opioid use disorder (OUD) 16 . Echoing evidence from twin and family studies 17 , these GWASs show that the genetic architecture of SUDs is characterized by a high degree of commonality 18 , that is, a general addiction genetic factor likely conveys vulnerability to multiple SUDs. Even after accounting for genetic correlations with non-problematic substance use and with other psychiatrically relevant traits and disorders, there is considerable variance that is unique to this general risk for addiction, indicating that a liability to addiction reflects more than just the combined genetic liability to substance use and psychopathology 18 , 19 , 20 , 21 . 
We conducted a multivariate GWAS of the largest available discovery GWASs of SUDs, including problematic alcohol use (PAU: N = 435,563; continuous) 8 , problematic tobacco use (PTU: N = 270,120; continuous) 12 , 13 , 18 , CUD ( N = 384,032, cases = 14,080) 15 and OUD ( N = 79,729, cases = 10,544) 16 . First, we partitioned single-nucleotide polymorphism (SNP) effects into five sources of variation: (1) a general addiction risk factor (referred to as the addiction-rf), and risks specific to (2) alcohol, (3) nicotine, (4) cannabis and (5) opioids. Second, we identified biological pathways underlying risk for these five SUD phenotypes using gene, expression quantitative trait locus (eQTL) and pathway enrichment analyses. Third, we examined whether currently available medications could potentially be repurposed to treat SUDs 22 . Fourth, we assessed the association of a polygenic risk score (PRS) derived from the addiction-rf with general SUD phenotypes in an independent case/control sample. Fifth, we examined the extent to which genetic liability to the addiction-rf is shared with other phenotypes (for example, physical and mental health outcomes). Sixth, we tested whether the addiction-rf PRS was associated with medical diagnoses derived from electronic health records (EHRs) and with behavioural phenotypes in largely substance-naive 9–10-year-old children. Results Addiction risk factor in European ancestry GWAS As in our prior study 18 , we estimated a single factor model, scaled the variance of the addiction-rf to 1 and allowed loadings to be estimated freely. The single factor model that loaded on OUD ( N effective = 30,443), PAU ( N effective = 300,789), PTU ( N effective = 270,120) and CUD ( N effective = 46,351) fit the data well ( χ 2 (1) = 0.017, P = 0.896, comparative fit index (CFI) = 1, standardized root mean square residual (SRMR) = 0.002). The latent factor loaded significantly on all indicators (standardized loadings on OUD = 0.83, PAU = 0.58, PTU = 0.36, CUD = 0.93; see Supplementary Fig. 1 for the full model). The addiction-rf was associated with 19 independent ( r 2 < 0.1) genome-wide significant (GWS) SNPs that mapped to 17 genomic risk loci (Fig. 1 ; Table 1 ; Supplementary Table 1 for lead SNPs and Supplementary Table 2 for genomic risk loci). The most significant SNP ( rs6589386 , P = 2.9 × 10 –12 ) was intergenic, but closest to DRD2 , which was GWS in gene-based analyses ( P = 7.9 × 10 –12 ; Supplementary Table 3 ). Further, rs6589386 was an eQTL for DRD2 in the cerebellum, and Hi-C analyses (in FUMA) 23 revealed that the variant made chromatin contact with the promoter of the gene (Supplementary Fig. 2 ). Fig. 1: Manhattan plot of the addiction-rf GWAS results. The dotted line represents genome-wide significance at 5 × 10 –8 . Each SNP peak is annotated with the closest mapped gene from FUMA (Table 1 ). Not all SNPs in the credible set are included in Table 1 ; they are shown in Supplementary Table 4 . Significance is set at the genome-wide Bonferroni-corrected threshold for a two-sided test ( P < 5 × 10 –8 ). Table 1 Lead GWAS significant variants Gene-based analyses identified 42 significantly associated genes (Supplementary Table 3 ); the most significant signals were FTO ( P = 1.86 × 10 –13 ), DRD2 ( P = 7.9 × 10 –12 ) and PDE4B ( P = 9.63 × 10 –11 ).
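The single-factor model underlying these results is specified in GenomicSEM with lavaan-style syntax. The snippet below is a schematic sketch rather than the authors' script: the trait names are those used above, and `covstruc` is assumed to be the genetic covariance structure returned by GenomicSEM's `ldsc()` on munged summary statistics.

```r
# Schematic GenomicSEM specification of the addiction-rf common factor
# (free loadings, factor variance fixed to 1, as described above).
library(GenomicSEM)

model <- "
  ADDICTION =~ NA*PAU + PTU + CUD + OUD   # NA* frees the first loading
  ADDICTION ~~ 1*ADDICTION                # identify by unit factor variance
"

fit <- usermodel(covstruc = covstruc, estimation = "DWLS", model = model)
fit$modelfit   # chi-square, CFI and SRMR comparable to those reported
```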
Fine-mapping identified 123 GWS SNPs (of 660 non-independent GWS SNPs) in credible sets as potential causal SNPs based on the posterior probability of inclusion (Supplementary Table 4 ). Mapping the lead independent SNPs in the credible sets to their nearest gene based on a posterior probability of 1, the following SNPs showed the strongest causal potential: rs1937455 ( PDE4B ), rs3739095 ( GTF3C2 ), rs6718128 ( ZNF512 ), rs4143308 ( RP11-89K21.1 ), rs4953152 ( SIX3 ), rs41335055 ( CTD-2026C7.1 ), rs2678900 ( VRK2 ), rs7620024 ( TCTA ), rs283412 ( ADH1C ), rs901406 ( BANK1 ), rs359590 ( RABEPK ), rs10083370 ( LINC00637 ), rs1477196 ( FTO ) and rs291699 ( CDK5RAP1 ) (Supplementary Table 4 and Fig. 1 ). Pathway analysis of gene-based results revealed several significant gene ontology (GO) terms, including double-stranded DNA binding ( P Bonferroni = 0.005), sequence-specific double-stranded DNA binding ( P Bonferroni = 0.01), regulation of nervous system development (two terms: P Bonferroni = 0.011–0.037) and positive regulation of transcription by RNA polymerase ( P Bonferroni = 0.038) (Supplementary Table 6 ). Substance-specific risk in European ancestry GWAS To identify loci associated with only a single substance (that is, not pleiotropic), we used ASSET (Association Analysis Based on Subsets 24 ; one-sided P < 5 × 10 –8 ). SNPs that were associated at GWS with only an individual substance (PAU, PTU, CUD or OUD) were considered substance-specific (for example, CHRNA5 SNPs were only associated with PTU; Supplementary Fig. 3b–e ). Problematic alcohol use ASSET analyses revealed nine independent SNPs in six loci associated specifically with PAU (Supplementary Fig. 3b ; Supplementary Tables 7 and 8 ). As expected 8 , the top signal was rs1229984 in ADH1B ( P = 4.11 × 10 –68 ). Gene-based enrichment analyses also implicated the zinc-dependent alcohol dehydrogenase activity pathway ( P Bonferroni = 0.035; Supplementary Table 9 ). Problematic tobacco use PTU was specifically associated with 32 independent SNPs in 12 loci (Supplementary Fig. 3c ; Supplementary Tables 10 and 11 ). The top SNP was rs10519203 ( P = 5.12 × 10 –267 ) in HYKK , which is also a robust eQTL for CHRNA5 ; the signal is likely driven by the CHRNA5 missense variant, rs16969968 ( P = 2.79 × 10 –175 ), which has previously been linked to tobacco use ( r 2 = 0.87) 12 . Several other SNPs were closest to genes encoding nicotinic acetylcholine receptors, including CHRNA4, CHRNB4, CHRNB3 and CHRNB2 (Supplementary Table 10 ). Gene-based enrichment implicated multiple pathways and gene sets related to nicotinic acetylcholine receptors (Supplementary Table 12 ). Specific dopamine-related associations were also noted (for example, PDE1C: rs215600 , P = 2.35 × 10 –18 ; DBH: rs1108581 , P = 1.00 × 10 –14 ). Cannabis use disorder ASSET identified five substance-specific loci for CUD (Supplementary Tables 13 and 14 ), with lead signals at rs11913634 ( FAM19A5 ; P = 1.20 × 10 –15 ), rs8104317 ( CACNA1A ; P = 1.17 × 10 –13 ), rs72818514 ( ATP10B ; P = 1.57 × 10 –9 ), rs11715758 ( GNAI2/HYAL3 ; P = 4.84 × 10 –8 ; Supplementary Fig. 3d ) and rs11778040 ( P = 1.77 × 10 –9 ; annotated to the GULOP pseudogene). rs11778040 also mapped to the previously discovered signal for CUD near CHRNA2 and EPHX2 15 and is an eQTL for CHRNA2 , EPHX2 and CCDC25 . CUD-specific signals showed no significant gene-based enrichment.
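The subset-search logic used for these substance-specific assignments can be illustrated without the ASSET package itself. The toy R sketch below, with invented effect estimates for a single SNP, meta-analyses every non-empty subset of the four traits and picks the subset that maximizes the pooled |z|; ASSET additionally corrects the resulting P value for the subset search and for sample overlap, which this sketch omits.

```r
# Toy illustration of the subset-based pooling idea behind ASSET.
# beta/se are hypothetical single-SNP estimates for the four traits.
beta <- c(PAU = 0.031, PTU = 0.028, CUD = 0.035, OUD = 0.002)
se   <- c(PAU = 0.005, PTU = 0.006, CUD = 0.009, OUD = 0.012)

traits  <- names(beta)
subsets <- unlist(lapply(seq_along(traits),
                         function(k) combn(traits, k, simplify = FALSE)),
                  recursive = FALSE)

# Inverse-variance fixed-effect meta-analysis within each candidate subset
pooled <- t(sapply(subsets, function(s) {
  w <- 1 / se[s]^2
  b <- sum(w * beta[s]) / sum(w)
  c(beta = b, z = b * sqrt(sum(w)))
}))

best <- which.max(abs(pooled[, "z"]))
subsets[[best]]   # strongest subset, here {PAU, PTU, CUD}: OUD is excluded
pooled[best, ]
```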
Opioid use disorder The only significant substance-specific signal for OUD was the well-characterized 16 mu opioid receptor ( OPRM1 ) SNP, rs1799971 ( P = 1.63 × 10 –8 ; Supplementary Fig. 3e ). Gene-based analyses produced no significant findings. Fig. 2: Manhattan plot of the transcriptome-wide association study results for the addiction-rf. a , b , TWAS of the addiction-rf, plotted as a Manhattan plot. The analyses in a were conducted in S-MultiXcan with GTEx v8 data. The analysis in b was run using S-PrediXcan with weights trained from PsychENCODE. The y -axis is presented as –log 10 ( P ), and the colour of each data point represents the tissue in which the correlation between gene expression and outcome was highest. The dotted black line represents Bonferroni-corrected TWAS significance for a two-sided test ( a , 9,944 genes, P Bonferroni = 5 × 10 −6 , line at 5.3; b , 13,850 genes, P Bonferroni = 3.6 × 10 –6 , line at 5.4). Cross-substance risk in African ancestry GWAS The ASSET-based meta-analysis of GWAS data for AUD ( N = 82,705) 11 , tobacco dependence (TD; based on the Fagerström Test for Nicotine Dependence, N = 9,925) 13 , CUD ( N = 9,745) 15 and OUD ( N = 32,088) 16 in individuals of African ancestry yielded only one GWS pleiotropic SNP, rs77193269 ( P = 4.92 × 10 –8 ); this SNP was GWS for AUD and TD when considering ASSET loci pleiotropic for two substances (Supplementary Fig. 4b ). For substance-specific signals, only one SNP was genome-wide significant: rs2066702 , an ADH1B variant that was alcohol-specific (Supplementary Fig. 4a ). Cross-substance risk in cross-ancestry GWAS We found 68 GWS SNPs (Supplementary Fig. 5 ), which are challenging to map to nearby regions or candidate genes due to ancestral differences in LD structure. Table 2 lists the SNP with the lowest GWAS P value on each chromosome. The most significant association was noted near the FUT2 gene ( rs507766 , P = 3.47 × 10 –19 ). Many GWS signals were consistent with genes found in the European GWAS, including FTO ( rs9928094 , P = 6.50 × 10 –32 ) and PDE4B ( rs1937439 , P = 8.56 × 10 –12 ). We also identified two SNPs in genes that have previously been implicated in SUDs, including CADM2 ( rs62250713 , P = 1.00 × 10 –18 ) and FOXP2 ( rs4727799 , P = 3.90 × 10 –15 ), both of which were within r 2 = 0.6 of lead signals from the European GWAS. Table 2 Top results from the cross-ancestry meta-analysis in METASOFT Polygenic architecture and power We used a likelihood estimation-based approach to calculate the probability distribution of effect sizes for the addiction-rf and each of the constituent input GWASs (that is, PAU, PTU, CUD and OUD) to examine relative differences in polygenicity (Methods). The addiction-rf showed a narrow distribution of small effect sizes, with almost all values falling close to 0. By contrast, the original substance-specific GWASs were characterized by larger average effects (see Supplementary Fig. 6 for the shape of the probability density distribution). For example, only 26% of genes associated with PTU showed effect sizes as close to the mean of the probability distribution as the addiction-rf did. These findings suggest that the addiction-rf is characterized by greater polygenicity than risk for specific substances. Transcriptome-wide association and drug repurposing A transcriptome-wide association study (TWAS) 25 of the addiction-rf using multiple tissues simultaneously from GTEx in MetaXcan (Methods) identified 35 genes in 13 brain regions (Fig.
2 ; Supplementary Table 15 ). Gene-set analysis using FUMA 23 revealed that these genes were enriched for gene sets and pathways related to neural cells and T-cell processes (Supplementary Fig. 7 ; Supplementary Table 16 ). TWASs with PsychENCODE data found 29 significantly associated genes, 11 of which overlapped with those identified in the GTEx analysis ( AMT , DALRD3 , GPX1 , KLHDC8B , NCKIPSD , NICN1 , P4HTM , PPP6C , RHOA , SNX17 , WDR6 ; Fig. 2 ). Linking transcriptome-wide patterns from our GTEx MetaXcan analysis to perturbagens that cross the blood–brain barrier from the Library of Integrated Network-Based Cellular Signatures (LINCS) 26 database identified 104 medications approved by the US Food and Drug Administration (FDA) that reverse the addiction-rf transcriptional profile (Supplementary Table 17 ). These included medications currently used to treat SUDs (for example, varenicline for smoking cessation) and other psychiatric conditions (for example, reboxetine for depression), as well as medications used for other purposes (for example, mifepristone, currently used for pregnancy termination and under clinical investigation for treating AUD, and riluzole, a treatment for amyotrophic lateral sclerosis). Linkage disequilibrium score regression and genetic correlations After Bonferroni correction ( P < 0.05/1,547 = 3.23 × 10 –5 ), the addiction-rf was genetically correlated with 251 phenotypes (Fig. 3 ; Supplementary Table 18 ). Notably, 38 of these (15%) were somatic diseases linked to specific substances (for example, lung cancer with tobacco and pain-related conditions with opioids). As expected, we found significant genetic correlations (rG) between the addiction-rf and serious, transdiagnostic psychopathological behaviours, including suicide attempt (rG = 0.62, P = 2.89 × 10 –33 ) and self-medication (for example, using non-prescribed drugs or alcohol for anxiety, rG = 0.64, P = 3.18 × 10 –6 ). The addiction-rf was correlated with an externalizing factor 27 that included similar indices of problematic substance use and behavioural measures, but remained separable from it based on 95% confidence intervals (rG = 0.63 ± 0.037, P = 2.33 × 10 –231 ). Fig. 3: PheWAS of genetic correlations using MASSIVE. Genetic correlations between 1,547 traits and the addiction-rf, calculated in MASSIVE, mapped by their statistical significance (−log 10 ( P ) on the y -axis) and broad category. The top 20 correlations are annotated; all results can be found in the Supplementary Results. The black dashed line represents Bonferroni significance for a two-sided test ( P Bonferroni = 0.05/1,547 = 3.232 × 10 –5 ). Latent causal variable analysis We used MASSIVE to conduct latent causal variable (LCV) 28 analyses on the same 251 phenotypes that were significant in our genetic correlation analyses (Supplementary Table 19 ). After correction for multiple comparisons ( P = 0.05/250 = 2.0 × 10 –4 ), the only significant putative causal relationships involved medication codes. Specifically, the addiction-rf was estimated as a potential risk factor for "Medication for cholesterol, blood pressure or diabetes: cholesterol lowering medication" (genetic causality proportion = –0.739(0.078), P = 4.51 × 10 −21 ), "treatment/medication code: atorvastatin" (genetic causality proportion = –0.373(0.050), P = 7.93 × 10 –14 ) and "Medication for cholesterol, blood pressure, diabetes, or take exogenous hormones: cholesterol lowering medication" (genetic causality proportion = –0.315(0.071), P = 8.31 × 10 –6 ).
The negative genetic causality proportion estimates suggest a causal role of addiction on physical disease (the addiction-rf is trait 2 in all instances). Polygenic risk score analyses PRS analyses with measures of addiction and SUDs In the independent Yale–Penn 3 sample 16 (European ancestry, N = 1,986), the addiction-rf PRS was significantly associated with a phenotypic factor loading on several SUDs ( P < 0.001), polysubstance use disorder (two or more SUDs; P < 2 × 10 –16 ) and each individual SUD (DSM-IV 29 : TD, cocaine use disorder (CoUD), AUD, CUD and OUD; all P < 7.71 × 10 –6 ; Fig. 4 ; Supplementary Table 20 ). Nagelkerke's R 2 values ranged from 2.4% for CUD to 5.9% for TD, and reached 6.6% for a phenotype similar to the addiction-rf that represents phenotypic commonality across AUD, CUD, OUD, TD and CoUD. Odds ratios varied from 1.41 for CUD to 1.73 for OUD. Fig. 4: Polygenic risk score prediction in Yale–Penn 3. a , PRS of the addiction-rf predicts lifetime AUD, CUD, OUD, TD and CoUD, as well as variables representing more than one lifetime SUD diagnosis versus no SUD diagnosis (polysubstance use disorder, two level), more than one lifetime diagnosis versus one lifetime diagnosis (polysubstance versus unitary) and any SUD diagnosis (any addiction), in an independent sample (Yale–Penn 3; N = 1,986 individuals of European genetic ancestry). b , The addiction-rf PRS was associated with a comparable phenotypic SUD common factor in the Yale–Penn 3 sample. Analyses control for age, sex and 10 genetic principal components of ancestry; all path estimates are fully standardized. *, estimates were significant at P < 0.001 in a two-sided test (lavaan does not report P values lower than 0.001). CFI, comparative fit index; RMSEA, root mean square error of approximation. Phenome-wide association studies in electronic health records data In the BioVU sample (European ancestry, N = 66,914) 30 , the addiction-rf PRS was associated with SUDs ( P = 3.31 × 10 –29 ; Supplementary Fig. 8 ), various types of substance involvement (for example, tobacco use disorder, P = 9.79 × 10 –24 ; 'alcoholism' (so named in the EHR; we note that the term 'alcohol use disorder' is more appropriate), P = 1.12 × 10 –21 ), chronic airway obstruction ( P = 4.99 × 10 –10 ) and several psychiatric disorders, the strongest being bipolar disorder ( P = 2.44 × 10 –11 ). Controlling for any SUD diagnosis to account for causal effects yielded similar associations with 'alcoholism', mood disorders, respiratory disease and heart disease (Supplementary Fig. 9a ). Controlling for tobacco use disorder diagnosis did not significantly modify associations (Supplementary Fig. 9b ).
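Each BioVU association above corresponds to one logistic regression of a phecode on the scaled addiction-rf PRS plus covariates (see Methods). A minimal R sketch, with a hypothetical data frame `ehr` and illustrative column names:

```r
# Minimal PheWAS loop: one logistic regression per phecode.
# `ehr` is a hypothetical data frame with one row per subject, 0/1 phecode
# columns (prefixed "phe_"), a PRS column and the covariates in the Methods.
covars   <- c("median_age", "gender", paste0("PC", 1:10))
phecodes <- grep("^phe_", names(ehr), value = TRUE)

results <- lapply(phecodes, function(ph) {
  if (sum(ehr[[ph]] == 1, na.rm = TRUE) < 100) return(NULL)  # >= 100 cases
  f   <- reformulate(c("scale(addiction_prs)", covars), response = ph)
  fit <- glm(f, data = ehr, family = binomial())
  data.frame(phecode = ph,
             beta = coef(fit)["scale(addiction_prs)"],
             p    = summary(fit)$coefficients["scale(addiction_prs)", 4])
})
results <- do.call(rbind, results)
sig <- results[results$p < 0.05 / 1335, ]  # phenome-wide Bonferroni threshold
```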
Behavioural phenotypes in substance-naive children Among 4,491 substance-naive children aged 9–10 years who completed the baseline session of the Adolescent Brain and Cognitive Development (ABCD) Study 31 , the addiction-rf PRS was positively correlated (after Bonferroni correction) with Behavior Activation System Scale (BAS) fun-seeking (an aspect of externalizing behaviour; P = 2.09 × 10 –5 ), family history of drug addiction ( P = 7.04 × 10 –7 ), family history of hospitalization due to mental health concerns (including suicidal behaviour; P = 4.64 × 10 –6 ), childhood externalizing behaviours (for example, antisocial behaviour; P = 1.62 × 10 –5 ), childhood thought problems ( P = 3.51 × 10 –6 ), sleep duration ( P = 1.52 × 10 –7 ), parental externalizing and substance use behaviours (for example, prenatal tobacco exposure; P = 2.87 × 10 –11 ), maternal pregnancy characteristics (for example, urinary tract infection during pregnancy, P = 2.70 × 10 –7 ), socioeconomic disadvantage (for example, child's neighbourhood deprivation; P = 9.84 × 10 –7 ) and the child's likelihood of playing sports ( P = 2.80 × 10 –6 ) (Supplementary Fig. 10 ; Supplementary Table 21 for results from all phenotypes and Supplementary Table 23 for measure inclusion criteria). Discussion We found 17 genomic loci significantly associated with the addiction-rf and 47 substance-specific loci. Post-hoc fine-mapping, annotation and exploratory drug repurposing analyses highlight the potential therapeutic relevance of the discovered loci. The addiction-rf PRS was associated with many medical conditions characterized by high morbidity and mortality rates, including psychiatric illnesses, self-harming behaviours and somatic diseases that could be consequences of chronic substance use (for example, chronic airway obstruction) or precursors to heavy substance use (for example, chronic pain). Finally, in a sample of drug-naive children, the addiction-rf PRS was correlated with parental substance use problems and externalizing behaviour. Our analyses suggest that the regulation or modulation of dopaminergic genes, rather than variation in dopaminergic genes themselves, is central to general addiction liability. DRD2 was the top gene signal and was mapped via chromatin contact, suggesting a regulatory mechanism. The role of striatal dopamine in positive drug reinforcement is well established 32 . DRD2 plays a role in reward sensitivity and may also be central to executive functioning 33 ; the interplay of reward and cognition is likely relevant throughout the course of addiction. These complementary observations reinforce the role of dopamine signalling in addiction 32 . Other regulatory effects on dopaminergic pathways were supported by the signal at PDE4B , which has been implicated in prior GWASs of disinhibition traits 27 . The phosphodiesterase (PDE) system has been proposed as a dopaminergic regulation mechanism 34 . Furthermore, animal studies suggest that the PDE system is associated with downregulation of drug-seeking behaviours across opioids, alcohol and psychostimulants 35 . Notably, the PDE4B antagonist ibudilast has been shown to reduce heavy drinking among patients with AUD 36 , 37 and to reduce inflammation in methamphetamine use disorder 38 , and it was significant in our drug repurposing analysis. The addiction-rf PRS was associated with general and specific SUD liabilities in an independent sample. The addiction-rf PRS predicted ~6% of OUD variance, which is nearly half the total SNP-heritability of OUD 16 .
The addiction-rf PRS also predicted variance in cocaine use disorder (CoUD); as CoUD was not included in the development of the addiction-rf (owing to the lack of a well-powered CoUD GWAS), these findings highlight the generalizability of the addiction-rf beyond alcohol, tobacco, cannabis and opioids. Substance-specific genetic signals fell primarily into three broad categories: drug-specific metabolism (for example, ADH1B for PAU), drug receptors (for example, CHRNA5 for PTU, OPRM1 for OUD) and general neurotransmitter mechanisms (for example, CACNA1A for CUD). Surprisingly, even after accounting for the addiction-rf, dopaminergic genes ( DBH and PDE1C in particular) were implicated in substance-specific effects for tobacco (PTU). In contrast, CUD-specific genes did not include well-studied receptor targets (for example, CNR1 ) or metabolic mechanisms (for example, cytochrome P450 genes). The current addiction-rf is distinct from recent genetic factors 21 , 27 , 39 that were based upon analyses of SUDs together with other substance use, psychiatric and behavioural traits. We focus on SUDs rather than measures of substance use or other externalizing traits, which prior data indicate have differing aetiologies and relationships with psychiatric health 9 , 40 , 41 . Our study also parses substance-general (that is, addiction-rf) and substance-specific loci. This approach distinguishes the addiction-rf from other genetic factors that include substance use measures. For example, despite genetic overlap between the addiction-rf and a recent index of externalizing behaviours (rG = 0.63) 27 , a significant portion of the variance in the addiction-rf was distinct. Our analyses highlight the robust genetic association of the addiction-rf with serious mental and somatic illness. The addiction-rf was more strongly genetically correlated with using drugs to cope with internalizing disorder symptoms (anxiety, depression; rG = 0.60–0.62) than with the individual psychiatric traits and disorders themselves (rG = 0.3), suggesting that genetic correlations between SUDs and mood disorders may be partially attributable to a predisposition to use substances to alleviate negative mood states ('self-medication') 42 . The phenome-wide association study (PheWAS) provided insight into the potentially complex interplay between genetic liability and environmental pathways of risk. In addition to indices of socioeconomic status (SES), the addiction-rf was correlated with maternal tobacco smoking during pregnancy and with attention deficit hyperactivity disorder, in line with evidence that effects ascribed to the prenatal environment may also be mediated by the inheritance of risk loci 43 , 44 . The addiction-rf PRS was associated with a family history of serious mental illness, which likely represents an amalgam of genetic and environmental vulnerability 45 . Finally, disability and SES were also associated with polygenic risk, further supporting the association between environmental risk factors and common genetic effects on SUD liability 9 , 41 , 46 . This study has limitations. First, our GWAS in individuals of African ancestry yielded few discoveries, underscoring the need for systematic data collection on SUDs in globally representative populations. Still, we chose to analyse and present these data because their exclusion would only further disparities in genetic discoveries. Second, although we discovered many loci, they accounted for only a small proportion of the total variance.
More samples, particularly from diverse populations, and the integration of rarer variants are needed to discover the biological pathways that fall below genome-wide significance or are missed in GWAS. Finally, despite interesting associations between our PRS and SUDs, our findings cannot be used for prognostication of future disease risk. Conclusion A common and highly polygenic genetic architecture underlies multiple SUDs, a finding that merits integration into medical knowledge on addictions. Methods Summary statistics from each SUD-related GWAS Summary statistics from the largest available discovery GWAS were used to represent genetic risk for each construct. These include four measures of problematic substance use or SUD: (1) PAU 8 , (2) PTU 12 , 13 , 18 , (3) CUD 15 and (4) OUD 16 . All GWAS summary statistics were filtered to retain variants with minor allele frequencies >0.01 and an INFO score >0.90 for GSCAN 12 and PGC 15 data or >0.70 for MVP 8 , 16 data. For the current cross-trait GWAS, we maintained the same quality control (QC) metrics and only analysed SNPs that were present in all four input GWASs, that is, variants that passed QC thresholds at all levels, resulting in 3,513,381 SNPs in samples of European ancestry and 5,303,643 SNPs in samples of African American ancestry. The linkage disequilibrium (LD) scores used for the genomic structural equation modelling (GenomicSEM) 47 were estimated in the European ancestry samples only, using the 1000 Genomes European data 48 . We restricted analyses to HapMap3 SNPs 49 , as these tend to be well imputed and produce accurate estimates of heritability. We used the effective N estimated for each GWAS 50 . For traits with a binary distribution, the effective sample size for an equivalently powered case-control study under a 50–50 case-control balance was estimated using the equation N effective = 4/((1/ N case ) + (1/ N control )) 51 , where N represents the sample size. For continuous and quasi-continuous traits, we used the given N or, if the trait came from MTAG, the equation N effective = (( Z / β ) 2 )/(2 × MAF × (1 – MAF)), where MAF is the minor allele frequency, Z is the z -score of the effect size and β is the beta of the effect size 8 , to approximate an equivalently powered GWAS of a single trait. Effective N values ranged from 46,351 (CUD) to 300,789 (PAU) and are given for each substance-specific GWAS in the Results. Individual GWAS details can be found in the Supplementary Methods. Genome-wide analyses in European ancestry We conducted a GWAS of a unidimensional addiction risk factor (addiction-rf) underlying the genetic covariance among PAU, PTU, CUD and OUD by applying GenomicSEM 47 to these European ancestry summary statistics. GenomicSEM conducts genome-wide association analyses in two stages. First, a multivariate version of LD score regression is used to estimate the genetic covariance matrix among all GWAS phenotypes, which is then combined with each individual SNP to calculate SNP-specific genetic covariance matrices. Second, these matrices are used to estimate the SEM using the lavaan package in R 52 . Variable and unknown extents of sample overlap across contributing GWASs are automatically accounted for in the estimation procedure. The single-factor model fit the data well 53 ( χ 2 (1) = 0.017, P = 0.896, CFI = 1, SRMR = 0.002; residual r = 0.51, P = 0.016; Supplementary Fig. 1 ; see also our prior work 18 and Methods).
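The effective-N conversions above are simple to express in code. A minimal R sketch (note that in practice such conversions are often computed per contributing cohort and then summed, so the totals reported in the Results can differ from a single calculation on pooled counts):

```r
# Effective N for a binary trait: the equivalently powered 50/50 design
n_eff_binary <- function(n_case, n_control) {
  4 / (1 / n_case + 1 / n_control)
}

# Effective N for a (quasi-)continuous MTAG trait, per the equation above
n_eff_continuous <- function(z, beta, maf) {
  (z / beta)^2 / (2 * maf * (1 - maf))
}

# Illustrative call with the OUD case/control counts quoted earlier
n_eff_binary(n_case = 10544, n_control = 79729 - 10544)
```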
As the sample size of the summary data derived from African American samples ( N range = 9,835–56,648) was not sufficient for LD score 54 analyses, we used ASSET 24 rather than GenomicSEM to conduct the addiction-rf GWAS, as described in the ASSET section below. ASSET trans-ancestry analyses ASSET 24 was used to identify pleiotropic (that is, SNPs that show associations with more than one SUD) and substance-specific (that is, SNPs only associated with a single SUD) SNPs within the European and African American ancestry samples (in addition to GenomicSEM in Europeans). ASSET was used in our African American ancestry addiction-rf GWAS because the sample size was not sufficient for the genomic structural equation modelling (SEM) approach used in the European addiction-rf GWAS. As a result, there are important differences between the primary addiction-rf GWAS and the GWAS run in ASSET. First, the ASSET-based addiction-rf GWAS contains SNPs that may influence two, three or all four individual SUDs, while the GenomicSEM-based addiction-rf GWAS in European ancestry samples includes SNPs associated with a common factor across all included SUDs. We used ASSET to identify pleiotropic SNPs in the European ancestry sample to facilitate a method-consistent cross-ancestry meta-analysis (see the 'Cross-ancestry meta-analysis' section below) and to cross-validate the primary GenomicSEM results. ASSET does not leverage the genetic correlation to identify variants of interest (as GenomicSEM does); instead, subset searches partition effects into pleiotropic and non-pleiotropic variants based on effect sizes and standard errors, estimating the degree to which a SNP–trait association reflects pooled effects across phenotypes versus a single phenotype driving the association. Loci were designated as substance-specific when they were significantly associated with only one SUD. Because ASSET does not automatically account for sample overlap, we used the LD score regression (LDSC) intercept to adjust for overlap within the European ancestry ASSET covariance term. Cross-ancestry meta-analysis We conducted a cross-ancestry meta-analysis of the ASSET-derived (to maintain analytic consistency) European and African ancestry addiction-rf summary statistics. First, SNPs with evidence of SUD pleiotropy (that is, effects on two, three or all four SUDs, including different sets of SUDs in each ancestry) in both ancestral groups were extracted. SNPs with evidence of cross-ancestral heterogeneity (that is, Cochran's Q P < 5 × 10 –8 ) were removed, leaving 317,447 SNPs. A random-effects meta-analysis in METASOFT 55 , with ancestry group as a random effect, was used to identify cross-ancestral effects. We report the random-effects beta and P value as cross-ancestry effects. Substance-specific genetics in European ancestry individuals To validate substance-specific SNPs, we used ASSET for discovery of these variants and, in the European ancestry GWAS, also examined Q-SNP results derived from GenomicSEM. Q-SNP 14 indexes violation of the null hypothesis that a SNP acts on a trait entirely through a common factor (for example, the addiction-rf). For example, if a SNP has a specific effect on one SUD trait (such as SNPs in CHRNA5 influencing PTU), then it should have significant Q-SNP statistics because it violates the assumption that its effect on PTU is via the addiction-rf.
We identified Q-SNPs by estimating the association between each SNP and the addiction-rf. Then, we fit a model in which the SNP predicted the indicators underlying the addiction-rf, that is, PAU, PTU, CUD and OUD. We compared the χ 2 difference statistic between the two models; a SNP showing a significant decrement of fit ( χ 2 for Δd.f. = 4) for the model in which the SNP predicted the addiction-rf alone, relative to the model in which the SNP predicted the indicators themselves, was considered a significant Q-SNP at GWS (that is, Q P < 5 × 10 –8 ). SNPs with significant Q-SNP statistics were removed from the addiction-rf summary statistics for all post-hoc analyses, including fine-mapping, gene-based tests, transcriptome-wide association analyses, LD score genetic correlations and PRS analyses. Q-SNP analysis also identified several SNPs that appeared to be specific to a single substance. However, as Q-SNP cannot be used for precise identification of substance-specific (trait-specific) SNPs, we relied on ASSET analyses (with a one-sided P value) to identify the subset of SNPs with effects (at GWS, P < 5 × 10 –8 ) limited to only one SUD-related trait (for example, PAU-specific versus PAU common with OUD). Note that the ASSET analysis identifies both common-addiction and substance-specific SNPs; the common-addiction SNPs from the ASSET results were used for our cross-ancestry analysis, while substance-specific SNPs are described separately for each population. Post-hoc analyses of European ancestry GWAS results Estimation of expected SNP effect sizes We estimated the distribution of genetic effect sizes of the addiction-rf (GenomicSEM) and the four input GWASs (PAU, PTU, CUD, OUD) using genetic effect-size distribution inference from summary-level data (GENESIS). GENESIS is a likelihood-based approach 56 in which GWAS summary statistics and an external LD panel (in our case, the 1000 Genomes Phase 3 reference panel) are used to estimate a projected distribution of SNP effect sizes. A flexible normal mixture model based on the number of tagged SNPs and LD scores is estimated. A three-component model is fit, in which SNP effect sizes are estimated to belong to one of three components based on bins of effect sizes (large, medium and small). If the distribution of SNPs is multivariate normal, the SNPs with large and medium effect sizes can be estimated via their independent effect sizes; the third component represents SNPs with null and small effect sizes, which should follow a similar distribution. This model therefore reweights SNPs and generates a projected distribution of effect sizes, from which we can draw conclusions about the distribution of effect sizes 54 . Biological characterization FUMA 23 was used for post-hoc bioinformatic analyses of our five GWASs (that is, the addiction-rf (from GenomicSEM) and the PAU-specific, PTU-specific, CUD-specific and OUD-specific loci (from ASSET)) in European ancestry samples and to determine lead and independent variants. Within FUMA, gene-based tests and gene-set enrichment were conducted via MAGMA 57 , along with gene annotation and identification of SNP-to-gene associations via eQTLs and/or chromatin interactions (via Hi-C data) in PsychENCODE 58 and Roadmap Epigenomics tissues for prefrontal cortex, hippocampus, ventricles and neural progenitor cells 59 , 60 .
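The Q-SNP decision rule described earlier in this section reduces to a χ2 difference test between the common-pathway and independent-pathways models. A schematic R sketch with hypothetical fit statistics:

```r
# Q-SNP: does the SNP's effect run entirely through the common factor?
# chi2_common - model fit when the SNP predicts the addiction-rf only
# chi2_indep  - model fit when the SNP predicts PAU, PTU, CUD and OUD directly
q_snp_p <- function(chi2_common, chi2_indep, ddf = 4) {
  pchisq(chi2_common - chi2_indep, df = ddf, lower.tail = FALSE)
}

# Hypothetical values for a CHRNA5-like SNP with a PTU-specific effect
q_snp_p(chi2_common = 55.2, chi2_indep = 3.1)  # << 5e-8, flagged as a Q-SNP
```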
For each specific SUD, the distribution of P values included all non-pleiotropic SNPs identified by ASSET (that is, SNPs only associated with a single SUD; n SNP CUD-specific = 312,661, n SNP PTU-specific = 560,983, n SNP PAU-specific = 193,647, n SNP OUD-specific = 425,665). Fine-mapping with susieR We fine-mapped the association statistics of the four phenotypes with more than one GWS SNP in a 1 Mb region around the lead SNP (the addiction-rf, PAU-specific, PTU-specific and CUD-specific; the OUD-specific analysis had only one significant locus, and that locus has a known mechanism of effect) to determine the 95% credible set using susieR 61 with at most 10 causal variants (this analysis reduces the total number of SNPs at a lead genome-wide signal to those that can credibly be considered causal SNPs). The credible set reports include the likelihood of being a causal variant; the marginal posterior inclusion probability (PIP) ranges from 0 to 1, with values closer to 1 indicating a higher likelihood of being causal. Transcriptome-wide association analysis We conducted two transcriptome-wide analyses. First, we used MetaXcan/S-MultiXcan 38 to conduct a cross-tissue analysis of all brain tissues in the GTEx v8 data 37 . S-MultiXcan returns a broad z -score across all tissues in the model, along with the top and lowest scores for each tissue. S-MultiXcan combines information across individual tissues, which improves the power for discovery by reducing the multiple-correction burden. It also produces z -scores and P values for top-associated tissues. Second, we used S-PrediXcan 62 to predict transcription using weights trained on transcriptional differences between psychiatric cases and controls in the frontal and temporal cortex from the PsychENCODE 63 dataset. As these data were very densely sampled for psychiatrically relevant traits, they serve to complement the relatively healthy GTEx sample. Drug repurposing Our technique for drug signature matching used data from the LINCS L1000 database 64 . The LINCS L1000 database catalogues in vitro gene expression profiles (signatures) from thousands of compounds in over 80 human cell lines (level 5 data from phase I: GSE92742 and phase II: GSE70138 ) 26 . We selected compounds that were currently FDA approved or in clinical trials (via ; updated 24 March 2020). Our analyses included signatures of 829 chemical compounds (590 FDA approved, 239 in clinical trials) in five neuronal cell lines (NEU, NPC, MNEU.E, NPC.CAS9 and NPC.TAK); in total, 3,897 signatures were present, as not all compounds were tested in all cell lines in the LINCS dataset. In vitro medication signatures were matched with the addiction-rf signatures from the transcriptome-wide association analyses (conducted using S-MultiXcan) 25 , 62 via multi-level meta-regression. We computed Pearson correlations, weighted by the proportion of heritability explained ( h MULTI-XCAN 2 ), between transcriptome-wide brain associations and in vitro L1000 compound signatures using the metafor package in R 65 . We treated each L1000 compound as a fixed effect, incorporating the effect size ( r weighted ) and sampling variability (se r_weighted 2 ) from all signatures of a compound (for example, across all time points, cell lines and doses). Analyses included time since perturbagen exposure as a random effect. Only genes that were Bonferroni significant in the S-PrediXcan analysis (transcriptome-wide correction = 0.05/14,389 = 3.48 × 10 –6 ) were entered into the model.
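A schematic of this signature-matching regression using the metafor package named above; `sigs` is a hypothetical data frame (one row per L1000 signature), and all column names are illustrative rather than the authors' own.

```r
# Multi-level meta-regression matching compound signatures to the
# addiction-rf transcriptome-wide profile (schematic, not the exact script).
library(metafor)

# sigs columns (assumed): r_w  - weighted correlation with the profile
#                         v_rw - sampling variance of r_w
#                         compound, time_h - identity and hours post-exposure
fit <- rma.mv(yi = r_w, V = v_rw,
              mods   = ~ compound - 1,   # each compound as a fixed effect
              random = ~ 1 | time_h,     # exposure time as a random effect
              data   = sigs)

# Perturbagens surviving the Bonferroni threshold used in the paper
hits <- which(fit$pval < 0.05 / 3897)
rownames(fit$b)[hits]
```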
We only report those perturbagens that were associated after Bonferroni correction (perturbagen correction = 0.05/3,897 = 1.28 × 10 –5 ). PRS analyses in Yale–Penn Yale–Penn 3 The Yale–Penn 16 , 66 sample includes 11,332 genotyped and phenotyped individuals recruited across three phases (Yale–Penn 1, Yale–Penn 2 and Yale–Penn 3), defined by the time of recruitment and the genotyping array used. All cohorts were ascertained via recruitment at substance use treatment centres or targeted advertisements for genetic studies of cocaine, opioid and alcohol dependence, resulting in a sample highly enriched for problematic substance use that also includes control subjects and relatives. All participants were assessed using the Semi-Structured Assessment for Drug Dependence and Alcoholism (SSADDA) 67 . Analyses based on Yale–Penn 1 and 2 have been published previously 66 and were used in the discovery sample of the present study. Here, we used data from Yale–Penn 3 16 for replication analyses and as a target sample for PRS analyses; the Yale–Penn 3 sample is independent of our discovery GWASs. Yale–Penn 3 comprises 3,026 genotyped and phenotyped Americans of European (EUR; N = 1,986) and African (AFR; N = 1,040) ancestry passing standard QC. Genotyping was performed at the Gelernter lab at Yale University using the Illumina Multi-Ethnic Global Array containing 1,779,819 markers, followed by genotype imputation using Minimac3 68 and the Haplotype Reference Consortium reference panel 69 as implemented on the Michigan imputation server ( ). For the present analysis, only Yale–Penn 3 EUR subjects ( N = 1,986) were included. DSM-IV 29 substance abuse and dependence diagnoses (combined as abuse or dependence to represent use disorder) based on SSADDA assessments were used to determine case and control status for AUD, CUD, CoUD, TD and OUD. Of the 1,986 EUR subjects, 42.4% met criteria for AUD ( N = 843), 25.9% met criteria for CUD ( N = 515), 25.3% met criteria for CoUD ( N = 503), 31% met criteria for TD ( N = 615) and 22.6% met criteria for OUD ( N = 448). The mean age of Yale–Penn 3 EUR subjects is 41.5 years (s.d. = 15.1) and 51.5% are female ( N = 1,023). We calculated the addiction-rf PRS using the PRS-CS auto approach 70 . This method assumes a general distribution of effect sizes across the genome and then reweights SNPs based on this assumption, their effect sizes in the original GWAS and their LD; the weighted SNP contributions were then summed to create a final score. PRS were associated with phenotypes (OUD, TD, CUD, AUD, CoUD) in Yale–Penn 3 via logistic regression, controlling for the first 10 ancestral principal components, age, sex and age by sex. PRS were scaled to unit variance. These logistic regression analyses were also examined for the following contrasts: (1) those with any SUD ( n = 985) versus those with no SUD ( n = 1,001), to represent 'any SUD'; (2) those with at least two SUDs ( n = 729) versus those with fewer than two (including zero) SUDs ( n = 1,257), to represent 'polysubstance use disorder'; and (3) those with at least two SUDs ( n = 729) versus those with one SUD ( n = 256), to represent polysubstance use disorder within those with SUD. The association between the addiction-rf PRS and the SUD common factor was estimated with lavaan 52 , where the common factor loaded on the five SUDs.
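A schematic lavaan specification of that common-factor model; `yp3` is a hypothetical data frame and variable names are illustrative, with the binary DSM-IV diagnoses declared as ordered indicators.

```r
# Addiction-rf PRS predicting a phenotypic SUD common factor
# (schematic lavaan syntax, not the authors' exact script).
library(lavaan)

model <- "
  SUD =~ AUD + CUD + OUD + TD + CoUD   # common factor over five SUDs
  SUD ~ prs + age + sex + PC1 + PC2 + PC3 + PC4 + PC5 +
        PC6 + PC7 + PC8 + PC9 + PC10
"

fit <- sem(model, data = yp3,
           ordered = c("AUD", "CUD", "OUD", "TD", "CoUD"))
standardizedSolution(fit)   # fully standardized paths, as in Fig. 4b
```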
Genetic correlations and latent causal variable modelling To examine phenotypes that were genetically correlated with the addiction-rf, we calculated genetic correlations using LD score regression 54 , 71 through the MASSIVE pipeline 72 , which conducts LD score regression 13 , 46 and latent causal variable analysis 28 on 1,547 sets of summary statistics for various phenotypic traits, including a mixture of ICD codes and self-reported traits from the UK Biobank and publicly available meta-analyses from GWAS consortia. Phenome-wide association studies PheWASs in adult samples As MASSIVE includes a fairly sparse set of diagnoses (not all ICD codes are available) for genetic correlation analyses, we conducted additional, theoretically relevant PheWASs using the addiction-rf PRS. We used EHR data for 66,914 genotyped individuals of European ancestry from the Vanderbilt University Medical Center biobank (BioVU) 30 . BioVU is a repository of leftover blood samples (~240,000 samples) from clinical testing that are genotyped, de-identified and linked to clinical and demographic data. Genotyping and QC of this sample have been described elsewhere 30 . The addiction-rf PRS was used to predict 1,335 diseases in a logistic regression model, controlling for median age on record, reported gender and the first 10 genetic ancestral principal components. For an individual to be considered a case, they were required to have two separate ICD codes for the index phenotype, and each phenotype needed at least 100 cases to be included in the analysis. A Bonferroni-corrected phenome-wide significance threshold of 0.05/1,335 = 3.7 × 10 –5 was used 73 . ABCD PheWAS of phenotypes collected in childhood To identify phenotypes that were associated with the addiction-rf before the onset of regular substance use, we used data from the ABCD Study (release 2.0 for genomic data and 3.0 for phenotypes) to conduct a phenome-wide association analysis of behavioural, social and environmental phenotypes in adolescence. The ABCD Study is an ongoing multi-site longitudinal study of child health and development (Methods) 31 , 74 . Children ( N = 11,875, including twins and siblings) aged 8.9–11 years were recruited from 22 sites across the United States to complete the ABCD Study baseline assessment. We restricted our sample to participants of genomically confirmed European ancestry (based on principal components) who were not missing any covariate measures ( N = 4,490). PRS were generated using the PRS-CS software package 70 , consistent with the other PRS analyses described above (that is, Yale–Penn 3 and BioVU). Associations between the addiction-rf PRS and phenotypes were estimated using mixed-effects models in the lme4 75 package in R. PRS were scaled to unit variance. Family ID and site were included as random effects to account for non-independence of measurement associated with relatedness and scanner/site. We controlled for the first 10 ancestral principal components, age, sex and age by sex. We used a Bonferroni-corrected phenome-wide significance threshold of 0.05/1,480 = 3.38 × 10 –5 ; all results are presented in Supplementary Table 21 . Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The MVP summary statistics were obtained via an approved dbGaP application ( phs001672.v4.p1 ). For details on the MVP, see and ref. 76 .
This research is based on data from the MVP, Office of Research and Development, Veterans Health Administration, and was supported by Veterans Administration Cooperative Studies Program award G002. Publicly available data were also downloaded from the Psychiatric Genomics Consortium ( ) and the GSCAN consortium ( ). The datasets used for the BioVU analyses described were obtained from Vanderbilt University Medical Center's biorepository, which is supported by numerous sources: institutional funding, private agencies and federal grants. These include the National Institutes of Health-funded Shared Instrumentation grant S10RR025141 and Clinical and Translational Science Awards (CTSA) grants UL1TR002243, UL1TR000445 and UL1RR024975. Genomic data are also supported by investigator-led projects that include U01HG004798, R01NS032830, RC2GM092618, P50GM115305, U01HG006378, U19HL065962 and R01HD074711, as well as additional funding sources listed at . Data from Yale–Penn 1 are available through dbGaP accession no. phs000425.v1.p1 , including 1,889 African American subjects and 1,020 European-American subjects. Yale–Penn 2 data are available through dbGaP accession no. phs000952.v1.p1 , including 1,531 African American subjects and 1,339 self-reported European-American subjects. Summary statistics for all Yale–Penn data are available on request to J.G. ([email protected]). | By combing through genomic data of over 1 million people, scientists have identified genes commonly inherited across addiction disorders, regardless of the substance being used. This dataset—one of the largest of its kind—may help reveal new treatment targets across multiple substance use disorders, including for people diagnosed with more than one. The findings also reinforce the role of the dopamine system in addiction, by showing that the combination of genes underlying addiction disorders was also associated with regulation of dopamine signaling. Published today in Nature Mental Health, the study was led by researchers at Washington University in St. Louis, along with more than 150 co-authors from around the world. There has been limited knowledge of the molecular genetic underpinnings of addiction until now. Further, most clinical trials and behavioral studies have focused on individual substances, rather than addiction more broadly. "Genetics play a key role in determining health throughout our lives, but they are not destiny. Our hope with genomic studies is to further illuminate factors that may protect or predispose a person to substance use disorders—knowledge that can be used to expand preventative services and empower individuals to make informed decisions about drug use," said NIDA Director, Nora Volkow, M.D. "A better understanding of genetics also brings us one step closer to developing personalized interventions that are tailored to an individual's unique biology, environment, and lived experience in order to provide the most benefits." In 2021, more than 46 million people in the United States aged 12 or older had at least one substance use disorder, and only 6.3% had received treatment. Moreover, people who use drugs are facing an increasingly dangerous drug supply, now often tainted with fentanyl. Approximately 107,000 people died of drug overdoses in 2021, and 37% of these deaths involved simultaneous exposure to both opioids and stimulant drugs. Drug use and addiction represent a public health crisis, characterized by high social, emotional, and financial costs to families, communities, and society.
Substance use disorders are heritable and influenced by complex interactions among multiple genes and environmental factors. In recent decades, a data-rich method, called genome-wide association, has emerged to try to identify specific genes involved in certain disorders. This method involves searching entire genomes for regions of genetic variation, called single-nucleotide polymorphisms (SNPs), that associate with the same disease, disorder, condition, or behavior among multiple people. In this study, researchers used this method to pinpoint areas in the genome associated with general addiction risk, as well as the risk of specific substance use disorders—namely, alcohol, nicotine, cannabis, and opioid use disorders—in a sample of 1,025,550 individuals with genes indicating European ancestry and 92,630 individuals with genes indicating African ancestry. "Using genomics, we can create a data-driven pipeline to prioritize existing medications for further study and improve chances of discovering new treatments. To do this accurately, it's critical that the genetic evidence we gather includes globally representative populations and that we have members of communities historically underrepresented in biomedical research leading and contributing to these kinds of studies," said Alexander Hatoum, Ph.D., a research assistant professor at Washington University in St. Louis and lead author of the study. Hatoum and the research team discovered various molecular patterns underlying addiction, including 19 independent SNPs significantly associated with general addiction risk and 47 SNPs for specific substance disorders among the European ancestry sample. The strongest gene signals consistent across the various disorders mapped to areas in the genome known to control regulation of dopamine signaling, suggesting that genetic variation in dopamine signaling regulation, rather than in dopamine signaling itself, is central to addiction risk. Compared to other genetic predictors, the genomic pattern identified here was also a more sensitive predictor of having two or more substance use disorders at once. The genomic pattern also predicted higher risk of mental and physical illness, including psychiatric disorders, suicidal behavior, respiratory disease, heart disease, and chronic pain conditions. In children aged 9 or 10 years without any experience of substance use, these genes correlated with parental substance use and externalizing behavior. "Substance use disorders and mental disorders often co-occur, and we know that the most effective treatments help people address both issues at the same time. The shared genetic mechanisms between substance use and mental disorders revealed in this study underscore the importance of thinking about these disorders in tandem," said NIMH Director Joshua A. Gordon, M.D., Ph.D. Genomic analysis in the African ancestry sample revealed one SNP associated with general addiction risk and one substance-specific SNP for risk of alcohol use disorder. The dearth of findings here underscores ongoing disparities in data inclusion of globally representative populations that must be addressed to ensure data robustness and accuracy, Hatoum and co-authors note. The inclusion of data from different ancestral groups in this study cannot and should not be used to assign or categorize variable genetic risk for substance use disorder to specific populations. 
As genetic information is used to better understand human health and health inequities, expansive and inclusive data collection is essential. NIDA and other Institutes at NIH supported a recently released report on responsible use and interpretation of population-level genomic data by the National Academies of Sciences, Engineering, and Medicine. See also a corresponding statement from the NIH. While Hatoum and colleagues have identified a genetic pattern indicating broad addiction risk, they note that substance use-specific diagnoses still have meaning. "The current study validates previous findings of alcohol-specific risk variants, and, importantly, makes this finding in a very large and more diverse study population," said NIAAA Director George F. Koob, Ph.D. "The finding of shared genetic risk variants across different substance use disorders provides insight into some of the mechanisms that underlie these disorders and the relationships with other mental health conditions. Together the findings of alcohol-specific risk variants and common addiction-related variants provide powerful support for individualized prevention and treatment." | 10.1038/s44220-023-00034-y |
Earth | Climate change likely to increase human exposure to toxic methylmercury | Climate change and overfishing increase neurotoxicant in marine predators, Nature (2019). DOI: 10.1038/s41586-019-1468-9 , www.nature.com/articles/s41586-019-1468-9 Journal information: Nature | http://dx.doi.org/10.1038/s41586-019-1468-9 | https://phys.org/news/2019-08-climate-human-exposure-toxic-methylmercury.html | Abstract More than three billion people rely on seafood for nutrition. However, fish are the predominant source of human exposure to methylmercury (MeHg), a potent neurotoxic substance. In the United States, 82% of population-wide exposure to MeHg is from the consumption of marine seafood and almost 40% is from fresh and canned tuna alone 1 . Around 80% of the inorganic mercury (Hg) that is emitted to the atmosphere from natural and human sources is deposited in the ocean 2 , where some is converted by microorganisms to MeHg. In predatory fish, environmental MeHg concentrations are amplified by a million times or more. Human exposure to MeHg has been associated with long-term neurocognitive deficits in children that persist into adulthood, with global costs to society that exceed US$20 billion 3 . The first global treaty on reductions in anthropogenic Hg emissions (the Minamata Convention on Mercury) entered into force in 2017. However, effects of ongoing changes in marine ecosystems on bioaccumulation of MeHg in marine predators that are frequently consumed by humans (for example, tuna, cod and swordfish) have not been considered when setting global policy targets. Here we use more than 30 years of data and ecosystem modelling to show that MeHg concentrations in Atlantic cod ( Gadus morhua ) increased by up to 23% between the 1970s and 2000s as a result of dietary shifts initiated by overfishing. Our model also predicts an estimated 56% increase in tissue MeHg concentrations in Atlantic bluefin tuna ( Thunnus thynnus ) due to increases in seawater temperature between a low point in 1969 and recent peak levels—which is consistent with 2017 observations. This estimated increase in tissue MeHg exceeds the modelled 22% reduction that was achieved in the late 1990s and 2000s as a result of decreased seawater MeHg concentrations. The recently reported plateau in global anthropogenic Hg emissions 4 suggests that ocean warming and fisheries management programmes will be major drivers of future MeHg concentrations in marine predators. Main The exploitation of fisheries in the northwestern Atlantic Ocean for hundreds of years has led to large fluctuations in herring, lobster and cod stocks, which has altered the structure of food webs and the availability of prey for remaining species 5 . We synthesized more than three decades of ecosystem data and MeHg concentrations in seawater, sediment and biological species that represent five trophic levels from the Gulf of Maine, a marginal sea in the northwestern Atlantic Ocean that has been exploited for commercial fisheries for more than 200 years. These data were used to develop and evaluate a mechanistic model for MeHg bioaccumulation that is based on bioenergetics and predator–prey interactions (see Methods ), to better understand the effects of ecosystem changes and overfishing 6 . 
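As a back-of-envelope illustration of the million-fold amplification quoted above, biomagnification compounds multiplicatively across trophic transfers. In the sketch below, the per-step biomagnification factor and the number of steps are round illustrative numbers chosen only to reproduce the order of magnitude; they are not fitted model parameters.

```python
# Biomagnification compounding: if each predator-prey step amplifies MeHg by a
# biomagnification factor (BMF), the overall amplification is BMF**n_steps.
# A BMF of ~10 over ~6 trophic transfers gives the "million times or more"
# amplification quoted for predatory fish (illustrative round numbers only).
bmf = 10.0
n_steps = 6

amplification = bmf**n_steps
print(f"overall amplification: {amplification:.0e}x")   # 1e+06
```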
A comparison of simulated MeHg concentrations based on extensive analysis of the stomach contents of two marine predators (Atlantic cod and spiny dogfish, Squalus acanthias ) in the 1970s and 2000s reveals that the effects of shifts in trophic structures caused by overfishing differed between these two species (Fig. 1a, b ). In the 1970s, cod consumed 8% more small clupeids than in the 2000s as a consequence of the overharvesting and reduced abundance of herring 7 . Simulated tissue MeHg concentrations in cod (larger than 10 kg) in the 1970s were 6–20% lower than for cod consuming a diet typical of the 2000s that relied more heavily on larger herring, lobster and other macroinvertebrates 7 . The 1970s diet for spiny dogfish when herring were limited included a higher proportion (around 20%) of squid and other cephalopods, which exhibit higher MeHg concentrations than many other prey fish. In contrast to cod, simulated MeHg concentrations in spiny dogfish were 33–61% higher in the 1970s than in the 2000s, when they consumed more herring and other clupeids 7 . These results illustrate that perturbations to the trophic structure of marine organisms from overfishing can have contrasting effects on MeHg concentrations across species. Such changes must therefore be assessed before concluding that temporal trends in biological MeHg concentrations reflect shifts in environmental Hg contamination. Fig. 1: Modelled effects of ecosystem change on MeHg concentrations in Atlantic cod and spiny dogfish. a , b , Differences in modelled MeHg concentrations in Atlantic cod ( a ) and spiny dogfish ( b ) based on a diet typical of the 1970s (dotted line) and the 2000s (solid line). Prey preferences for each time period were derived from the stomach contents of more than 2,000 fish 7 . c , d , Modelled changes in fish MeHg concentrations (relative to a diet typical of the 2000s) that result from a temperature increase of 1 °C; a shift in diet composition driven by overfishing of herring (represented by 1970s prey preferences when this last occurred); an assumed 20% decline in seawater MeHg concentration; the combination of both an increase in temperature and a decrease in seawater MeHg; and the simultaneous combination of all three factors.
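The contrasting responses of cod and dogfish to the same herring collapse follow from simple diet-weighted averaging. The sketch below computes dietary MeHg exposure as a prey-fraction-weighted mean; the prey categories, diet fractions and concentrations are hypothetical round numbers chosen only to mimic the direction of the shifts described above, not the measured stomach-content data of ref. 7.

```python
# Dietary MeHg exposure as a prey-fraction-weighted average. All values below
# are illustrative placeholders, not the measured stomach-content data.
prey_mehg = {            # ng MeHg per g prey (hypothetical)
    "herring": 40.0,
    "small_clupeids": 25.0,
    "squid_cephalopods": 90.0,
    "macroinvertebrates": 55.0,
}

diets = {                # predator: {prey: fraction of diet}
    "cod_1970s":     {"small_clupeids": 0.45, "herring": 0.20, "macroinvertebrates": 0.35},
    "cod_2000s":     {"small_clupeids": 0.37, "herring": 0.28, "macroinvertebrates": 0.35},
    "dogfish_1970s": {"squid_cephalopods": 0.30, "herring": 0.30, "small_clupeids": 0.40},
    "dogfish_2000s": {"squid_cephalopods": 0.10, "herring": 0.50, "small_clupeids": 0.40},
}

for predator, diet in diets.items():
    assert abs(sum(diet.values()) - 1.0) < 1e-9        # fractions must sum to 1
    exposure = sum(frac * prey_mehg[prey] for prey, frac in diet.items())
    print(f"{predator}: dietary MeHg = {exposure:.1f} ng/g prey")
# Prints a slightly lower value for 1970s cod and a markedly higher value for
# 1970s dogfish, matching the direction of the modelled shifts above.
```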
We modelled the changes in MeHg tissue concentrations in Atlantic cod and spiny dogfish that would result from increases in seawater temperature, declines in seawater MeHg concentrations and shifts in trophic structure due to overfishing (Fig. 1c, d ). Experimental data indicate that MeHg uptake by most marine algae is not sensitive to variability in seawater temperature 6 and therefore our modelling analysis accounts for temperature-driven changes in MeHg at higher trophic levels, from zooplankton to predatory fish. For a 15-kg Atlantic cod, our model predicts that an increase of 1 °C in seawater temperature relative to the year 2000 would lead to a 32% increase in simulated tissue MeHg concentrations. A shift in trophic structure characteristic of overexploited herring fisheries would result in a 12% decrease in fish MeHg. In the absence of ecosystem changes, simulated fish MeHg concentrations shift proportionally to seawater MeHg concentrations. If we assume that seawater MeHg concentrations decline by approximately 20% as a consequence of reductions in Hg loading, the combination of all three factors simultaneously results in a 10% decrease in tissue MeHg concentrations for Atlantic cod (Fig. 1c ). For a 5-kg spiny dogfish, our model estimates that a temperature increase of 1 °C would result in a 70% increase in tissue MeHg concentrations, and that switching to a diet that is characteristic of low herring abundance would lead to a 50% increase in fish MeHg. When combined with the assumed 20% decline in seawater MeHg concentrations, the model predicts a 70% increase in tissue MeHg concentrations for dogfish (Fig. 1d ). Owing to a large reduction in Hg releases from wastewater and declines in atmospheric deposition of Hg in North America 12 , 13 , seawater MeHg concentrations in the northwestern Atlantic Ocean are presumed to have declined since the 1970s (Fig. 2a ). Our results help to explain why temporal changes in tissue MeHg concentrations in the Gulf of Maine have been mixed across species, despite declining inputs of Hg to the marine environment since the 1970s 12 . Fig. 2: Effects of seawater warming in the Gulf of Maine on tissue MeHg concentrations in ABFT. a , Modelled seawater MeHg concentrations over time. The model is based on measured MeHg concentrations between 2008 and 2010 17 and scaled by modelled temporal changes in seawater Hg 12 . b , Measured temperature anomaly in seawater in the Gulf of Maine 8 . The shaded grey area indicates the projected future change. c , Modelled MeHg tissue concentrations in 14-year-old ABFT based on changes in seawater MeHg concentrations (dashed line), and based on the combined effect of changes in seawater MeHg concentrations and seawater temperature anomaly (solid line). The symbols indicate means of observed concentrations in multiple fish: new data for ABFT that were captured in 2017 ( n = 33) are shown as a star; previously published data 16 , 18 , 19 , 20 are shown as crosses 16 ( n = 83), a square 18 ( n = 14), a triangle 19 ( n = 3) and a circle 20 ( n = 5). Sample size ( n ) represents the number of independent fish; s.d. and statistics are provided in Extended Data Table 3 . d , Changes in MeHg concentrations in ABFT that are due to temperature only. We used historical temperature records to further investigate the effects of recent temperature changes on MeHg bioaccumulation in Atlantic bluefin tuna (ABFT), another important marine predator (Fig. 2 ).
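A naive way to combine the scenarios above is to multiply the individual fractional changes. The sketch below does exactly that, and the mismatch with the combined-model numbers quoted in the text (roughly −7% versus the modelled −10% for cod) illustrates the point that the effects are not additive across the food web. The percentages are taken from the text; the composition rule is a deliberate oversimplification.

```python
# Naive multiplicative composition of the single-factor changes quoted above.
# The full food-web model shows the factors are NOT simply additive or
# multiplicative, so this sketch only brackets the expected combined response.
single_factor = {
    "cod":     {"temp_+1C": +0.32, "herring_collapse": -0.12, "seawater_-20%": -0.20},
    "dogfish": {"temp_+1C": +0.70, "herring_collapse": +0.50, "seawater_-20%": -0.20},
}

for species, changes in single_factor.items():
    combined = 1.0
    for change in changes.values():
        combined *= 1.0 + change
    print(f"{species}: naive combined change = {100 * (combined - 1):+.0f}%")
    # cod: ~-7% (full model: -10%); dogfish: ~+104% (full model: +70%)
```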
No time-series data on seawater MeHg are available, so we extrapolated measured concentrations using information on emissions in North America and projected total Hg concentrations in seawater (see Methods ). Increases in seawater temperature coincide with putative declines in seawater MeHg concentrations (Fig. 2b ). The implications of changes in seawater MeHg concentrations (Fig. 2a ) and seawater temperature (Fig. 2b ) in the Gulf of Maine for tissue MeHg concentrations in 14-year-old ABFT (250 ± 23 cm length 14 (mean ± s.d.)) are illustrated in Fig. 2 . The dashed line in Fig. 2c shows the changes in MeHg in ABFT tissue that result from changing seawater MeHg only, and the solid line shows the combined influence of changes in seawater MeHg and temperature. Without including the effects of temperature, shifts in MeHg concentrations in ABFT lag peak seawater MeHg concentrations by five years, and the amplitude of the peak is dampened relative to seawater (Fig. 2c , dashed line). Historical temperature oscillations result in an additional lag of six years in maximum MeHg concentrations in ABFT (Fig. 2c , solid line), and reduce the standard error of the modelled tissue MeHg concentrations in ABFT compared to observations (Fig. 2c , symbols) from 120 ng g −1 (Fig. 2c , dashed line) to 95 ng g −1 (Fig. 2c , solid line). Both the model and observations indicate that a large decline in MeHg concentrations in ABFT occurred after the late 1980s and early 1990s (Fig. 2c ). The modelled decrease from peak to low concentrations is equivalent to a 23% decline in tissue MeHg concentrations (Fig. 2c ). Observed concentrations in 14-year-old ABFT from the Gulf of Maine show a 31% decrease between 1990 and 2012. Our model results suggest that 25–40% of tissue MeHg decreases in the 1990s are attributable to temperature decreases over this decade (Fig. 2d ). Modelled effects of continued warming in the Gulf of Maine suggest a reversal of previous declines, and projected increases of almost 30% in 2015 that are sustained into 2030 (Fig. 2d ). Between 2012 and 2017, observations are consistent with model trends and show a statistically significant increase in MeHg (Fig. 2 ) of more than 3.5% per year in ABFT (one-way ANOVA, P < 0.05). These results illustrate the large effects on bioaccumulative toxicants in marine food webs that are expected as a result of climate-driven changes in marine ecosystems. Global anthropogenic emissions of Hg have been relatively stable since approximately 2011 4 . In North America and Europe, aggressive Hg regulations that began in the 1970s have successfully reduced or phased out most large Hg sources, and global emissions are now driving atmospheric Hg trajectories in the Northern Hemisphere. This means that future changes in tissue concentrations of MeHg in pelagic marine predators such as ABFT and Atlantic cod in the Gulf of Maine will be strongly influenced by further shifts in seawater temperature and prey availability. Biotic MeHg concentrations in other marine regions are likely to be similarly affected by widespread shifts in trophic interactions and seawater temperature. A two-pronged regulatory effort that involves reductions in the emissions of both greenhouse gases and Hg is therefore needed to reduce MeHg concentrations in pelagic predators. Notably, regulations that aim to reduce air pollution caused by carbon-intensive fuel sources (such as coal-fired utilities) also have the co-benefit of bringing about large reductions in anthropogenic Hg releases 13 . 
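The five- to six-year lag between seawater and tissue MeHg described above is the generic behaviour of slow first-order uptake and elimination kinetics. Below is a minimal one-box sketch; the rate constants and the synthetic seawater time series are illustrative, not the calibrated parameters of the ABFT simulation.

```python
import numpy as np

# One-box fish model: dC/dt = k_u * C_w(t) - k_e * C(t). A small elimination
# rate k_e makes tissue MeHg lag and damp the seawater signal, qualitatively
# like the lags in Fig. 2c. Rate constants are illustrative.
years = np.arange(1950, 2031)
c_water = 1.0 + 0.3 * np.exp(-((years - 1985) / 8.0) ** 2)   # synthetic peak

k_e = 0.2            # per year, elimination (time constant 1/k_e = 5 years)
k_u = k_e            # chosen so steady-state tissue C tracks C_w with unit gain

c_fish = np.zeros_like(c_water)
c_fish[0] = c_water[0]
dt = 1.0
for i in range(1, len(years)):           # forward-Euler integration
    c_fish[i] = c_fish[i - 1] + dt * (k_u * c_water[i - 1] - k_e * c_fish[i - 1])

lag = years[np.argmax(c_fish)] - years[np.argmax(c_water)]
print(f"tissue peak lags seawater peak by {lag} years")      # ~5 for k_e = 0.2
```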
Atmospheric Hg concentrations in the Northern Hemisphere declined by approximately 30% between the mid-1990s and 2000s, as a result of successful reductions in emissions from coal-fired utilities, industry and waste incinerators, and the phasing out of Hg in many commercial products in the United States and Europe 13 . Previous studies have suggested that these and other regulations have led to corresponding declines in tissue Hg concentrations in ABFT and bluefish ( Pomatomus saltatrix ) in the Atlantic Ocean 15 , 16 . Despite these benefits, recent regulatory proposals in the United States threaten to overturn rules that regulate mercury releases from coal-fired utilities and proposals to curb carbon emissions. Climate change is likely to exacerbate human exposure to MeHg through marine fish, suggesting that stronger rather than weaker regulations are needed to protect ecosystem and human health. Methods Mercury concentration data in fish Many studies report total Hg rather than MeHg in fish tissue. Extensive data on total Hg and MeHg concentrations in pelagic, demersal and benthic food webs of the Gulf of Maine were collected between 2000 and 2002 19 . We used the measured MeHg fraction (90%) to scale total Hg values for ABFT. For lower trophic levels with variable MeHg concentrations we relied on direct MeHg measurements. Size-fractionated phytoplankton and zooplankton samples were obtained on research cruises and zooplankton species were identified and separated by a plankton ecologist. These data are shown in Extended Data Table 1 . Fish and shellfish data are summarized in Extended Data Table 2 . Trophic levels were determined from stable nitrogen isotopes (δ 15 N) measured in the same samples. Mercury concentrations in apex predators were compiled from several sources. A previous study 21 reported total Hg in swordfish ( Xiphias gladius ) from the western Atlantic Ocean ( n = 192) with corresponding weights. Another research team measured total Hg in n = 1,279 ABFT harvested from the Gulf of Maine 16 . Length (cm) and body weights (kg) were available for all tuna and used to estimate age, which ranged from 9 to 14 years. Data from this study 16 were converted from dressed weight to whole weight by multiplying dressed weight by 1.25. Temporal data on MeHg concentrations in ABFT harvested from the Gulf of Maine were compiled from several sources, for fish lengths (250 ± 23 cm (mean ± s.d.)) and ages that correspond to approximately 14-year-old fish (Extended Data Table 3 ). For 1971 ( n = 5) 20 and 2002 ( n = 3) 19 , 14-year old fish were identified based on reported length. For 1990, reported fish ages ( n = 14) ranged between 8 and 15 years 18 . For 2004–2012, MeHg concentrations in 14-year-old ABFT harvested from the Gulf of Maine were reported in a comprehensive study 16 . ABFT tissue from individual fish harvested in 2017 from the Gulf of Maine were analysed for Hg in this study and are reported in Extended Data Table 3 . Food web bioaccumulation model Measured MeHg concentrations in the northwestern Atlantic Ocean (Extended Data Fig. 1a ) show characteristic increases across more than five trophic levels (derived from δ 15 N) 19 . However, MeHg concentrations in swordfish and ABFT are underpredicted by the linear relationship between log[MeHg] and δ 15 N. The slope of this relationship is known as the trophic magnification slope, and this parameter has been used to assess global patterns in biomagnification of MeHg in freshwater ecosystems 22 . 
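The trophic magnification slope just defined is simply the fitted slope of log10[MeHg] against δ15N. A minimal sketch, using synthetic stand-in values rather than the Gulf of Maine measurements:

```python
import numpy as np

# Trophic magnification slope (TMS): slope of log10[MeHg] versus delta-15N.
# Values below are synthetic stand-ins for the food-web data.
d15n = np.array([5.0, 8.0, 10.5, 12.0, 14.5])      # per mil, by trophic level
mehg = np.array([0.5, 3.0, 12.0, 60.0, 250.0])     # ng/g (hypothetical)

slope, intercept = np.polyfit(d15n, np.log10(mehg), 1)
print(f"trophic magnification slope = {slope:.3f} per mil^-1")

# Predicted MeHg for an apex predator at d15N = 16; measured points lying well
# above this line (as for ABFT and swordfish) show where a TMS model fails.
print(f"predicted MeHg at d15N = 16: {10**(slope * 16 + intercept):.0f} ng/g")
```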
However, the factors that govern variability in trophic magnification slopes across ecosystems are poorly understood, and their application to marine ecosystems is further complicated by potential shifts in baseline δ 15 N for migratory species such as ABFT and swordfish 23 . We therefore developed a new mechanistic model for biomagnification of MeHg in marine food webs as a function of ecosystem properties 6 . We parameterized the mechanistic model for MeHg bioaccumulation to the food web that was characteristic of the Gulf of Maine in the early 2000s (Extended Data Fig. 2 ), and evaluated predicted tissue MeHg concentrations against measurements compiled previously for that period 19 . We then applied the evaluated model to simulate the effects of measured temperature anomalies and documented shifts in trophic structure on MeHg concentrations in predatory fish. The model can be run deterministically, using the central estimate of all parameter values, or stochastically, to capture variability in seawater MeHg, dissolved organic carbon (DOC), prey consumption and other parameters. The food web model includes three size classes for phytoplankton (picoplankton (0.2–2.0 μm), nanoplankton (2–20 μm) and microplankton (20–200 μm)); small (herbivorous) and large (omnivorous) zooplankton; macroinvertebrates; and fish. The lower (plankton) food web model has been described in detail previously 6 . In brief, our model simulates changes in MeHg uptake by phytoplankton due to varying seawater MeHg concentrations, differences in the composition of phytoplankton communities and varying DOC concentrations. The relative abundance of different size classes of phytoplankton is based on empirical relationships with surface concentrations of chlorophyll a 6 . Monthly average concentrations of chlorophyll a for the Gulf of Maine were derived from measurements collected at eight stations between 1997 and 2001 6 . Phytoplankton MeHg concentrations are modelled based on passive uptake of MeHg from seawater (driven by cell surface-to-volume ratios and DOC concentrations), because experimental data show that MeHg uptake by most phytoplankton species is not sensitive to seawater temperature 6 . This parameterization has previously been used to explain phytoplankton MeHg concentrations across a range of ecosystems in the northwest Atlantic 6 . DOC concentrations measured in the Gulf of Maine ( n = 82) are log-normally distributed (81 ± 15 μM (mean ± s.d.)) 6 . Seawater MeHg concentrations are based on previous measurements 17 in the upper 60 m of the water column in the Gulf of Maine. Measured MeHg concentrations ranged between 0.015 and 0.055 pM and an average of 7% of the total Hg was present as MeHg. Sediment MeHg concentrations are based on those reported previously 12 in integrated 15–20-cm grab samples of surface sediment ( n = 95) from the Gulf of Maine that were collected between 2000 and 2002 (0.44 ± 0.32 pmol g −1 (mean ± s.d.)). Time-dependent simulations for ABFT are based on measured MeHg concentrations in seawater 17 between 2008 and 2010, scaled by the trajectory in total Hg concentrations in the surface ocean between 1950 and 2030. Total Hg concentrations in the North Atlantic surface ocean were modelled using historical data on atmospheric Hg emissions 24 and a global geochemical model with resolved ocean basins 24 , 25 . The annual concentrations (in pM) of MeHg in seawater that were used to force the time-dependent bioaccumulation simulation are shown in Extended Data Table 4 . 
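The size-class dependence of phytoplankton uptake follows from geometry: for a spherical cell the surface-to-volume ratio is 3/r, so picoplankton accumulate far more MeHg per unit biovolume than microplankton. The sketch below encodes only that scaling; the inverse-DOC factor is a schematic stand-in for DOC–MeHg complexation reducing bioavailability, not the actual parameterization of ref. 6.

```python
# Passive phytoplankton uptake scaling with cell surface-to-volume ratio.
# For spheres S/V = 3/r, so small cells carry more MeHg per unit volume.
# The DOC factor below is schematic, not the published parameterization.
radii_um = {"picoplankton": 1.0, "nanoplankton": 10.0, "microplankton": 100.0}
c_water_pM = 0.035          # seawater MeHg, within the measured 0.015-0.055 pM
doc_uM = 81.0               # Gulf of Maine mean DOC

for name, r in radii_um.items():
    sv = 3.0 / r                                 # surface-to-volume ratio, 1/um
    uptake = c_water_pM * sv / (doc_uM / 81.0)   # arbitrary units, DOC-normalized
    print(f"{name}: relative MeHg uptake = {uptake:.4f}")
```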
We used records of sea surface temperature (Extended Data Table 5 ) for the Gulf of Maine from 1950 to 2015 8 to simulate temperature-driven changes in MeHg in ABFT (Extended Data Table 6 ). Evaluation and sensitivity analysis of the food web model A comparison of modelled and observed MeHg concentrations in ABFT as a function of size revealed that measurements were substantially underestimated ( n = 1,195 observations, 3% within the 67% model confidence interval) when standard bioenergetics algorithms for energy expenditure, prey consumption and growth were used (Extended Data Fig. 1b , dashed line). Most bioaccumulation models assume that fish activity levels are constant 26 . This results in a decreasing proportion of energy that is expended for respiration as fish weight increases. By contrast, migratory distance and energy expenditures for pelagic marine fish increase as they grow and swim more rapidly 27 , 28 . Wild activity, particularly for migratory fish, is difficult to measure and thus rarely incorporated into estimates of consumption rates. Accurate consumption rates for fish in the wild are needed to model bioaccumulative contaminants such as MeHg. To account for these factors, we used swimming speed-, mass- and species-dependent activity multipliers (see Supplementary Information ). Increasing the migratory energy expenditure of ABFT on the basis of established relationships with body size and swimming speed results in a shift in the expected mean of the model to match the central tendency of observations (Extended Data Fig. 1b , solid line). After accounting for migratory energy expenditure, the 95% confidence interval of probabilistically simulated MeHg concentrations in ABFT captures 90% of the observations. The probabilistic simulation includes distributions for variable seawater MeHg, DOC, MeHg assimilation efficiencies and prey selection (Extended Data Table 7 , Supplementary Information ). Electronic tagging data show that western ABFT and swordfish spend a large fraction of their lifespan in shallow waters (<200-m depth) near the eastern coastline of North America 29 , 30 , where measured MeHg concentrations 17 , 31 range from 0.03 to 0.06 pM. The modelled upper and lower bounds for MeHg and DOC concentrations measured in the northwestern Atlantic Ocean capture 99% of the observed MeHg concentrations in ABFT. These results indicate a good model performance for ABFT when migratory energy expenditure is included. Prey consumption by most species is restricted by their body size—specifically, by the width of their mouth gape. This constrains the predator-to-prey length ratio to approximately 9:1, which we use in our standard model 32 . For swordfish, observed MeHg concentrations ( n = 156) 21 are underpredicted by both the standard bioenergetics model (Extended Data Fig. 1c , dashed line) and the model adjusted for increased migratory energy expenditure (Extended Data Fig. 1c , dotted line). Only 5% of observations fall within the 67% model confidence interval. Swordfish are known to slash and knock out prey of a larger size than that predicted by their mouth-gape width 33 . The primary prey for swordfish at maturity are cephalopods, which catch larger prey using their tentacles and are thus also less constrained by body size. Better agreement between modelled MeHg concentrations and observations is achieved by adjusting allowable predator-to-prey length ratios 32 , 33 to account for the larger prey sizes consumed by swordfish and cephalopods (Extended Data Fig. 
1c , solid line). Model results show that 29% of the observations fall within the 67% confidence interval of the probabilistic simulation (orange shaded region in Extended Data Fig. 1c ; 57% within the 95% model confidence interval). Simulating the upper and lower envelope of predator-to-prey length ratios (ratios from 10:1 to 2:1; yellow region in Extended Data Fig. 1c ) captures 98% of the observations. Following these adjustments for apex predators, our results indicate excellent performance ( R 2 = 0.92) of the bioenergetics model for MeHg bioaccumulation 6 compared to observations 19 across five trophic levels in the Gulf of Maine food web (Extended Data Fig. 1d ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All data and model algorithms are available in the Extended Data and Supplementary Information . Code availability All model code is available at the following link: . | Add another item to the ever-growing list of the dangerous impacts of global climate change: Warming oceans are leading to an increase in the harmful neurotoxicant methylmercury in popular seafood, including cod, Atlantic bluefin tuna and swordfish, according to research led by the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Harvard T. H. Chan School of Public Health (HSPH). Researchers developed a first-of-its-kind, comprehensive model that simulates how environmental factors, including increasing sea temperatures and overfishing, impact levels of methylmercury in fish. The researchers found that while the regulation of mercury emissions have successfully reduced methylmercury levels in fish, spiking temperatures are driving those levels back up and will play a major role in the methylmercury levels of marine life in the future. The research is published in Nature. "This research is a major advance in understanding how and why ocean predators, such as tuna and swordfish, are accumulating mercury," said Elsie Sunderland, the Gordon McKay Professor of Environmental Chemistry at SEAS and HSPH, and senior author of the paper. "Being able to predict the future of mercury levels in fish is the holy grail of mercury research," said Amina Schartup, former research associate at SEAS and HSPH and first author of the paper. "That question has been so difficult to answer because, until now, we didn't have a good understanding of why methylmercury levels were so high in big fish." It's been long understood that methylmercury, a type of organic mercury, bioaccumulates in food webs, meaning organisms at the top of the food chain have higher levels of methylmercury than those at the bottom. But to understand all the factors that influence the process, you have to understand how fish live. If you've ever owned a goldfish, you know that fish do pretty much two things: eat and swim. What they eat, how much they eat, and how much they swim all affect how much methylmercury fish will accumulate in the wild. Let's start with what fish eat. The researchers collected and analyzed 30 years of ecosystem data from the Gulf of Maine, including an extensive analysis of the stomach contents of two marine predators, Atlantic cod and spiny dogfish from the 1970s to 2000s. The researchers found that methylmercury levels in cod were 6 to 20 percent lower in 1970 than they were in 2000. 
Spiny dogfish, however, had levels 33 to 61 percent higher in 1970 than in 2000, despite living in the same ecosystem and occupying a similar place in the food web. What accounts for these differences? In the 1970s, the Gulf of Maine was experiencing a dramatic loss in herring population due to overfishing. Both cod and spiny dogfish eat herring. Without it, each turned to a different substitute. Cod ate other small fish such as shads and sardines, which are low in methylmercury. Spiny dogfish, however, substituted herring with food higher in methylmercury, such as squid and other cephalopods. When the herring population bounced back in 2000, cod reverted to a diet higher in methylmercury while spiny dogfish reverted to a diet lower in methylmercury. There's another factor that impacts what fish eat: mouth size. Unlike humans, fish can't chew, so most fish can only eat what fits in their mouth whole. However, there are a few exceptions. Swordfish, for example, use their titular bills to knock down large prey so they can eat it without resistance. Cephalopods catch prey with their tentacles and use their sharp beaks to rip off mouthfuls. "There's always been a problem modeling methylmercury levels in organisms like cephalopods and swordfish because they don't follow typical bioaccumulation patterns based on their size," said Sunderland. "Their unique feeding patterns means they can eat bigger prey, which means they're eating things that have bioaccumulated more methylmercury. We were able to represent that in our model." But what fish eat isn't the only thing that impacts their methylmercury levels. When Schartup was developing the model, she was having trouble accounting for the methylmercury levels in tuna, which are among the highest of all marine fish. Its place at the top of the food web accounts for part of this, but doesn't fully explain just how high its levels are. Schartup solved that mystery with inspiration from an unlikely source: swimmer Michael Phelps. "I was watching the Olympics and the TV commentators were talking about how Michael Phelps consumes 12,000 calories a day during the competition," Schartup remembered. "I thought, that's six times more calories than I consume. If we were fish, he would be exposed to six times more methylmercury than me." As it turns out, high-speed hunters and migratory fish use a lot more energy than scavengers and other fish, which requires them to consume more calories. "These Michael Phelps-style fish eat a lot more for their size but, because they swim so much, they don't have compensatory growth that dilutes their body burden. So, you can model that as a function," said Schartup. Another factor that comes into play is water temperature; as waters get warmer, fish use more energy to swim, which requires more calories. The Gulf of Maine is one of the fastest warming bodies of water in the world. The researchers found that between 2012 and 2017, methylmercury levels in Atlantic bluefin tuna increased by 3.5 percent per year despite decreasing emissions of mercury.
Based on their model, the researchers predict that an increase of 1 degree Celsius in seawater temperature relative to the year 2000 would lead to a 32 percent increase in methylmercury levels in cod and a 70-percent increase in spiny dogfish. The model allows the researchers to simulate different scenarios at once. For example: A 1-degree increase in seawater temperature and a 20-percent decrease in mercury emissions result in increases in methylmercury levels of 10 percent in cod and 20 percent in spiny dogfish. A 1-degree increase in seawater temperature and a collapse in the herring population result in a 10-percent decrease in methylmercury levels in cod and a 70-percent increase in spiny dogfish. A 20-percent decrease in emissions, with no change in seawater temperatures, decreases methylmercury levels in both cod and spiny dogfish by 20 percent. "This model allows us to look at all these different parameters at the same time, just as it happens in the real world," said Schartup. "We have shown that the benefits of reducing mercury emissions holds, irrespective of what else is happening in the ecosystem. But if we want to continue the trend of reducing methylmercury exposure in the future, we need a two-pronged approach," said Sunderland. "Climate change is going to exacerbate human exposure to methylmercury through seafood, so to protect ecosystems and human health, we need to regulate both mercury emissions and greenhouse gases."
Physics | New pulsed magnet reveals a new state of matter in Kondo insulator | Ziji Xiang et al. Unusual high-field metal in a Kondo insulator, Nature Physics (2021). DOI: 10.1038/s41567-021-01216-0 Journal information: Nature Physics | http://dx.doi.org/10.1038/s41567-021-01216-0 | https://phys.org/news/2021-04-pulsed-magnet-reveals-state-kondo.html | Abstract Strong electronic interactions in condensed-matter systems often lead to unusual quantum phases. One such phase occurs in the Kondo insulator YbB 12 , the insulating state of which exhibits phenomena that are characteristic of metals, such as magnetic quantum oscillations 1 , a gapless fermionic contribution to heat capacity 2 , 3 and itinerant-fermion thermal transport 3 . To understand these phenomena, it is informative to study their evolution as the energy gap of the Kondo insulator state is closed by a large magnetic field. Here we show that clear quantum oscillations are observed in the resulting high-field metallic state in YbB 12 ; this is despite it possessing relatively high resistivity, large effective masses and huge Kadowaki–Woods ratio, a combination that normally precludes quantum oscillations. Both quantum oscillation frequency and cyclotron mass display a strong field dependence. By tracking the Fermi surface area, we conclude that the same quasiparticle band gives rise to quantum oscillations in both insulating and metallic states. These data are understood most simply by using a two-fluid picture in which neutral quasiparticles—contributing little or nothing to charge transport—coexist with charged fermions. Our observations of the complex field-dependent behaviour of the fermion ensemble inhabiting YbB 12 provide strong constraints for existing theoretical models. Main In Kondo insulators (KIs), an energy gap is opened up by strong coupling between a lattice of localized moments and the extended electronic states. The resulting Kondo gap E g is usually narrow (typically E g ≃ 5–20 meV), yet the role it plays in charge transport is more complicated than that of the bandgap in conventional semiconductors. A low-temperature ( T ) saturation of resistivity has long been known in two prototypical KIs, namely, samarium hexaboride (SmB 6 ) and ytterbium dodecaboride (YbB 12 ) (refs. 4 , 5 ); both are mixed-valence compounds with strong f – d hybridization that defines the band structure close to the Fermi energy. While the saturation might suggest additional metallic conduction channels, the high resistivity value within the weakly T -dependent ‘plateau’ implies that these are highly unconventional 1 , 4 ; one interpretation is the presence of topologically protected surface states 6 . Recently, magnetic quantum oscillations, suggestive of a Fermi surface (FS) and thus totally unexpected in an insulator, have been detected in both SmB 6 and YbB 12 (refs. 1 , 7 , 8 , 9 , 10 ). Whilst some have attributed the oscillations in SmB 6 to residual flux 11 , the flux-free growth process of YbB 12 (Methods) excludes such a contribution. The oscillations in YbB 12 are observed in both resistivity ρ (the Shubnikov–de Haas (SdH) effect) and magnetization M (the de Haas–van Alphen (dHvA) effect) at applied magnetic fields H where the gap is still finite. The T dependence of the oscillation amplitude follows the Lifshitz–Kosevich (LK) formula, which is based on the Fermi liquid theory 1 . Moreover, a contribution from gapless quasiparticle excitations to the heat capacity C has been detected in both KIs 2 , 3 , 9 . 
Though the thermal transport observations remain controversial in SmB 6 9 , 12 , 13 , YbB 12 shows a T -linear zero-field thermal conductivity κ x x , a characteristic of itinerant fermions 3 . Agreement among the FS parameters derived from quantum oscillations, heat capacity and thermal conductivity in YbB 12 suggests that the same quasiparticle band is responsible 3 . Despite this apparent consistency, the mystery remains: how can itinerant fermions exist in a gapped insulator and transport heat but not charge? In response, many theoretical models entered the fray, including magnetoexcitons 14 , Majorana fermions 15 , 16 , emergent fractionalized quasiparticles 17 , 18 and non-Hermitian states 19 . As these scenarios frequently envisage some form of exotic in-gap states, it is potentially invaluable to observe how the properties of KIs evolve as the energy gap E g closes. The cubic rare-earth compound YbB 12 is an excellent platform to carry out such studies. In YbB 12 , the Kondo gap ( E g ≈ 15 meV (ref. 20 )) is closed by large H , leading to an insulator-to-metal (I–M) transition at fields ranging from μ 0 H I–M ≃ 45–47 T ( μ 0 is the vacuum permeability; field vector, H ∥ [100]) to 55–59 T ( H ∥ [110]) (refs. 1 , 21 , 22 ). As revealed by a large Sommerfeld coefficient γ of the heat capacity 2 , Kondo correlation does not break down at the I–M transition, remaining strong to 60 T and beyond in the high-field metallic state; hence, this can be termed a Kondo metal (KM) 23 . In this study, we apply both transport and thermodynamic measurements, including ρ , penetration depth, M and magnetostriction, to YbB 12 . By resolving quantum oscillations and tracking their T and H dependence in the KM state, we trace the fate of the possible neutral quasiparticles at fields above the gap closure and expose their interactions with more conventional charged fermions. In our YbB 12 samples, M and magnetostriction data show that the I–M transition occurs at μ 0 H I–M = 46.3 T ( H ∥ [100]); the tiny valence increase in Yb ions at the transition suggested by the magnetostriction reinforces the KM nature of the high-field metallic state (Methods and Extended Data Fig. 1 ). To probe the electronic structure of the KM state, a proximity detector oscillator (PDO) was used (Methods) for contactless SdH effect studies. The PDO technique is sensitive to the sample skin depth, providing a direct probe of changes in conductivity of the metallic KM state. The setup illustrated in the inset of Fig. 1a was rotated on a cryogenic goniometer to achieve H -orientation-dependent measurements. Fig. 1a summarizes the H dependence of the PDO frequency f as H rotates from [100] to [110]. Fig. 1: The SdH effect in YbB 12 . a , The PDO frequency f for a YbB 12 single-crystal sample, measured at T = 0.6 K at various tilt angles θ up to 62.5 T. The ‘dip’ feature in f corresponds to the I–M transition, which shifts to a higher field at larger θ . The solid lines and short dashed lines are upsweeps and downsweeps, respectively. Inset: a photograph of the device used in the pulsed-field PDO experiments. The whole device was attached to a rotation stage with the rotation axis normal to the (001) plane. The tilt angle θ is defined as the angle between the field vector H and the crystallographic [100] direction. b , Oscillatory component of the PDO frequency, Δ f , obtained after a fourth-order polynomial background subtraction from the raw data shown in a for different tilt angles ranging from θ = 0.2° to θ = 20.7°. 
The numbers beside the SdH peaks are the Landau-level indices. The signs ‘+’ and ‘–’ mark the spin-split Landau sublevels with inferred Zeeman shift + g μ B B /2 (spin down) and – g μ B B /2 (spin up), respectively ( μ B is the Bohr magneton). The SdH effect is weaker on the downsweeps (short dashed lines), probably due to sample heating. c , Landau-level plots for YbB 12 in the low-field KI state (blue diamonds) and the high-field KM state (red circles), both under a magnetic field along the [100] direction. Data points in the orange dashed circle were taken in the 75 T duplex magnet (Extended Data Fig. 2 ). The grey vertical bar denotes the I–M transition. The inset shows the SdH oscillations in the KI state of YbB 12 . d , T dependence of Δ f at different fields. Data were taken at θ = 10.9° (Extended Data Fig. 2 b). Solid lines are the LK fits. Inset: field dependence of the fitted cyclotron energy $E_{\mathrm{c}}^{*}$ with the standard deviations given by the error bars. The low conductivity of the KI state suggests that the megahertz oscillatory field from the PDO coil completely penetrates the sample 24 . Therefore, the response of the PDO is dominated by the sample skin depth only when the sample enters the KM state and the conductivity increases substantially (Methods). The transition between these regimes is marked by a dip in the f versus H curves close to H I–M , above which magnetic quantum oscillations emerge. Figure 1b displays Δ f , the oscillatory component of f , at various angles θ as a function of 1/ H . The distinct oscillation pattern observed for H ∥ [100] ( θ = 0) is preserved up to θ ≈ 21° (Fig. 1b ), being strongly modified at higher angles (Extended Data Fig. 2 a). The oscillations in Δ f represent a single series that is intrinsically aperiodic in 1/ H ; attempts to fit them using a superposition of conventional oscillation frequencies fail to reproduce the raw data ( Supplementary Information ). To demonstrate the point further, Fig. 1c compares Landau-level indexing plots for the low-field KI and high-field KM states, both with H ∥ [100]. The oscillatory component of resistivity, Δ ρ , in the KI state is shown in the inset of Fig. 1c and conventionally indexed using integers for minima and half integers for maxima in ρ . For the KM state, peaks in f correspond to peaks in conductivity 24 and therefore are indexed using integers 25 . A further subdivision of the oscillations (reminiscent of a second harmonic) is likely due to Zeeman splitting of the quasiparticle levels 25 ; these features are marked with ‘+’ and ‘–’, assuming the signs expected for conventional Zeeman shifts. In the KI state, the plot of Landau-level index N versus 1/ H is a straight line, as expected for a field-independent quantum oscillation frequency in a non-magnetic system. By contrast, in the KM state, the 1/ H positions of the oscillations have a nonlinear relationship with N , which will be described below (here we note that the magnetic induction B ≈ μ 0 H , since μ 0 H is large and YbB 12 has weak magnetization; see Methods). Nevertheless, despite their unusual periodicity, the T dependencies of individual oscillation amplitudes in the KM state (Fig. 1d ) closely follow the LK formula 25 $$\Delta f(T)\propto \frac{2\pi^{2}k_{\mathrm{B}}T/E_{\mathrm{c}}^{*}}{\sinh(2\pi^{2}k_{\mathrm{B}}T/E_{\mathrm{c}}^{*})},$$ (1) where $k_{\mathrm{B}}$ is the Boltzmann constant and $E_{\mathrm{c}}^{*}$ is the cyclotron energy. This suggests that they are almost certainly due to fermions.
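Two steps in this analysis lend themselves to a short sketch: isolating Δf by subtracting a fourth-order polynomial background from f(H), as described for Fig. 1b, and fitting the temperature damping of each amplitude with equation (1) to extract the cyclotron energy and hence the mass. All numbers below are synthetic placeholders chosen to give a mass near the reported ~7 m0.

```python
import numpy as np
from scipy.optimize import curve_fit

hbar, e, k_B, m0 = 1.054571817e-34, 1.602176634e-19, 1.380649e-23, 9.1093837015e-31

# Step 1: oscillatory component via fourth-order polynomial background removal.
mu0_H = np.linspace(48.0, 62.5, 2000)                    # tesla, KM-state window
f_raw = (22e6 - 4e3 * mu0_H + 30.0 * mu0_H**2            # smooth background (Hz)
         + 2e3 * np.sin(2 * np.pi * 700.0 / mu0_H))      # SdH-like oscillation
delta_f = f_raw - np.polyval(np.polyfit(mu0_H, f_raw, 4), mu0_H)

# Step 2: Lifshitz-Kosevich fit of amplitude versus temperature, equation (1).
def lk(T, A0, Ec):
    X = 2 * np.pi**2 * k_B * T / Ec          # reduced temperature
    return A0 * X / np.sinh(X)

T = np.array([0.6, 1.0, 1.5, 2.0, 3.0, 4.0])             # kelvin
Ec_true = 1.3e-22                                        # J, illustrative
amp = lk(T, 1.0, Ec_true)                                # synthetic amplitudes

(A0_fit, Ec_fit), _ = curve_fit(lk, T, amp, p0=(1.0, 1e-22))
B = 50.0                                                 # tesla
print(f"m*/m0 = {hbar * e * B / (Ec_fit * m0):.1f}")     # ~7 at 50 T
```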
However, the derived value of $E_{\mathrm{c}}^{*}$ varies nonlinearly with H , indicating that the cyclotron mass $m^{*}=\hbar e B/E_{\mathrm{c}}^{*}$ is a function of H (Fig. 2c ). Fig. 2: Field-dependent FS in the metallic state. a , The nonlinear Landau-level plots shown in Fig. 1c become linear after adding an offset of μ 0 H * = 41.6 T to the applied magnetic field. The linear fits yield slopes (Supplementary Table 1 ) that are denoted by the parameter F 0 in equation ( 3 ). The Landau diagrams are vertically shifted by index N = 1 for clarity. b , With the field applied along the [100] direction, the quantum oscillations in the KI state exhibit a field-independent frequency F KI (Fig. 1c ), whereas in the KM state, the SdH frequency F KM is described by equation ( 3 ). The solid lines and short dashed lines represent the field range in which SdH oscillations are detected and absent, respectively. The width of the coloured regions denotes the frequency uncertainty, given by the fast Fourier transform peak widths (considering both SdH and dHvA effects) 1 for F KI and the fitting errors in a for F KM . The light grey vertical bar marks the I–M transition. A black vertical line represents the maximum allowed mismatch between F KI and F KM of ~180 T. Inset: the angle dependence of quantum oscillation frequencies. Blue circles are dHvA frequencies in the KI state with the fast Fourier transform uncertainties given by the error bars 1 . Magenta and red diamonds are F KM calculated using equation ( 3 ) at μ 0 H I–M (Supplementary Table 1 ) and B = μ 0 H I–M – 0.8 T, respectively. c , The cyclotron mass ratio is m */ m 0 , where m 0 is the free electron mass. Data for the KI state are obtained from the LK fits of the dHvA oscillation amplitudes 1 , whereas those for the KM state are inferred from the cyclotron energy $E_{\mathrm{c}}^{*}$ (Fig. 1d ). d , The spin-splitting parameter S for each SdH peak. S values calculated for the nonlinear (Fig. 1c ) and linear ( a ) Landau-level (LL) plots (Methods) are denoted by squares and ‘+’ symbols, respectively. The field dependence of S can be scaled to that of the cyclotron mass ratio by choosing g = 0.084 (red circles). The error bars in c and d are standard deviations of the fits. The relationship between oscillation indices and magnetic field is empirically described by $$N+\lambda=\frac{F_{0}}{\mu_{0}(H_{N}-H^{*})},$$ (2) where an offset field of μ 0 H * = 41.6 T has been subtracted from μ 0 H N . Here λ is a phase factor, N is the index, H N is the field at which the corresponding feature (for example, peak) occurs and F 0 is the slope. Equation ( 2 ) is symptomatic of an FS pocket that progressively depopulates as H increases, for reasons discussed below. In such cases, the Onsager relationship $F(B)=\frac{\hbar}{2\pi e}A(B)$ between the FS extremal cross-sectional area A and the frequency F of the corresponding quantum oscillations still applies even when A changes with H (Methods). Since B ≈ μ 0 H , we write B * = μ 0 H * and represent the field dependence of equation ( 2 ) using a B -dependent frequency: $$F_{\mathrm{KM}}(B)=\frac{F_{0}}{B-B^{*}}B;$$ (3) this is associated with a B -dependent extremal area $A(B)=\frac{A_{0}}{B-B^{*}}B$, where $A_{0}=\frac{2\pi e}{\hbar}F_{0}$.
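The indexing procedure behind equations (2) and (3) can be sketched compactly: regress N against 1/(μ0H_N − μ0H*) to obtain the slope F0, then evaluate the field-dependent frequency F_KM(B) = F0 B/(B − B*). The peak positions and F0 value below are synthetic stand-ins; the measured slopes are in Supplementary Table 1.

```python
import numpy as np

mu0_Hstar = 41.6                          # tesla, offset field from the text
F0_true = 80.0                            # tesla, illustrative slope
N = np.arange(4, 10)                      # Landau indices (lambda set to 0 here)
mu0_HN = mu0_Hstar + F0_true / N          # synthetic SdH peak positions, eq. (2)

# Linearized Landau plot: N + lambda versus 1/(mu0 H_N - mu0 H*)
x = 1.0 / (mu0_HN - mu0_Hstar)
F0_fit, neg_lambda = np.polyfit(x, N, 1)  # slope -> F0, intercept -> -lambda
print(f"fitted F0 = {F0_fit:.1f} T, lambda = {-neg_lambda:.2f}")

def F_KM(B, F0=F0_fit, Bstar=mu0_Hstar):
    """Field-dependent SdH frequency in the KM state, equation (3)."""
    return F0 * B / (B - Bstar)

# Back-projecting to the insulator-to-metal transition for H || [100]
print(f"F_KM(46.3 T) = {F_KM(46.3):.0f} T")
```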
We mention that these results are consistent with the ‘back-projection’ approach 26 for the measured SdH frequency ( Supplementary Information ). Analysed in these terms, our data indicate that the quantum oscillations seen in the KM state are due to an FS pocket that is the same as—or very closely related to—the FS pocket in the KI state, which contributes quantum oscillations and a T -linear term in both C and κ x x (refs. 2 , 3 ). This is strongly suggested by the field dependence of F KM (equation ( 3 )), as shown in Fig. 2b . Even a cursory inspection reveals that equation ( 3 ), describing oscillations in the KM state, gives a frequency close to that of the KI state oscillations when extrapolated back to H I–M . This is true for all angles θ at which the oscillations were measured; using appropriate F 0 values (Supplementary Table 1 ) and substituting fields B = μ 0 H I–M ( θ ) on the phase boundary (minima in the PDO data; Fig. 1a ) into equation ( 3 ) gives the frequencies shown as magenta diamonds in the inset of Fig. 2b . The θ dependence of F KM ( μ 0 H I–M ) thus deduced tracks the behaviour of the dHvA frequencies measured in the KI state, albeit with an offset of ~100 T (Fig. 2b ). This offset may be due to a discontinuous change in F at the phase boundary; however, it could also result from the potential uncertainty in determining the H position of any phase transition associated with a valence change 27 . Substituting μ 0 H I–M values decreased by ~0.8 T into equation ( 3 ) yields an exact match of F KM with the quantum oscillation frequencies observed in the KI state (Fig. 2b , inset). Further evidence that the same FS pocket contributes to quantum oscillations in both KI and KM states comes from the cyclotron masses. As shown in Fig. 2c , a cyclotron mass m * ≈ 7 m 0 ( m 0 is the free electron mass) is deduced from the T dependence of the quantum oscillations at fields just above and just below the I–M transition ( m * in the KI state is obtained from the dHvA oscillations, which we suggest are more fundamental; see Methods). As mentioned above, m * in the KM state is enhanced by increasing the field. This effect is also captured by the unusual field-dependent Zeeman splitting of the SdH peaks (Extended Data Fig. 2 , inset). In Fig. 2d , we plot the spin-splitting factor S (ref. 25 ) calculated from the SdH data (Methods). The field enhancement of S generally tracks that of m *, consistent with elementary expectations of Landau quantization (Methods). The scaling yields a g factor of ~0.084, much smaller than that for typical weakly interacting electron systems ( g ≈ 2) (ref. 25 ). The consistency of the frequency and mass across the I–M phase boundary indicates that the novel quasiparticles detected in the KI state of YbB 12 1 , which are probably charge neutral 3 , also cause the SdH effect in the KM state (see Methods for further discussion). The unusual nature of the KM state is further revealed by magnetotransport experiments. To reduce Joule heating as the KI state is traversed, we used a pulsed-current technique (Methods). Current is only applied to the sample when H > H I–M , as shown in the inset of Fig. 3a . Below 10 K, very weak longitudinal magnetoresistance (MR) is observed above 50 T (Extended Data Fig. 3 ) and is preserved up to ~68 T (Extended Data Fig. 2 ), permitting an analysis of the T dependence of resistivity ρ in the KM state. Figure 3a,b shows ρ at 55 T as a function of T and T 2 , respectively. The ρ − T curve shows a maximum at T * = 14 K. 
With decreasing T , a linear T dependence develops below 9 K and extends down to T ≈ 4 K; subsequently, a Fermi-liquid-like T 2 behaviour is established below T FL = 2.2 K. The overall behaviour of ρ ( T ) mimics that of a typical Kondo lattice, where Kondo coherence develops at T * and a heavy Fermi liquid state forms below T FL (ref. 28 ). The residual resistivity ρ ( T → 0) ≈ 0.34 mΩ ⋅ cm indicates that the KM state may be classified as a ‘bad metal’. Fig. 3: Temperature dependence of resistivity in the metallic state. a , The resistivity of the YbB 12 sample at μ 0 H = 55 T plotted as a function of T . Both current I and magnetic field were applied along the [100] direction. The hollow symbols are data measured using the pulsed-current technique (Methods). The solid symbols are data taken with a constant excitation in a 3 He gas environment. A maximum in ρ appears at T * = 14 K. The dashed line is a linear fit of ρ ( T ) from 4 K to 9 K. The inset illustrates the magnetic-field pulse and current pulse in our measurement in the time domain. b , The same data plotted against T 2 . The dotted line is a linear fit to the 4 He liquid data, showing the behaviour of the T 2 dependence below T FL = 2.2 K. The error bars in a and b represent the noise level for each MR curve. A 1 and A 2 are the T -linear coefficient and T 2 coefficient of ρ (resistivity), respectively. c , The deviation of YbB 12 from the KW relation. We use the value of the Sommerfeld coefficient γ reported for YbB 12 in ref. 2 . The data points for transition metals, d -electron oxides, and Ce- and U-based heavy fermions are taken from refs. 39 , 40 , whereas the data for Yb-based compounds are taken from refs. 29 , 30 . Owing to the relatively high resistivity in the KM state, the Kadowaki–Woods (KW) ratio A 2 / γ 2 ( A 2 is the T 2 coefficient of resistivity) is surprisingly large. Using the value of A 2 obtained from the fit in Fig. 3b and the Sommerfeld coefficient given by the pulsed-field heat capacity measurements ( γ ≃ 63 mJ mol −1 K −2 at 55 T (ref. 2 ); see Methods), the estimated KW ratio is 1.46 × 10 −2 μΩ cm K 2 mol 2 mJ −2 , three and four orders of magnitude larger than typical values for heavy-fermion compounds and transition metals, respectively (Fig. 3c ). Such an abnormal KW ratio cannot be explained by the degree of degeneracy of the quasiparticles, which tends to suppress the KW ratio in many Yb-based systems 29 , 30 . It should be clear that the fermions responsible for the high-field quantum oscillations cannot, on their own, account for the transport and thermal properties of the KM state. First, even if they were capable of carrying charge, their field-dependent SdH frequency and mass would yield strong MR (Methods); the negligible MR observed suggests that an additional, much more effective conducting channel is present. Second, the FS pocket deduced from the SdH oscillations would only account for 4–5% of the observed γ (Methods). Therefore, a separate FS of more conventional heavy fermions is required to account for the charge transport properties and large γ of the KM state. The heavy effective mass of these fermions (Methods) and the relatively high T > 0.5 K used for our measurements would prevent the observation of their quantum oscillations, even in a very clean system (a combination of dilution refrigerator and pulsed magnet is under long-term development to overcome such challenges).
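For concreteness, the KW arithmetic can be inverted: given the quoted ratio and γ, the implied A2 follows directly. A minimal sketch; the heavy-fermion and transition-metal benchmarks are the commonly cited universal values, not numbers from this paper.

```python
# Inverting the Kadowaki-Woods (KW) ratio quoted above. gamma is the 55 T
# Sommerfeld coefficient from ref. 2; the benchmark ratios are the commonly
# cited universal values for heavy fermions and transition metals.
gamma = 63.0                       # mJ mol^-1 K^-2
kw = 1.46e-2                       # micro-ohm cm K^2 mol^2 mJ^-2

A2 = kw * gamma**2                 # implied T^2 coefficient of resistivity
print(f"A2 ~ {A2:.0f} micro-ohm cm K^-2")                  # ~58

a_hf, a_tm = 1.0e-5, 0.4e-6        # heavy-fermion / transition-metal benchmarks
print(f"{kw / a_hf:.0f}x heavy-fermion value, {kw / a_tm:.0f}x transition-metal value")
```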
The ‘bad-metal’ properties of the KM state, which indicate low charge-carrier mobilities and thus create additional impediments to the observation of quantum oscillations, give further support to the idea that the SdH effect is due to unconventional quasiparticles that are distinct from heavy electrons. Our results imply an intriguing two-fluid picture in YbB 12 that includes (1) an FS pocket of quasiparticles obeying Fermi–Dirac statistics but contributing little to the transport of charge and (2) more conventional charged fermions. For brevity, we refer to the particles in (1) as neutral fermions (NFs). These NFs cause quantum oscillations in both KI and KM states, whereas the particles in (2) dominate the electrical transport properties and the low- T heat capacity in the KM state. Under this two-fluid description, the I–M transition produces a sudden increase in the density of particles in (2), which changes from a thermally excited low-density electron gas in the KI state to a dense liquid of heavy, charged quasiparticles in the KM state. As a result, when H increases past H I–M , the much enhanced number of states available close to the Fermi energy acts as a ‘reservoir’ into which quasiparticles from the NF FS can scatter or transfer (for analogous situations in other materials, see refs. 31 , 32 ). In this sense, the falling SdH frequency parameterized by equation ( 3 ) can be interpreted as an indication that the NF FS becomes less energetically favourable in the KM state; the availability of the KM ‘reservoir’ means that quasiparticles can transfer out and thus be depopulated. As the NF FS shrinks, the effective mass increases (Fig. 2c ), possibly due to non-parabolicity of the corresponding band and/or field-induced modification of the bandwidth or interactions contributing to the effective mass renormalization 33 . Also, the fact that the quantum oscillations caused by the NF FS are strongly affected by the I–M transition (that is, the frequency becomes dependent on B and Zeeman splitting becomes evident in the KM state) shows that they are an intrinsic property of YbB 12 , laying to rest suggestions that they are caused by a minority phase (cf. the Al flux proposal for SmB 6 (ref. 11 ); see Extended Data Fig. 4 for further evidence of their absence) or a surface effect. The survival of NFs above the gap closure and their coexistence with charged excitations in a Fermi-liquid-like state 17 , 34 , 35 leads to an interesting scenario; in such a state, Luttinger’s theorem may be violated and a continuous variation of the FS properties is allowed 35 . In addition, the interaction of a relatively conventional FS with an ensemble of quasiparticles unable to transport charge is a feasible prerequisite for ρ varying linearly with T (Fig. 3a ) 36 . We stress here that the continuous evolution of both FS size and cyclotron mass, the small g factor and interactions with more conventional charge carriers are unusual characteristics of NFs that should be addressed by future theoretical works. Finally, a scaling behaviour between the magnetization and B dependence of orbit area is observed ( Supplementary Information ), which may suggest underlying non-trivial magnetic properties in the KM state. Indeed, the ‘offset’ field H * needed to linearize the Landau diagrams (Fig. 2a ) bears a qualitative similarity to the gauge field 37 in the composite fermion interpretation of the two-dimensional fractional quantum Hall effect.
Whether this similarity is coincidental, however, is still an open question. In three-dimensional materials, an emergent gauge field, which can be felt by the charge carriers and manifests itself in transverse transport measurements, might originate from chiral spin textures in a skyrmion lattice phase 38 . Nevertheless, such a topologically non-trivial magnetic structure has not yet been proposed for YbB 12 , and the potential physical importance of H * awaits further investigation. Methods Sample preparation and pulsed-field facilities YbB 12 single crystals were grown by the travelling-solvent floating-zone method 41 . The two samples studied in this work were cut from the same ingot and shown to have almost identical physical properties, including the SdH oscillations below the I–M transition, in our previous investigations 1 , 3 . The YbB 12 sample characterized in the magnetostriction and magnetization ( M ) measurements corresponds to sample N1 in ref. 1 and crystal 2 in ref. 3 , whereas the high-field MR was measured in the YbB 12 sample N3 in ref. 1 , which is also crystal 1 in ref. 3 . Both samples were used in the PDO experiments. The PDO data from sample N3 taken at a fixed field direction were published elsewhere 1 . X-ray diffraction measurements on sample N1 show clear (00 l ) series Bragg peaks (Extended Data Fig. 4 ) with no indication of crystalline impurity phases. Magnetostriction and M of YbB 12 samples were measured in a capacitor-driven 65 T pulsed magnet at the National High Magnetic Field Laboratory (NHMFL), Los Alamos. In the PDO and MR measurements, fields were provided by 65 T pulsed magnets and a 75 T duplex magnet. Temperatures down to 500 mK were obtained using a 3 He immersion cryostat. A 4 He cryostat was also used for MR measurements above 1.4 K. Magnetostriction measurements The linear magnetostriction Δ L / L of YbB 12 was measured using fibre Bragg grating dilatometry 42 , 43 . In our setup (Extended Data Fig. 1 ), the dilatometer is a 2-mm-long Bragg grating contained in a 125-μm-diameter telecom-type optical fibre. The oriented YbB 12 single crystal was attached to the section of fibre with the Bragg grating using a cyanoacrylate adhesive. The crystallographic [100] direction was aligned with the fibre, which is also parallel to H . Thus, we measure the longitudinal magnetostriction along the a axis of cubic YbB 12 . The magnetostriction Δ L / L was extracted from the shift of the Bragg wavelength in the reflection spectrum 43 . The signal from an identical Bragg grating on the same fibre with no sample attached was subtracted as the background. In a paramagnetic metal, the high-field longitudinal magnetostriction contains both M 2 and M 3 terms 44 . In this sense, the power-law H dependence of Δ L / L with an exponent of ~3.5 (Extended Data Fig. 1 b) is consistent with the weak superlinear M in YbB 12 at 40 K (ref. 21 ). As T decreases, Δ L / L decreases and a non-monotonic field dependence develops at 30 K (Extended Data Fig. 1 b). We note that the fast suppression of Δ L / L coincides with the sharpening of the I–M transition in the derived susceptibility below 30 K (ref. 45 ), suggesting an additional energy scale in YbB 12 that is much lower than the Kondo temperature T K ≈ 240 K and the gap-opening temperature T g ≈ 110 K (ref. 20 ). Below 5 K, Δ L / L becomes quite small and a step-like feature is observed with an onset at μ 0 H = 46.3 T, perfectly aligned with the sudden increase in M (Extended Data Fig. 1 c,d).
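For orientation, the conversion from a measured Bragg-wavelength shift to strain in such a fibre-Bragg-grating dilatometer can be sketched as follows. This is a generic illustration: the photo-elastic coefficient and the 1550 nm telecom Bragg wavelength are standard assumed values for silica fibre, not parameters quoted in this work.

```python
# Generic FBG dilatometry relation: d(lambda_B)/lambda_B = (1 - p_e) * strain.
p_e = 0.22          # photo-elastic coefficient of silica fibre (assumed)
lambda_B = 1550e-9  # Bragg wavelength in m (typical telecom grating, assumed)

def strain_from_shift(d_lambda):
    """Longitudinal strain Delta L / L from the Bragg-wavelength shift (m)."""
    return d_lambda / ((1.0 - p_e) * lambda_B)

# Example: a ~2.4 pm wavelength shift corresponds to a strain of ~2e-6,
# the order of magnitude of the step in Delta L / L discussed below.
print(f"{strain_from_shift(2.4e-12):.2e}")   # ~2.0e-06
```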
We identify this characteristic onset field, μ 0 H = 46.3 T, as the I–M transition field H I–M at which a metamagnetic transition also occurs 2 , 22 , 45 . Negative volume magnetostriction is a characteristic of mixed-valence Yb compounds: the volume of magnetic Yb 3+ (4 f 13 ) is 4.6% smaller than that of non-magnetic Yb 2+ (4 f 14 ), so a volume decrease with increasing H is expected as the valence rises 46 , 47 . The step-like decrease at H I–M (Extended Data Fig. 1 c) may, therefore, be evidence that the sudden shrinkage results from a valence transition of the Yb ions. Using a simple isotropic assumption, the change in volume magnetostriction at H I–M is δ (Δ V / V ) = 3 δ (Δ L / L ) ≃ 6 × 10 −6 , corresponding to a valence increase of 0.00013. Such a small average valence enhancement implies a fairly weak and incomplete breakdown of the Kondo screening. Consequently, the state immediately above H I–M is confirmed to be a KM in which mixed-valence features persist. Magnetization measurements M was measured using a compensated-coil susceptometer 48 , 49 . The 1.5-mm-bore, 1.5-mm-long, 1,500-turn coil was made of 50 gauge high-purity copper wire. The sample was inserted into a 1.3-mm-diameter non-magnetic ampoule that can be moved in and out of the coil. When pulsed fields are applied, the coil picks up a voltage signal V ∝ (d M /d t ), where t is time. Numerical integration is used to obtain M , and a signal from the empty coil measured under identical conditions is subtracted. Pulsed-field M data are calibrated using M of a YbB 12 sample of known mass measured in a vibrating sample magnetometer (Quantum Design) 3 . As shown in Extended Data Fig. 1 d, a metamagnetic transition occurs at 46.3 T, coinciding with the onset of the step-like feature in the magnetostriction. This observation further confirms the location of H I–M in our YbB 12 samples. At the highest H used in this experiment, M ≈ 1 μ B /Yb, so that M contributes only ~0.2% of B . Therefore, we can ignore the M term and equate B to the external magnetic field, that is, B ≈ μ 0 H .
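The two back-of-envelope numbers above, the valence shift inferred from the volume step and the smallness of μ 0 M relative to B , can be reproduced in a few lines. The YbB 12 lattice constant used to obtain the Yb number density (a ≈ 7.47 Å, four formula units per cubic cell) is an assumed literature value, not a figure quoted in this paper.

```python
import numpy as np

mu0, mu_B = 4e-7 * np.pi, 9.274e-24   # SI constants
a = 7.47e-10                           # assumed lattice constant (m)
n_Yb = 4 / a**3                        # Yb ions per m^3

# (1) Valence shift from the magnetostriction step at H_I-M:
dV_over_V = 6e-6       # = 3 * delta(Delta L / L), quoted in the text
vol_contrast = 0.046   # 4.6% volume difference between the Yb valence states
print(f"valence change ~ {dV_over_V / vol_contrast:.5f}")      # ~0.00013

# (2) Fraction of B supplied by the magnetization at the highest fields:
M = n_Yb * 1.0 * mu_B  # M ~ 1 mu_B per Yb, converted to A/m
print(f"mu0*M / B ~ {mu0 * M / 55.0 * 100:.2f} %")             # ~0.2%
```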
Radio-frequency measurements of resistivity using the PDO technique The PDO circuit 24 , 50 permits convenient contactless measurements of the resistivity of metallic samples in pulsed magnetic fields. In our experiments, a 6–8 turn coil made from 46 gauge high-purity copper wire is tightly wrapped around the YbB 12 single crystals and secured using GE varnish. The coil is connected to the PDO, forming a driven LC tank circuit with a resonant frequency of 22–30 MHz at cryogenic T and H = 0. The output signal is fed to a two-stage mixer/filter heterodyne detection system 24 , with mixer intermediate frequencies provided by a dual-channel B&K Precision function/arbitrary waveform generator. The intermediate frequency of the second mixer was 8 MHz, whereas the intermediate frequency of the first mixer was adjusted to bring the final frequency down to ~2 MHz. The resulting signal was digitized using a National Instruments PXI-5105 digitizer. Considering all the contributions, the shift in PDO frequency f due to H is written as 1 , 24 $$\Delta f=-a\Delta L-b\Delta R,$$ (4) where a and b are positive constants determined by the operating frequency and by the capacitances, resistances and inductances in the circuit; L is the coil inductance; and R is the resistance of the coil wire and cables. In the case of a metallic sample, the coil inductance L depends on the skin depth λ of the sample. If we assume that the sample's magnetic permeability μ and the coil length stay unchanged during a field pulse, we have Δ L ∝ ( r − λ )Δ λ , where r is the sample radius. At angular frequency ω , the skin depth is proportional to the square root of resistivity ρ : $$\lambda =\sqrt{\frac{2\rho }{\omega \mu }}.$$ (5) Therefore, for a metallic sample, the resonance shift Δ f reflects the sample MR and the detected quantum oscillations are due to the SdH effect. In YbB 12 , the PDO measurement only detects the signal from the sample in the high-field KM state, that is, when H > H I–M (ref. 1 ). (As a rough check, taking the KM-state residual resistivity ρ ≈ 0.34 mΩ ⋅ cm, f ≈ 25 MHz and μ ≈ μ 0 , equation ( 5 ) gives λ ≈ 0.2 mm, below the millimetre-scale sample dimensions.) In the low-field KI state, the sample is so resistive that the skin depth λ is larger than the sample radius r . As a result, Δ f mainly comes from the MR of the copper coil 24 . The ‘dip’ in the PDO f in Fig. 1a consequently indicates where the skin depth is comparable to the sample radius and provides an alternative means to find H I–M . We note that the H I–M value assigned to the onset of the ‘dip’ feature (Supplementary Table 1 ) is ~0.2 T lower than the metamagnetic transition shown in Extended Data Fig. 1 . Onsager relationship for a field-dependent FS The Onsager relation 25 relates the frequency F of quantum oscillations to the FS extremal orbit area A : \(F=\frac{\hslash }{2\uppi {\rm{e}}}A\) . Textbook derivations 25 invoke the correspondence principle to give an orbit-area quantization condition \((N+\lambda )\frac{2\uppi {\rm{e}}B}{\hslash }=A\) , where N is a quantum number and λ is a phase factor. The derivation makes no assumptions about A being constant, so for a field-dependent A = A ( B ), we can write $$(N+\lambda )\frac{2\uppi {\rm{e}}{B}_{N}}{\hslash }=A({B}_{N}),$$ (6) where B N is the magnetic induction at which the N th oscillation feature (peak, valley, etc.) occurs. Evaluating equation ( 6 ) for N and N + 1 and taking the difference gives $$\frac{A({B}_{N+1})}{{B}_{N+1}}-\frac{A({B}_{N})}{{B}_{N}}=\frac{2\uppi {\rm{e}}}{\hslash }.$$ (7) From an experimental standpoint, in materials where F is constant, a particular feature of a quantum oscillation is observed whenever \(N+\lambda ^{\prime} =\frac{F}{B}\) ; \(\lambda ^{\prime}\) is another phase factor. Allowing F to vary (that is, F = F ( B )), evaluating the expression for N and N + 1 and taking the difference yields $$\frac{F({B}_{N+1})}{{B}_{N+1}}-\frac{F({B}_{N})}{{B}_{N}}=1.$$ (8) A comparison of equations ( 7 ) and ( 8 ) shows that the Onsager relation still holds, that is, \(F(B)=\frac{\hslash }{2\uppi {\rm{e}}}A(B)\) . In Fig. 2a , B * = μ 0 H * = 41.6 T is subtracted from the applied field to yield linear Landau-level index diagrams for θ ≤ 20.7°. The resulting fit is described by \(N+\lambda =\frac{{F}_{0}}{({B}_{N}-{B}^{* })}\) (equation ( 2 )), which can be written in terms of a B -dependent frequency \(N+\lambda =\frac{F({B}_{N})}{{B}_{N}}\) , where \(F(B)=\frac{{F}_{0}B}{(B-{B}^{* })}\) (equation ( 3 )). Following the reasoning given above, this is associated with a B -dependent extremal area $$A(B)=\frac{{A}_{0}}{B-{B}^{* }}B,\ {\rm{where}}\ {A}_{0}=\frac{2\uppi {\rm{e}}}{\hslash }{F}_{0}.$$ (9) Note that we have not considered the Zeeman splitting of the peaks; therefore, equations ( 3 ) and ( 9 ) describe the average of the spin-up and spin-down components.
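The bookkeeping behind equations ( 2 ) and ( 3 ) can be made concrete with a short sketch: indices generated from the B -dependent frequency are exactly linear against 1/( B − B *), which is how Fig. 2a linearizes the Landau diagrams. The value of F 0 below is back-computed from the F (55 T) = 231.9 T quoted later in the Methods and is illustrative only.

```python
import numpy as np

B_star = 41.6                           # offset field B* = mu0*H*, T
F0 = 231.9 * (55.0 - B_star) / 55.0     # ~56.5 T, from F(B) = F0*B/(B - B*)

def F(B):
    """Equation (3): the B-dependent SdH frequency."""
    return F0 * B / (B - B_star)

# Indices at which oscillation features occur: F(B_N)/B_N = N + lambda.
B = np.linspace(48, 64, 9)
index = F(B) / B                        # = F0 / (B - B*), equation (2)
slope = np.polyfit(1 / (B - B_star), index, 1)[0]
print(np.isclose(slope, F0))            # True: linear Landau diagram recovered
```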
Spin-splitting parameter The spin-splitting parameter S is introduced to describe the Zeeman splitting of the quantum oscillation peaks/valleys. In the most straightforward scenario, the magnetic inductions \({B}_{N}^{+(-)}\) at which the spin-down (spin-up) component of the N th Landau tube reaches the FS are given by $$\frac{F}{{B}_{N}^{\pm }}=N+\lambda \pm \frac{1}{2}S.$$ (10) Therefore, S introduces a term in addition to the phase factor λ that has opposite signs for the spin-down and spin-up Landau sheets. According to equation ( 10 ), the value of S can be obtained by multiplying the interval of the spin-split peaks/valleys (in 1/ B ) by the oscillation frequency F . In Fig. 2d , S for the nonlinear (Fig. 1c ) and linear (Fig. 2a ) Landau-level plots is calculated as S = \({F}_{{\rm{m}}}(1/{B}_{N}^{+}-1/{B}_{N}^{-})\) and S = \({F}_{0}[1/({B}_{N}^{+}-{B}^{* })-1/({B}_{N}^{-}-{B}^{* })]\) , respectively. Here F m represents the oscillation frequency directly extracted from the slope of the nonlinear Landau-level plot ( Supplementary Information ). We mention that these two approaches yield the same values of S (Fig. 2d ; data for θ = 10.9°). As pointed out by Shoenberg 25 , the effect of Zeeman splitting on oscillation extrema is determined by the cyclotron mass m * and g factor: $$S=\frac{1}{2}g\frac{{m}^{* }}{{m}_{0}},$$ (11) which provides a simple method to evaluate the g factor in the system. As shown in Fig. 2d , with a g factor of 0.084, the field dependence of S can be adequately mapped onto the mass enhancement. Such behaviour not only proves the validity of our Zeeman-splitting analysis of the SdH effect but also reveals an unusually small g factor that may reflect the exotic nature of the corresponding quasiparticle band.
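Equation ( 11 ) makes the g -factor evaluation a one-line computation; a minimal sketch using the numbers quoted in the text ( g = 0.084 and m * ≈ 10 m 0 near 55 T):

```python
def S_from_g(g, m_ratio):
    """Equation (11): spin-splitting parameter S = (1/2) g m*/m0."""
    return 0.5 * g * m_ratio

def g_from_S(S, m_ratio):
    """Invert equation (11) to extract the g factor from measured S."""
    return 2.0 * S / m_ratio

S = S_from_g(0.084, 10.0)                              # expected splitting
print(f"S ~ {S:.2f}, g ~ {g_from_S(S, 10.0):.3f}")     # S ~ 0.42, g = 0.084
```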
Resistivity measurements in the KM state From μ 0 H = 0 to 60 T, the resistivity of YbB 12 decreases by five orders of magnitude 1 . As a result, MR measurements in pulsed fields are challenging. If a constant current is used during the entire field pulse, either the signal-to-noise ratio is poor in the KM state or the high resistance in the KI state causes serious Joule heating. To resolve this issue, we developed a pulsed-current technique. The experimental setup is shown in Extended Data Fig. 3 a. A ZFSWHA-1-20 isolation switch is used to apply current pulses with widths less than 10 ms. The switch is controlled by two square-wave pulses generated by a B&K Precision model 4065 dual-channel function/arbitrary waveform generator triggered by the magnet pulse. Thus, a relatively large electric current (2–3 mA) can pass through the sample during a narrow time window within the field pulse (Fig. 3a , inset). Current is applied only when YbB 12 enters the KM state. In our low- T MR measurements, the switch turns on at 47 T during the upsweeps and turns off at 47 T during the downsweeps. To reduce heating due to eddy currents, we measured the longitudinal MR of a needle-shaped sample (length of 6.5 mm between voltage leads; cross-sectional area, 0.33 mm 2 ). As shown in Extended Data Fig. 3 b, the downsweeps still suffer from some Joule heating. Hence, in the main text, we focus on the upsweeps, which show very weak longitudinal MR and should reflect the intrinsic electrical transport properties of the KM state of YbB 12 . (During the experiments, we found that when we attempted to stabilize the target temperature by applying a high heater excitation in an environment of dense 3 He vapour, the temperature reading according to the thermometer could be higher than the actual sample temperature. We removed all the data points measured using this procedure.) According to equation ( 9 ), A ( B ) for the pocket detected in the PDO measurement shrinks by ~45% from 50 T to 60 T. Assuming a spherical FS, this corresponds to an ~60% reduction in the quasiparticle density n . Meanwhile, the cyclotron mass increases by ~60% (Fig. 2b , inset). Consequently, a textbook Drude expression ρ = m */ n e 2 τ ( τ is the relaxation time) predicts that the resistivity would increase by a factor of ~4 from 50 T to 60 T if the electrical transport were dominated by this pocket. In sharp contrast, almost no MR is observed in this field range (Extended Data Fig. 3 ), indicating the negligible contribution of this FS pocket to the electrical transport. On the other hand, the weak MR and T -linear resistivity of the KM state between 4 K and 9 K are in agreement with the predicted behaviour of a marginal Fermi liquid (without macroscopic disorder) or an incoherent metal, which is potential evidence for the coexistence of fermions inactive in the charge transport and more conventional charged fermions 36 . KW ratio For the Sommerfeld coefficient γ in the KM state of YbB 12 , a pulsed-field heat capacity study reports values of 58 and 67 mJ mol −1 K −2 at 49 and 60 T, respectively 2 . A linear interpolation gives γ = 63 mJ mol −1 K −2 at 55 T. Since γ = ( \({\uppi }^{2}{k}_{{\rm{B}}}^{2}\) /3) N ( E F ) and the density of states at the Fermi energy is N ( E F ) = ( m */π 2 ℏ 2 )(3π 2 n ) 1/3 , the Sommerfeld coefficient can be written as $$\gamma =\frac{{\uppi }^{2}{k}_{\rm{B}}^{2}}{3}\frac{{m}^{* }{k}_{{\rm{F}}}}{{\uppi }^{2}{\hslash }^{2}},$$ (12) where k F is the Fermi wavevector. As for the FS pocket detected in the SdH measurement, equation ( 3 ) gives F (55.0 T) = 231.9 T. In a spherical FS model, F = \(\hslash {k}_{\rm{F}}^{2}/2\rm{e}\) and n = \({k}_{\rm{F}}^{3}\) /3π 2 = (2e F / ℏ ) 3/2 /3π 2 ; therefore, F = 231.9 T corresponds to n = 1.99 × 10 19 cm −3 . Also, for 55 T, m * ≃ 10 m 0 , as shown in Fig. 2b (inset). Putting these parameters into equation ( 12 ), we estimate that the FS pocket responsible for the SdH effect could contribute only 4.4% of the measured γ . Consequently, an additional band with a much larger density of states must exist in the KM state of YbB 12 . The unusually large KW ratio (Fig. 3c ) suggests the somewhat unusual nature of the heavy quasiparticles that dominate the charge transport and thermal properties of the KM state. Analysis of the KW ratio shows that, for a single-band strongly correlated system, the value of A 2 / γ 2 does not depend on the strength of correlations but is instead solely determined by the underlying band structure 39 . We consider the simplest case of a single-band isotropic Fermi liquid model. Using the same calculations as in ref. 39 , we have $${A}_{2}=\frac{81{\uppi }^{3}{k}_{\rm{B}}^{2}}{4{\rm{e}}^{2}{\hslash }^{3}}\frac{{({m}^{* })}^{2}}{{k}_{\rm{F}}^{5}}.$$ (13) Taken together, equations ( 12 ) and ( 13 ) yield k F = 2.15 nm −1 (corresponding to n = 3.36 × 10 20 cm −3 ) and a rather large effective mass of m * = 90 m 0 . Such a heavy mass is unusual for Yb-based mixed-valence compounds, but explains the anomalous KW ratio as well as the absence of SdH oscillations due to these quasiparticles in the PDO response.
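A minimal sketch reproducing these Methods estimates numerically (SI units throughout). The YbB 12 molar volume is derived from an assumed lattice constant (a ≈ 7.47 Å, four formula units per cell) and is our assumption, not a value quoted in the paper; the inversion of equations ( 12 ) and ( 13 ) follows by substituting m * from equation ( 12 ) into equation ( 13 ).

```python
import numpy as np

hbar, e, kB, m0, NA = 1.0546e-34, 1.602e-19, 1.381e-23, 9.109e-31, 6.022e23
Vm = (7.47e-10) ** 3 / 4 * NA        # molar volume, m^3/mol (assumed)

# (1) SdH pocket: carrier density and its share of gamma at 55 T
kF = np.sqrt(2 * e * 231.9 / hbar)   # spherical-FS kF from F = 231.9 T
n = kF**3 / (3 * np.pi**2)
gamma_pocket = (kB**2 * (10 * m0) * kF / (3 * hbar**2)) * Vm   # eq. (12), m* = 10 m0
print(f"n = {n * 1e-6:.2e} cm^-3")                              # ~2.0e19
print(f"pocket gamma fraction = {gamma_pocket * 1e3 / 63:.1%}") # ~4.4%

# (2) Heavy band: invert eqs (12) and (13), with A2 = KW * gamma^2
gamma_v = 63e-3 / Vm                                   # gamma per unit volume
A2 = 1.46e-2 * 63**2 * 1e-8                            # uOhm cm K^-2 -> Ohm m K^-2
kF_h = (729 * np.pi**3 * hbar * gamma_v**2 / (4 * e**2 * kB**2 * A2)) ** (1 / 7)
m_h = 3 * hbar**2 * gamma_v / (kB**2 * kF_h)
print(f"kF = {kF_h * 1e-9:.2f} nm^-1, m* = {m_h / m0:.0f} m0")  # ~2.15, ~90
```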
SdH oscillations due to charge-neutral quasiparticles Magnetic quantum oscillations are observed in both M and ρ in the KI state of YbB 12 , with their behaviours following the description of the LK formula 1 . The size of the FS and the effective mass of the quasiparticles inferred from the oscillations are consistent with the fermion-like contribution to the thermal conductivity 3 . A natural explanation—given the high electrical resistivity—is that the thermal conductivity and quantum oscillations are due to charge-neutral fermions. (We mention that a hypothetical model of the SdH effect in a system containing gapless neutral fermions and gapped charged bosons has been established 18 , which predicts the LK behaviour above a certain T .) Of the two oscillatory effects observed in the KI state, the dHvA effect is the more fundamental. As pointed out by Lifshitz, Landau and others 25 , it involves oscillations in a thermodynamic function of state ( M ) that may be related to the electronic density of states with a minimum number of assumptions. The fact that M in the KI state oscillates as a function of H can only be due to the oscillation of the fermion density of states and consequently their free energy. Even in conventional metals, quantum oscillations in ρ are harder to model quantitatively. A starting point was suggested by Pippard 51 ; the rate at which quasiparticles scatter depends, via Fermi's golden rule, on the density of states available. Hence, if the quasiparticle density of states oscillates as a function of H , the scattering rate τ −1 and consequently ρ will also oscillate proportionally, leading to the SdH effect. Before modifying this idea to tackle the SdH effect in the KI state, we remark that the H -dependent frequency of oscillations in the KM state suggests that the exchange of fermions between charge-neutral and conventional FS sections occurs readily, probably via low-energy scattering. This is also supported by the T -linear resistivity 36 (Fig. 3a ). The rate at which this scattering happens will obviously reflect the joint density of fermion states. Returning to the KI state of YbB 12 , ρ is thought to be due to charge carriers thermally excited across the energy gap, plus contributions from states in the gap that lead to ρ saturation at low T . Following the precedent of the KM state, it is likely that fermions in the KI state scatter back and forth between the charge-neutral states and the more conventional bands. The situation is more complicated than Pippard's original concept 51 because scattering between two bands is involved (for example, ref. 52 ). Nevertheless, the amount of scattering will be determined by the joint density of states; because the density of states of the neutral quasiparticles oscillates with H , so does ρ . Unlike in conventional metals, the density of charge carriers depends on T , so the amplitude of the SdH oscillations in the KI state follows a different T dependence from that of the dHvA oscillations, as observed in experiments 1 . As described by equation ( 4 ), the PDO f in the KM state is determined by the skin depth, that is, the conductivity. As described above, the scattering of fermions occurs between the charge-neutral states and more conventional bands; in the KM state, our conductivity results and the heat capacity data reported by others suggest that the latter form an FS of heavy charged fermions. The SdH effect in the conductivity caused by the oscillatory density of states of the neutral quasiparticles in magnetic fields is detected in the PDO experiments. By contrast, any intrinsic quantum oscillations due to the metallic FS are suppressed by a combination of the very heavy effective mass and the relatively high T ( ≳ 0.5 K) of our measurements.
Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. | A recent series of experiments at the National High Magnetic Field Laboratory (National MagLab) at Los Alamos National Laboratory leveraged some of the nation's highest-powered nondestructive magnets to reveal an exotic new phase of matter at high magnetic fields. The experiments studied the unusual Kondo insulator ytterbium dodecaboride (or YbB12) and were the first results from the new 75-tesla duplex magnet housed at the National MagLab's Pulsed Field Facility at Los Alamos. "This magnet and the resulting experiments are the first fruits of the National Science Foundation-supported pulsed magnet surge," said Michael Rabin, director of the Pulsed Field Facility at Los Alamos. "The surge is creating new science capabilities in the 75-85 tesla range—leading eventually to an expanded portfolio of some of the most powerful nondestructive magnetic fields in the world." Researchers from the University of Michigan, Kyoto University and Los Alamos conducted the research, published last week in Nature Physics. In Kondo insulators, there is an unusual quantum-mechanical mixing of mobile electrons and magnetic atoms, making these materials attractive as model systems for basic science, electronic devices, and possibly quantum computing. Unlike simple metals and insulators, YbB12 exhibits properties of both—its electrical resistance behaves like that of an insulator, but it also clearly shows quantum oscillations at high magnetic fields that are a fundamental metallic property. "A plethora of theories has emerged to account for such behavior," said John Singleton, a fellow at the MagLab's Los Alamos campus and co-author on the paper. "Some physicists believe this is a condensed-matter incarnation of neutral Majorana fermions, entities normally explored in particle physics." To test these theories, the research team wanted to observe how the neutral fermions that they found in YbB12 responded to extreme conditions. The high magnetic fields in the new 75-tesla duplex magnet available at the National MagLab's Pulsed Field Facility were used to suppress the insulating properties of YbB12 and measure quantum oscillations and various properties that are affected by the presence of the neutral fermions. "The extra 10 tesla above our standard pulsed magnets provided by the duplex magnet enabled this new state of matter—exotic fermions gradually being drowned in a sea of normal electrons—to be tracked across a wide range of magnetic fields for the first time," said Singleton. This confirmed that the phenomena observed were definitely associated with the neutral fermions and provided a test of the various theoretical models. The duplex magnet is a result of a Pulsed Field Facility "magnet surge" supported by the National Science Foundation. The purpose of the surge is to create new and expanded science capabilities in the 75-85 tesla range. The duplex magnet system is available to researchers from around the world to conduct their own experiments, and an 85-tesla duplex magnet and necessary supporting technology are under development. These new pulsed magnets are powered solely by capacitor banks without relying on the Los Alamos generator that powers the much larger 100 tesla magnet. 
The generator is currently offline for repairs and maintenance, so magnets from the surge fill an important gap for condensed-matter physics in high magnetic fields. The new magnets are called duplex magnets because they are made from two concentric electromagnet coils (solenoids) that are powered independently from separate capacitor-bank modules. When the generator is back online, technology developed by the pulsed magnet surge will be combined with Los Alamos' largest magnets to further expand the MagLab's science capabilities. | 10.1038/s41567-021-01216-0 |
Medicine | Research shows mass production can make customised PPE for healthcare workers | Luke N. Carter et al, A feasible route for the design and manufacture of customised respiratory protection through digital facial capture, Scientific Reports (2021). DOI: 10.1038/s41598-021-00341-3 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-021-00341-3 | https://medicalxpress.com/news/2021-11-mass-production-customised-ppe-healthcare.html | Abstract The World Health Organisation has called for a 40% increase in personal protective equipment manufacturing worldwide, recognising that frontline workers need effective protection during the COVID-19 pandemic. Current devices suffer from high fit-failure rates leaving significant proportions of users exposed to risk of viral infection. Driven by non-contact, portable, and widely available 3D scanning technologies, a workflow is presented whereby a user’s face is rapidly categorised using relevant facial parameters. Device design is then directed down either a semi-customised or fully-customised route. Semi-customised designs use the extracted eye-to-chin distance to categorise users in to pre-determined size brackets established via a cohort of 200 participants encompassing 87.5% of the cohort. The user’s nasal profile is approximated to a Gaussian curve to further refine the selection in to one of three subsets. Flexible silicone provides the facial interface accommodating minor mismatches between true nasal profile and the approximation, maintaining a good seal in this challenging region. Critically, users with outlying facial parameters are flagged for the fully-customised route whereby the silicone interface is mapped to 3D scan data. These two approaches allow for large scale manufacture of a limited number of design variations, currently nine through the semi-customised approach, whilst ensuring effective device fit. Furthermore, labour-intensive fully-customised designs are targeted as those users who will most greatly benefit. By encompassing both approaches, the presented workflow balances manufacturing scale-up feasibility with the diverse range of users to provide well-fitting devices as widely as possible. Novel flow visualisation on a model face is presented alongside qualitative fit-testing of prototype devices to support the workflow methodology. Introduction The COVID-19 pandemic has highlighted the need for effective respiratory personal protective equipment (PPE), particular FFP3/N99 filtering standards, that may be comfortably worn by front-line workers for prolonged periods. At the time of writing, alert levels in most countries show the virus in ‘general circulation’, and with vaccine deployment in the early stages alongside the threat of further mutations, front-line workers face an ongoing need for filtering PPE. On 3rd March 2020, a statement by the World Health Organisation 1 called for a 40% increase in PPE manufacturing worldwide to meet growing demand, warning of serious threat to health and life of frontline workers without adequate supplies of effective face-masks and respirators. Acquisition of respiratory PPE has become globally competitive with many local jurisdictions concerned about supply availability 2 . Against this backdrop this research aims to address the persistent problems of poor respirator fit rates and user comfort by proposing a system exploiting existing accessible digital technologies to rapidly capture and process facial topographies. 
We further demonstrate the ability to automate parameterisation, enabling individual fit selection and/or customisation of a new respirator device. Assuming appropriate filtering material, typically melt-blown polypropylene, the effectiveness of a respirator device is largely dictated by the fit or seal it forms with the user's face. Quantitative and qualitative physical test methods are both used during respirator selection; the former provides superior sensitivity and the latter a robust simplicity 3 . Both are time consuming for larger workforces 4 . Additionally, manufacturers recommend users perform a 'fit check' upon every donning, by covering the filter and inhaling to feel for a good facial seal. The reliability of this method is questionable, with Lam et al . 5 reporting accuracy of self-checks between 57.5 and 70.5% compared against quantitative results. Likewise, fit test failure rates have been widely studied, with pass rates varying widely depending on the devices tested, testing methods, and usage situation. Shaffer and Janssen 6 reported pass rates of 0–88% across ten studies examining different N95 models in a review highlighting the extent of the problem. An investigation by Zhuang et al . 7 examined 101 different respirators using 25 subjects. Only 32% of the devices achieved acceptable fit in at least one of three donnings for > 75% of participants. Burgess and Mashingaidze 8 reported a failure rate of 69% in devices across ten pharmaceutical companies, with success shown not to be associated with usage frequency, years of experience, or user training. Further data by Foereland et al . 9 showed a more promising 62% pass rate for 701 fit tests across 14 different respirators in the smelting industry. It also highlighted pass rates of 92% and 100% for the only silicone-to-skin interfacing respirators with changeable filters included in the study. User gender is a notable factor in fit success rates, calling into question the gender- and racially blended anthropometric datasets currently used for respirator design and standard testing. McMahon et al . 10 reported 95.1% fit success for male participants compared with 85.4% for females and highlighted that younger women are more likely to fail fit tests. One design in the study by Lam et al . 5 revealed pass rates of 72.2% vs. 58.1% for males and females respectively, while the study by Foereland et al . 9 reported 63% for males compared with 56% for females. These overall fit test failure rates highlight the extent of the problem, but do little to clarify where and why respirator fit has failed. Only a limited number of studies examine the location of leaks in the facial seal. In a study by Brosseau 11 , the most common sites were the nosepiece (30%), chin (6%), and cheeks (4%). Similarly, Oestenstad and Bartolucci 12 showed the most common leak sites reported were nose and cheek (24.9%), cheek (19.8%), and nose (14.3%). This supports anecdotal evidence from users and clinicians that the nose presents a specific problem region due to the wide variety in users' nasal profiles and the lack of soft tissue able to accommodate mismatch between a standard respirator nose form and the user. One further consideration of widespread respiratory PPE use is user comfort. Throughout the COVID-19 pandemic, images on social media have highlighted the bruising and abrasions that can result from prolonged wearing of disposable PPE, particularly designs with deformable metal nose clips, an issue that is supported by user-focussed studies. Locatelli et al .
13 present reports from health workers in a focus-group setting, with direct statements that respirators leave deep indentations on the user's face, cause skin irritation, that the metal nose bars pinch, and that fitted masks are too tight. Even more concerning is acknowledgement by users that wearing respiratory protection interferes with patient care, both directly from distraction due to the discomfort, and also due to 'rushing' when the user feels as though the facial seal is poor. A study by MacIntyre et al . 14 reported that 52.2% of users (366/701) complained of pressure on the nose during respirator usage, and 59% of participants in a study presented by Radonovich et al . 15 discontinued respirator use within an 8 h period due to discomfort. Given this background, it is clear that a fitted or customised approach to respirator design may dramatically improve the fit, comfort, and ultimately the level of protection that a user receives from their respiratory PPE. 3D printing is a well-positioned technology to support this demand, having already been highly visible in the global COVID-19 response 16 . The concept of customisation using 3D digital data is not new. In 1997, a U.S. military report 17 discussed applying laser 3D scanning to customise the manufacture, fit, and modelling of helmets and face-pieces worn by military personnel. More recently, a rigid polymer proof-of-concept respirator device was customised using 3D scan data by Swennen et al . 18 . It demonstrated a viable method of decentralised respirator production given local shortages of mass-produced PPE; however, it acknowledged that the rigid design may cause skin abrasions during extended use, and it does not present data to verify the technique's efficacy. Likewise, Cai et al . 19 proposed a user-specific rigid polymer interface device for use with traditional N95 masks in an effort to improve comfort. They show a reduced point-load on the user's face when using the device under a neutral expression, but more severe loads with different user expressions, potentially due to the rigid nature of the material. These issues were addressed in a pilot study by Makowski and Okrasa 20 . Their fully-customised 3D printed respirator was produced using an elastomeric polymer, allowing for some level of compliance with the subject's face. The prototype not only met the relevant standards for such a device but showed significantly improved user comfort and did not cause bruising or skin irritation. It highlights the importance of considering the individual user's face when evaluating the effectiveness of respiratory protection and, in the authors' words, shows "A good quality respirator equipped with high-efficiency filters is not nearly enough to ensure a proper protection of workers". A further novel and automated approach to move directly from stereophotogrammetry scan data to a customised design was demonstrated by Shaheen et al. 21 , combining the use of traditional facial landmarks and direct mapping of the facial contour to form the device design. The only notable drawback of these two approaches is the need to customise and produce the entire device for each user, which adds significant time and cost to the process, thus limiting the potential for scale-up. The current research presents a method to rapidly capture and quantify key features from a user's facial profile using non-invasive optical imaging. Key facial parameters drive the design and selection of a semi-customised or fully-customised prototype respirator design as appropriate.
Combining these innovative customisation techniques and intelligent material choice, the fit and comfort of these devices is improved for individual users whilst standardising production where possible to give a feasible larger-scale solution. Methods Process overview Through examination of commercial designs and discussions with end-users regarding problems associated with disposable respirator models, a workflow has been developed to provide improved fit and comfort in a new reusable '3-part' mask design; an overview of this workflow is shown in Fig. 1 a. 3D facial scan data are collected using one of several possible methods. Bespoke post-processing routines extract dimensional parameters from the face, including the distance from eyeline to chin (eye-to-chin distance) and parameters defining a Gaussian fit of the subject's nasal profile. These facial parameters are used to categorise the user into one of a set of mask designs initially defined by examining distributions in a sample cohort. The analysis will inform the user which of these semi-customised designs is suitable to achieve a good fit. Should the user's facial parameters lie outside those of the semi-customised set, then a fully customised approach driven by the initial scan data will be recommended. Figure 1 ( a ) Workflow from scan to product, ( b ) mask design highlighting fully customised and semi-customised aspects, ( c ) experimental setup for flow visualisation used to rapidly assess fit. The general design is shown in Fig. 1 b, consisting of a rigid polymer hard-shell defining the overall shape, a flexible silicone seal providing a compliant interface to the user's face, and a front cap to securely hold standard filter material. It is intended to be simple to manufacture and disinfect between uses. The overall size of the hard-shell and seal is scaled according to the user's eye-to-chin distance in the semi-customised approach, and the nasal profile of the silicone seal follows a Gaussian curve as indicated in Fig. 1 b. The fully customised approach features a silicone seal with its surface fully mapped to the user's facial 3D scan data. The novel bespoke test rig shown in Fig. 1 c was constructed to assess the efficacy of the two design methodologies against a model facial profile via flow visualisation and high-speed photography. Facial data Sample selection A cohort of 100 volunteer participants of mixed facial form, gender and ethnicity, working in a clinical academic centre during April–May 2020, was included (King's College London Institutional Ethics Approval MRPP-19/20-18570). This dataset was combined with a further random sample of 100 three-dimensional facial images of staff and students at the University of Leeds, which originated from a larger data set previously analysed for dynamic facial asymmetry 22 (Dental Research Ethics Committee (DREC) approval (240915/BK/179)). All participants had no previous history of facial surgery or trauma, were aged between 18 and 40 years of age and had symmetrical faces and normal occlusions. Facial images were analysed and only numerical data were generated and shared, in the form of an anonymised spreadsheet. As part of the two listed ethical approval processes, informed consent was obtained from all participants and all methods were performed in accordance with the relevant guidelines and regulations. For this feasibility study, no formal sample size calculation was performed for the combined 200 participants.
Accordingly, quantitative measurements should not be generalised beyond this specific study, although the process feasibility and approach are likely to be valid given the strong accordance of key average facial topographical measurements in the collected sample population with the United States Centers for Disease Control anthropometric averages, against which the developed algorithms for sizing selection were tested. Facial capture Images of volunteers were captured using a stereophotogrammetry three-dimensional (3D) image capture and analysis system (Dimensional Imaging (DI4D), Glasgow, Scotland). Four linked cameras capture simultaneous photographs of the participant, which are then reconstructed into a 3D surface and exported in stereolithographic (*.stl) format. A subset of 9 volunteers from King's College London were also imaged in the same session using the Bellus3D application with an Apple iPhone X TrueDepth camera in order to compare the DI4D and Bellus3D scan methods. Facial parameter extraction Extracting key parameters from facial scan data allows for rapid evaluation of each large and cumbersome scan dataset. Eye-to-chin distance was selected to evaluate facial height, being representative of the typical span in a well-fitting device. Likewise, a Gaussian curve fitted to the 2-dimensional nasal profile, extracted perpendicular to the nasal ridge, describes the general height and width of a well-fitting respirator nose form in two simple numeric parameters, 'A' and 'σ' respectively, where the standard form of a Gaussian curve is: $$f\left( x \right) = Ae^{ - \frac{x^{2}}{2\sigma^{2}}}.$$ (1) This evaluation was performed by a bespoke MATLAB 23 script. Initial user registration informs the correct orientation of the scan data before the three key parameters are extracted. The full logic of this operation is presented schematically in Fig. 2 and the MATLAB source code is available on request to the authors. Figure 2 Diagram showing facial scan data processing logic for key parameter extraction as applied within the custom MATLAB script. ( a ) Shows the orientation stage with blue crosses indicating user input and red dashed lines showing the relevant axes/planes. ( b ) Illustrates the method for eye-to-chin distance measurement. ( c ) Illustrates nose profile extraction. ( d ) Shows the Gaussian fit to the extracted nose profile. Data analysis Extracted facial parameters from the volunteer cohort, using scanning methods from both institutions, were used to determine the semi-customised mask sizes. Categories were established in a two-stage process: first, a normal distribution was fitted to the eye-to-chin distance of the full cohort. Three hard-shell sizes corresponding to eye-to-chin distance were centred on the lower full width at half maximum (FWHM) boundary, the maximum, and the upper FWHM boundary ('Small', 'Medium', 'Large'). A similar process was performed for the distributions of σ values in each size subset to determine the three nose profiles (I, II, III) for each hard-shell size. Thus nine different semi-customised mask designs were established. Corresponding mean ' A ' values were calculated for the subsets to fully define each nasal profile.
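As an illustration of the nasal-profile fit in equation ( 1 ): the authors' pipeline is a bespoke MATLAB script, but the same idea can be sketched in Python/SciPy on a synthetic 2D nose cross-section ( x in mm across the nasal ridge, z the profile height). The data values here are made up for the example only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, sigma):
    """Standard-form Gaussian of equation (1): f(x) = A exp(-x^2 / 2 sigma^2)."""
    return A * np.exp(-x**2 / (2 * sigma**2))

# Synthetic profile: a ~22 mm-high nose with sigma ~ 11, plus measurement noise.
rng = np.random.default_rng(0)
x = np.linspace(-30, 30, 121)
z = gaussian(x, 22.0, 11.0) + rng.normal(0, 0.3, x.size)

# Least-squares fit recovers the two sizing parameters 'A' and 'sigma'.
(A, sigma), _ = curve_fit(gaussian, x, z, p0=(20.0, 10.0))
print(f"A = {A:.1f} mm, sigma = {sigma:.1f}")   # ~22 and ~11
```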
CAD development Computer aided design (CAD) models and methods were developed for both the semi-customised and fully customised design approaches. CAD customisation: semi-customised CAD for the semi-customised design was developed using a combination of Autodesk Inventor 24 and Autodesk Fusion 360 25 as appropriate. The facial interface profile of the hard-shell was designed to be scaled according to eye-to-chin distance whilst keeping the filter geometry constant, enabling a common filter cap geometry and filter area across all design variants. Seal geometry was created using a combination of sweep and loft operations following the hard-shell profile (Fig. 3 a(i)), with a final Boolean subtraction of the hard-shell to ensure a good mate between the two components. In general, a 5 mm silicone thickness was specified between the face and hard-shell, with initial prototyping advising slight thickening of the seal (1–3 mm) in the region of the user's maxilla and chin (Fig. 3 a(ii)). Minimum silicone thicknesses (2 mm) were exceeded in all regions to ensure manufacturability. Finally, the Gaussian nasal profile defined by the A and σ parameters was cut from the inner contour of the seal at the mean nasal ridge angle from the initial cohort, 25°, to the facial plane (Fig. 3 a(iii)). Figure 3 CAD methodology. ( a ) Shows the semi-customised seal design method from the (i) initial sweep, (ii) thickened regions, and finally (iii) cut to form the nasal profile. ( b ) Shows the fully-customised route; (i) initial alignment and offset of the hard-shell from the facial scan data, (ii) import of a generic seal, and (iii) formation of the mould. CAD customisation: fully customised A fully customised design approach was established to accommodate the small minority of users falling outside the standard semi-customised sizes, and two fully-customised seals were produced for testing via this method. Subject facial scan data were imported into Geomagics FreeForm Plus 3D 26 software. An appropriately sized hard-shell was also imported and translated onto the virtual face with a minimum 5 mm offset between the two to accommodate the seal (Fig. 3 b(i)). A generic seal model was imported around the perimeter of the hard-shell and an operator ensured that it intersected the facial profile around the entire perimeter (Fig. 3 b(ii)). Finally, a Boolean subtraction was performed to create the customised surface of the seal and mate it with the generic hard-shell design. Seal mould CAD Both customisation approaches used the same methodology to produce the seal mould geometry. A CAD body was created by offsetting the outer surface by 4 mm and subtracting the seal geometry by Boolean operation. Finally, the mould was sectioned around its perimeter to create the two halves, ensuring that no undercuts exist which would inhibit part removal (Fig. 3 b(iii)). Manufacture of prototypes Rigid components Hard-shell and respirator cap components were 3D printed using a Connex 1 Objet 260 Polyjet printer (Stratasys, UK) in UV-cured resin, RGD720 (UTS 50–65 MPa, elastic modulus 2000–3000 MPa 27 ), with 16 µm layer thickness and 600 dpi x/y resolution. Standard dissolvable support structures were removed following production using a water-jet during post-processing. Silicone seals Seal moulds were 3D-printed from the thermoplastic polymer polylactic acid (PLA) by fused filament fabrication (Cubicon Single Plus 3D, iMakr, UK). The system was fitted with a 0.4 mm diameter nozzle operating at 210 °C and a layer thickness of 0.1 mm. Moulds were produced and used without further post-processing. Completed moulds were sprayed with a wax-based release spray (MACWAX, Technovent, UK) to facilitate silicone removal.
All seals were made using a biocompatible silicone, M511 (Technovent, UK), traditionally used in the manufacture of facial/body prostheses, which was shown to reliably cure in a representative mould during initial tests. It was softened from Shore 25 to Shore 15 with the addition of M513 (10% vol.) softening agent, providing a balance between compliance and structural stability. Moulds were manually packed, secured with cable ties and processed in a pressure chamber (3 h, 55 °C, 2 bar). Following demoulding, silicone trim was manually removed with a blade. Characterisation To evaluate the fit of each mask design, a novel flow visualisation methodology was developed. The respirator filter opening is sealed using a polythene sheet and fitted to a demonstration facial profile. A glycerol vapour (90% glycerol, 10% water) is injected into the mask via an aperture approximately located at the profile's mouth. The vapour fills the mask volume and may be observed escaping in areas of poor or discontinuous seal. High-speed photography records the process and post-processing is used to qualitatively and semi-quantitatively evaluate the seal. A single demonstration facial profile was produced and used for all tests. It consisted of a 4 mm thick silicone layer, analogous to facial tissue, supported by a rigid polymer 3D printed base to better represent the overall compliance of a real face. The silicone was moulded and formed using the same general method and specification as that used for the seal production. Flexible tubing was connected to the rear of the facial profile to enable injection of glycerol vapour. The facial profile was positioned on a stand and illuminated using two spot-lights in front of a matt black background within a fume hood, as shown in Fig. 1 c. High-speed footage was recorded using a high-speed camera (Photron Fastcam SA3, CA, USA) at 250 f.p.s. and with a shutter speed of 500 µs. Maximum illumination was used to allow the lens aperture to be adjusted to f/11 and provide the greatest possible depth of field. Recording was initiated approximately 1 s before manual vapour injection and allowed to continue for approximately 10 s for each test. Post-processing was via a bespoke MATLAB script. Briefly, greyscale values from the first five frames were averaged to create a background image intended to mitigate minor variations between frames. Mean images across ten frames were derived from the raw footage every 25 frames (0.1 s) and subtracted from the background image. A threshold was applied, with pixels having an absolute deviation of < 3 assumed to show no deviation from the background, and a simple despeckle filter was used to reduce noise. Finally, values were scaled by a simple factor between 1 and 2 according to the corresponding background value to compensate for inconsistent brightness (i.e. the difference between the facial profile and the matt black background). The resulting deviation values were visualised using a false colour palette and super-imposed onto the background image to produce an animation showing qualitative vapour intensity throughout the experiment. For each processed frame, a simple pixel count was performed to provide a semi-quantitative measure of vapour escape against time. Three of the semi-customised designs, 'Small'(II)/'Medium'(II)/'Large'(II), were tested along with the no-seal condition, a fully customised seal, and an incorrectly customised seal (i.e. customised to facial scan data other than that of the test profile).
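The frame-processing pipeline just described (the original is a bespoke MATLAB script) can be sketched in a few lines. This illustrative Python re-implementation covers the background subtraction, thresholding and pixel counting, and omits the despeckle filter and brightness rescaling for brevity; the frames argument is assumed to be a NumPy array of greyscale frames with shape (n_frames, height, width).

```python
import numpy as np

def vapour_pixel_counts(frames, threshold=3, start=5, step=25, window=10):
    """Background-subtracted vapour pixel count every `step` frames (0.1 s)."""
    background = frames[:start].mean(axis=0)   # average of the first 5 frames
    counts = []
    for i in range(start, len(frames) - window, step):
        mean_img = frames[i:i + window].mean(axis=0)   # 10-frame mean image
        deviation = np.abs(mean_img - background)
        mask = deviation >= threshold          # |deviation| < 3 -> no vapour
        counts.append(int(mask.sum()))         # semi-quantitative leak measure
    return counts
```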
Prototype fit testing A subset of clinician volunteers who had a record of failures in fit-testing of multiple commercially available soft-interface N99 half-mask respirators (both PureFlo 1000 (Gentex Corporation, USA) and 3M 6200 (3M, USA)) was scanned using DI4D and assigned prototype respirators according to our developed pipelines. Devices were qualitatively fit-tested and evaluated for a peripheral seal 3 . Use of facial images Informed consent was gained for publication of the images shown as illustrative scan data in Figs. 1 , 2 , 3 , and 4 and those used as exemplars of mask fit in Fig. 7 . Figure 4 Comparison of DI4D and Bellus3D scanning methods showing ( a ) a typical false colour direct comparison highlighting deviation between the two techniques for a subject and ( b ) the extracted measurements of eye-to-chin distance and 'σ' from the two different scanning methods for nine subjects. Results and discussion Scan method evaluation A comparison of the nine subjects scanned using both DI4D and Bellus3D methods is shown in Fig. 4 . Meshes for a typical subject generated using both methods are pictured in Fig. 4 a alongside a false colour image showing straight-line deviation between the DI4D and Bellus3D data. Accuracy of the Bellus3D method tends to decrease radially around the face compared to the 'gold standard' of the DI4D system; however, the central features used to extract the key parameters of eye-to-chin distance and nasal profile are seen to deviate by only approximately ± 2 mm between scanning methods. Figure 4 b shows the extracted facial parameters, eye-to-chin distances and 'σ', for the nine subjects using both scan methods. It shows no notable trend in deviation between the two methods and indicates only small variations in the extracted parameters. For eye-to-chin distance, the mean difference is 2.4 mm (2.25%) with a standard deviation of 2.9 mm. Likewise for 'σ', the mean difference is 0.22 (2.18%) with a standard deviation of 0.26. This generally good agreement resulted in the same semi-customised design being assigned to each of the nine subjects regardless of scan method. This comparison supports the concept that user scanning and fitting using the semi-customised approach could effectively be conducted in a decentralised way without the need for specialised equipment. Whilst the iPhone 11 does represent a higher-end consumer device, it is likely that one would be available within a wide range of settings where respirator fitting is required. Scanning using a familiar device would allow for it to be carried out effectively by non-specialists locally without the need for further training, travel, or expense. Sample set data analysis Figure 5 shows the extracted facial parameters for the sample cohort and the corresponding determination of the nine semi-customised designs as described in " Facial data ". The full distribution of eye-to-chin distances is shown in Fig. 5 a with the three hard-shell size categories, 'Small'/'Medium'/'Large', centred around the lower FWHM boundary, maximum, and upper FWHM boundary of the fitted normal distribution respectively. Small overlaps at the Small/Medium boundary and Medium/Large boundary are shown in yellow and cyan respectively. These regions allow for some flexibility during mask assignment, with borderline users potentially belonging in either category. The cumulative range across all hard-shell sizes includes 87.5% of subjects, with the remaining 12.5% located at either extreme.
Users in these outlying regions would be flagged as candidates for the fully-customised route. Figure 5 Graphs showing the extracted facial parameters from the sample cohort of 200 and subsequent determination of semi-customised designs. ( a ) Shows the distribution of the eye-to-chin distance of the full 200 subjects, with colour representing the different corresponding hard-shell sizes (S/M/L). ( b ) Shows the distribution of σ values for each hard-shell size and ( c ) shows the corresponding nasal profiles (I, II, III) for each. ( d ) Illustrates the nasal profiles established for each of the semi-customised designs. Similar normal distributions were fitted to ' σ ' values within the hard-shell size categories, as shown in Fig. 5 b. These distributions were categorised in a similar way to that of the eye-to-chin distance to give three nasal profile ranges, I/II/III, for each hard-shell, shown in Fig. 5 c. Finally, the corresponding mean ' A ' and σ values for each σ range were calculated, resulting in nine unique nasal profiles for the semi-customised designs, shown in Fig. 5 d. Range limits and parameters defining each of the nine designs are provided in Table 1 . Table 1 Ranges and mean values of eye-to-chin distance, 'σ', and A for each of the nine semi-customised designs. These distributions form the basis of the semi-customised sizing process. Parameters from a user are first used to assign an overall hard-shell size based on eye-to-chin distance, and then a corresponding fitted nasal profile based on the σ value. Users falling into one of the overlap regions are assigned a hard-shell size based on the closest σ match in either category. For example, an eye-to-chin distance of 100.5 and a σ of 11.2 would yield a Small/Medium mask size which is closer to Small. However, as σ is too large to fit within the Small group, a Medium (III) mask is selected instead; a sketch of this two-stage assignment follows below. Users falling outside the category limits at either stage are flagged for the fully-customised route. 3D scanning provides a rapid and increasingly available method of capturing the shape of a user's face at a useful resolution, made possible by growing sophistication in consumer devices. Unfortunately, the resulting datasets are often large and unwieldy, making their use in customisation labour intensive. By extracting facial parameters using the methods outlined here, each 3D facial scan can be rapidly evaluated using a few simple values. In this case, aspects of respirator design, namely the overall size and nasal profile, that have been reported to critically impact device effectiveness 11 , 12 are efficiently described in three numbers. Data analysis of the cohort demonstrates that a large proportion of users can be categorised into a relatively small number of designs, showing potential to improve fit and function whilst maintaining the viability of large-scale manufacture. Identifying outliers in this process recognises that the variety and diversity in users' faces will inevitably make any level of standardisation impossible for some, and directs potential manufacturers to target labour-intensive, fully-customised devices at those who truly need them.
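The two-stage assignment referenced above can be sketched as follows. The true range limits live in Table 1 (not reproduced here), so the numeric values below are hypothetical placeholders chosen only so that the worked example (eye-to-chin 100.5, σ 11.2, giving 'Medium (III)') behaves as described in the text; they are not the paper's design limits.

```python
SHELLS = {  # eye-to-chin ranges in mm (hypothetical, including overlaps)
    "Small": (95.0, 101.0), "Medium": (100.0, 110.0), "Large": (109.0, 116.0),
}
SIGMAS = {  # sigma ranges per shell for nasal profiles I/II/III (hypothetical)
    "Small": [(7.0, 8.5), (8.5, 10.0), (10.0, 11.0)],
    "Medium": [(8.0, 9.5), (9.5, 11.0), (11.0, 12.5)],
    "Large": [(9.0, 10.5), (10.5, 12.0), (12.0, 13.5)],
}

def assign_mask(eye_to_chin, sigma):
    """Return a semi-customised design, or None to flag the custom route."""
    shells = [s for s, (lo, hi) in SHELLS.items() if lo <= eye_to_chin <= hi]
    # In an overlap region, try the closer shell first (closest sigma match).
    shells.sort(key=lambda s: abs(eye_to_chin - sum(SHELLS[s]) / 2))
    for shell in shells:
        for profile, (lo, hi) in zip(("I", "II", "III"), SIGMAS[shell]):
            if lo <= sigma <= hi:
                return f"{shell} ({profile})"
    return None  # outlying parameters -> fully-customised route

print(assign_mask(100.5, 11.2))  # 'Medium (III)': sigma too large for Small
print(assign_mask(120.0, 11.2))  # None -> flagged for full customisation
```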
Flow visualisation Figure 6 shows typical false colour vapour visualisation frames for each of the mask designs tested. All visualisations show vapour within the mask observed through the sealed filter opening and translucent body. Facial parameter extraction categorised the test facial profile as 'Medium'(II). Both the 'Medium'(II) (Fig. 6 b) and the fully customised (Fig. 6 e) designs showed good seal performance with minimal vapour release. All other designs showed extensive and rapid vapour loss; the 'Small'(II) (Fig. 6 a) design appeared to have severe leaks around the nose, whereas the 'Large'(II) (Fig. 6 c) design appeared to show vapour escape along the sides adjacent to the cheeks. This dramatically highlights the need for a good fit, both in size and nasal profile, to ensure device effectiveness. Figure 6 Typical false colour vapour visualisations for ( a ) 'Small'(II), ( b ) 'Medium'(II), ( c ) 'Large'(II), ( d ) the no-seal condition, ( e ) a fully customised seal, and ( f ) an incorrectly customised seal. Alongside each is presented the corresponding vapour pixel count against time for the fully analysed video. Horizontal dashed lines indicate a pixel count corresponding to the filter area; vertical dashed lines (red arrows) indicate the moment of vapour injection. Adjacent plots in Fig. 6 show the vapour pixel count against time, indicating the severity of vapour loss. Vertical dashed lines (arrows) indicate the moment of the vapour injection and horizontal dashed lines show the approximate count relating to vapour observed through the translucent filter area. Both the 'Medium'(II) (Fig. 6 b) and fully customised (Fig. 6 e) designs stabilise at a count similar to the filter area following an initial transient phase, suggesting that the mask volume fills with vapour but allows very little to escape, thus indicating a good seal. Following the initial transient phase, a linear fit (least-squares) was applied to the count data to semi-quantitatively assess the severity of the leak. Large gradients indicate all vapour leaving the mask volume very quickly. Smaller gradients suggest a longer sustained release and a comparatively better seal. By this method, the designs rank best to worst in the order: Fully-Customised, 'Medium'(II) ( selected design ), 'Large'(II), Incorrectly-Customised-Seal, 'Small'(II), and No Seal. These results not only validate the semi-customised approach of approximating the user's nasal profile to a Gaussian curve, but also the selection of silicone as an appropriate seal material. Its compliant properties have previously been reported to improve respirator fit rates 9 and here it is shown to be effective at accommodating the small mismatch between the user's nasal profile and the Gaussian fit. Whilst providing a useful rapid evaluation method, this experimental setup has some limitations. Firstly, vapour injection is manual and, whilst the operator aims for consistency, the amount of vapour used in each experiment may vary. Secondly, the silicone-coated facial profile offers something closer to a human subject compared with a rigid mannequin; however, the uniform compliance does not fully represent variations in soft tissue on a person's face. Finally, refinement of the post-processing techniques is required to better quantify the severity and intensity of vapour release, allowing for greater analytical detail. Nevertheless, these results show that fully-customised and semi-customised designs produce a good facial seal when applied to an individual, thus supporting both approaches in providing well-fitting and effective devices.
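For completeness, the semi-quantitative ranking step, a least-squares line fitted to the post-transient pixel counts, is a short NumPy operation. This is a minimal sketch: the 1 s transient cut-off is an assumed illustrative value, and the sign of the gradient depends on how escaped vapour disperses in a given test, so the magnitude is what matters for ranking.

```python
import numpy as np

def leak_gradient(times, counts, t_transient=1.0):
    """Least-squares slope of the vapour pixel count after the transient."""
    t, c = np.asarray(times, float), np.asarray(counts, float)
    keep = t >= t_transient               # discard the initial filling phase
    slope, _intercept = np.polyfit(t[keep], c[keep], 1)
    return slope   # larger |slope| -> faster vapour loss -> poorer seal
```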
Prototype fit testing All volunteers passed the qualitative fit test using prototype customised devices, providing further confidence in the design and customisation methodology. Figure 7a shows photographs of a volunteer wearing a correctly fitted device, whilst Fig. 7b shows a device with correct eye-to-chin dimension but incorrect nasal profile, and Fig. 7c shows a device with an incorrect combination of eye-to-chin dimension and nasal profile leading to a significant mismatch. Regions of poor fit are indicated by arrows. Figure 7 Figure showing (a) best matching eye-to-chin and nasal profiles, (b) a nasal profile mismatch leading to lifting of the device away from the nose and loss of seal, and (c) a combination of eye-to-chin and nasal profile mismatches leading to obvious gapping from the cheeks. Informed consent by the participant was obtained to publish these images. Full size image The route to commercial product Although the viability of the customisation methods from 3D scan data has been proven using a prototype design, development of a commercial product that exploits these methods requires broader considerations, refinement, and potentially further development of the techniques. Extraction of facial parameters is currently limited to facial size and nasal profile; however, any device development would require examination of a larger sample dataset. This would further guide the design around the non-customised features, for example the cheeks, where soft tissue provides more compliance, and identify other key regions where these techniques could be applied. For example, a larger dataset may reveal that parametric extraction around the zygomatic arch or facial width is necessary to fit a greater proportion of users. Critically however, in exploring these novel methods, the balance between a bespoke product and the practicalities of mass manufacture has been considered at all stages. Any marketable device would likely need to be injection moulded to be viable from a production rate and unit cost point of view. The design variations for the semi-customised devices and the hard shell of the fully customised approach are currently limited to three for the shell and nine for the seal, with a single common front cap. Whilst this would result in higher tooling costs compared to a single design, it is still not seen as a prohibitive number of variations given the potential benefits to the user. Furthermore, the simple Gaussian curve used to tailor the nasal profile could be achieved using inserts into common seal tooling, allowing for cost savings in tooling or expansion of the range of profiles. The rapid and automated evaluation of the scan data quickly identifies, and limits, the labour intensive and expensive fully-customised approach to those users where it is necessary. Finally, the appropriate regulatory standard for half-face respirators, British Standard EN 149 28 , presents a unique barrier to market for customised respiratory protection. In its current form the standard does not address the certification of a customised device. Any successful commercialisation of these methods into an FFP3 rated device on the market would undoubtedly require, as a minimum, clarification from the regulatory body and potentially amendments to the standard. Conclusions A viable workflow for both semi-customised and fully customised respirator devices has been presented, driven by readily available 3D scanning technologies. This approach aims to address the current shortcomings in traditional mask fit failure rates and user comfort through customisation and material selection. Extraction of design relevant facial parameters from 3D scan data has been shown to provide a route of rapid evaluation and categorisation of user faces.
This semi-customised approach balances the need for customisation with that of large scale production, making such devices practically viable. From this study, nine design variants have been identified, encompassing 87.5% of participants based on facial size. This process further enables the viability of fully customised devices by targeting only users falling outside standard sizes, where it will provide the most benefit. Physical testing using a novel flow visualisation method has validated the approach and illustrated the difference between correctly and incorrectly customised masks for both design routes. This is further supported by standard qualitative fit testing of semi-customised prototypes on a subset of five volunteers. Compliance of the silicone interface material not only ensures a comfortable facial interface for the user, but also provides the necessary properties to mitigate the mismatch between the true and approximated nasal profile, proving that effectiveness is not only a matter of design but also of material selection. Customisation of respiratory protection has been proven feasible with existing technology, and further development is essential to improve the fit, function, and consequently the protection that can be offered to frontline workers during the current pandemic. Data availability 3D facial images will not be made available. However, extracted anthropometric data and other datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
The researchers further tested these designs on volunteer clinicians who had failed fit tests with commercially available half-mask respirators; each passed standard qualitative fit testing using the closest prototype available. Dr. Sophie Cox commented: "Doctors and nurses have consistently reported high rates of 'fit failure' with mass-produced masks, leaving them at higher risk of exposure to the virus. While the most desirable option is a fully customized 'made to measure' mask, this option would be prohibitively expensive. We have shown that the semi-customized approach, which is compatible with mass production, will provide a solution for nearly 90% of people." Both the University of Birmingham and King's College London filed patent applications based on the research. The technology is being commercialized by MyMaskFit. The company is working with technology, design and manufacturing partners to bring customized face masks to market. It has developed a face-scanning app for use on mobile devices that enables customers to scan facial dimensions, and securely transmit these measurements for rapid manufacturing. MyMaskFit will also pursue regulatory approval for custom-fitted, reusable, medical grade face masks. | 10.1038/s41598-021-00341-3 |
Nano | Study may lead to less expensive consumer lasers and LEDs | The paper can be downloaded at www.nature.com/srep/2012/12061 … /full/srep00461.html Journal information: Scientific Reports | http://www.nature.com/srep/2012/120615/srep00461/full/srep00461.html | https://phys.org/news/2012-07-expensive-consumer-lasers.html | Abstract We computationally study the effect of uniaxial strain in modulating the spontaneous emission of photons in silicon nanowires. Our main finding is that a one to two orders of magnitude change in spontaneous emission time occurs due to two distinct mechanisms: (A) change in wave function symmetry, where within the direct bandgap regime, strain changes the symmetry of wave functions, which in turn leads to a large change of optical dipole matrix element; (B) direct to indirect bandgap transition, which makes spontaneous photon emission a slow second order process mediated by phonons. This feature uniquely occurs in silicon nanowires, while in bulk silicon there is no change of optical properties under any reasonable amount of strain. These results promise new applications of silicon nanowires as optoelectronic devices, including a mechanism for lasing. Our results are verifiable using existing experimental techniques of applying strain to nanowires. Introduction Potential advantages of silicon nanowires (SiNWs), such as quantum confinement, large surface-to-volume ratio, adjustable bandgap, sensitivity of electronic properties to surface ligands and mechanical excitation, and compatibility with mainstream silicon technology, have resulted in a flurry of experimental and theoretical investigations of these nano-structures. Over the years SiNWs have been explored for use in transistors 1 , 2 , logic circuits 3 and memory 4 , quantum computing 5 , chemical 6 and biological 7 sensors, piezo-resistive sensors 8 , nano mechanical resonators 9 and thermoelectric converters 10 , 11 . We are also witnessing the utilization of SiNWs in optoelectronic applications, such as in solar cells 12 , 13 , photodetectors 14 , 15 and avalanche photodiodes 16 , 17 . The basic properties and applications of Si nanowires are discussed further in references 18 , 19 . Efficient light emission in SiNWs requires a direct bandgap and a symmetry-allowed optical transition between conduction and valence states. In other words, to have a nonzero optical transition matrix element, which is defined as M_cv = ⟨Ψ_c| r |Ψ_v⟩ = ∫ Ψ_c* r Ψ_v d³r (1), the integrand should have an even symmetry. Ψ_c , Ψ_v and r represent conduction band state (wave function), valence band state and position operator, respectively. In bulk silicon and large diameter SiNWs the conduction band minimum and valence band maximum have different values of momentum within the Brillouin Zone (BZ) of the crystal. Spontaneous emission of a photon, which arises from electron-hole recombination, is a momentum conserving process. As photons cannot provide the momentum difference in these materials, a phonon absorption/emission is necessary, making it a weaker second order process. Vital to realizing SiNW-based light emitting devices, narrow diameter [110] and [100] SiNWs are direct bandgap. This arises from folding of four degenerate indirect X conduction valleys of bulk silicon into the BZ center (Γ point) due to confinement in transverse directions 20 . In addition to direct bandgap, the possibility of adjusting the bandgap with mechanical strain provides a new degree of freedom for SiNWs.
Computational studies using both tight binding 21 , 22 , 23 and Density Functional Theory (DFT) 24 , 25 , 26 have shown that axial strain changes the bandgap of narrow SiNWs. Additionally it causes direct to indirect bandgap conversion. On the experimental side, recent research points to promising directions for applications involving strain in nano-structures. For example, strain that is generated by an acoustic wave can modulate the energy levels of an artificial atom (quantum dot) and initiate lasing inside a Fabry-Perot device 27 . Similarly, the energy levels of a phosphorous atom embedded in a SiGe super lattice can be modulated by acoustic waves travelling back and forth in the super lattice 28 . There is evidence of modulation of threshold voltage in a transistor by transverse as well as longitudinal strains applied to the SiNW-based channel 29 . Carrier mobility enhancement due to residual tensile strain from the oxide layer in Gate-All-Around (GAA) 5 nm thick SiNWs has been reported 2 . He and Yang 8 have reported a large diameter SiNW (d > 50 nm) bridge in which the piezo-resistivity is varied by deformation of the substrate. Deforming an elastomeric substrate can apply ±3% strain to buckled SiNWs grown on the substrate 30 . Strain induced by the cladding of SiNWs causes a blue shift in the UV Photoluminescence (PL) spectrum 31 . Similarly, the observed red shift in the PL spectrum of 2–9 nm thick SiNWs is also attributed to the radial strain induced by oxide cladding 32 . Motivated by the aforementioned experiments, the scope of our work is to address this question: Is it possible to modulate the spontaneous emission time of electrons in SiNWs using axial strain? We demonstrate for the first time that applying strain can change the spontaneous emission time by more than one order of magnitude. The underlying reasons are a change in the symmetry of electronic wave functions and a change in the nature of the bandgap (direct or indirect) with strain. Qualitatively we might expect that converting a direct bandgap to an indirect bandgap hinders light emission from a SiNW, because the emission of a photon is now a slow second order phonon-mediated process. We use Density Functional Theory (DFT) and tight binding methods to quantify the change of spontaneous emission time (see Methods section). Results The cross section and electronic structure of an unstrained 1.7 nm [110] SiNW are shown in Fig. 1(a,b) . All SiNWs in this work have direct bandgap at 0% strain ( Fig. 1b ), where the bandgap is observed to be inversely proportional to the SiNW diameter ( d ). For example, the bandgaps of 1.26 nm, 1.7 nm, 2.3 nm and 3.1 nm diameter SiNWs are 2.24 eV, 1.74 eV, 1.58 eV and 1.477 eV, respectively. The bandgap values are calculated using the tight binding framework after the atomic structure of the nanowires is relaxed by DFT. This effect of SiNW diameter on the bandgap agrees with experimental results using scanning tunneling spectroscopy (STS) 33 and the observed blue shift in the PL spectrum of small SiNWs 31 . It is also observed that by increasing the diameter of SiNWs the difference between direct and indirect conduction band minima (band offset or ΔE cmin in Fig. 1b ) decreases. This aspect is important for this work, since it implies a corresponding decrease in the value of compressive strain (threshold strain) required to change the bandgap from direct to indirect 21 , 22 .
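As a quick illustration of the quoted diameter dependence, the gaps above can be fitted to a simple confinement model. This is a minimal sketch, assuming a room-temperature bulk-silicon gap of 1.12 eV and a power-law confinement term; the functional form and the fitted exponent are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

d = np.array([1.26, 1.7, 2.3, 3.1])        # diameters, nm (values quoted above)
Eg = np.array([2.24, 1.74, 1.58, 1.477])   # tight-binding bandgaps, eV

def gap_model(d, C, alpha):
    # Assumed form: bulk gap plus a 1/d^alpha quantum-confinement correction.
    return 1.12 + C / d**alpha

(C, alpha), _ = curve_fit(gap_model, d, Eg, p0=(1.0, 1.0))
print(f"fit: Eg(d) ~ 1.12 + {C:.2f}/d^{alpha:.2f} eV")

# Corresponding free-space emission wavelengths, lambda = hc/Eg:
print(np.round(1239.84 / Eg), "nm")
```

The wavelengths fall in the visible-to-near-infrared range, consistent with the idea that emission colour can be tuned through the wire diameter.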
The values of the threshold strain for 1.7 nm, 2.3 nm and 3.1 nm [110] SiNWs are −5%, −4% and −3%, respectively. Figure 1 Cross section, electronic structure and momentum matrix element. (a: left) Cross section of a [110] SiNW with a diameter (d) of 1.7 nm in the xy plane. Diameter ( d ) is defined as the average of the large (D1) and small (D2) diameters. The right panel shows the side view of two unit cells along the length of the nanowire (z) with unit cell length U c . Dark and bright atoms represent Si and H atoms, respectively. (b) Electronic structure of the 1.7 nm diameter [110] SiNW, which shows direct bandgap, i.e. conduction/valence band minimum/maximum reside on the BZ center (K = 0). The band offset, ΔE cmin , is smaller for larger diameter nanowires. (c) Normalized momentum matrix element (in eV) between conduction and valence band along the BZ. Red, blue and green correspond to Z, Y and X polarizations of the emitted photon. Full size image Direct bandgap regime If the conduction and valence states in the center of the 1 st BZ are called initial ( i ) and final ( f ) states, respectively, the spontaneous emission rate or 1/τ spon can be written as

1/τ_spon = (n_r e² ω_if)/(π ε_o m² ℏ c³) · |⟨Ψ_i| ê·P |Ψ_f⟩|²   (2)

where m , e , ε_o , c and ℏ are the free mass of the electron, magnitude of the electronic charge, free space dielectric permittivity, speed of light in vacuum and reduced Planck constant, respectively. n_r is the refractive index of the SiNW, which is assumed to have the same value as bulk silicon ( n_r = 3.44) within the optical spectrum of interest. For all nanowires in this work, the bandgap lies in the 1–2 eV range and the maximum change of bandgap due to strain can be as large as 500 meV (in a strain window of ±5%). Therefore assuming a constant value for the refractive index causes up to a 14% change in our calculated results (which is a small effect as we are discussing more than an order of magnitude change in τ spon ). The quantity ω_if is the frequency of the emitted photon, which is defined as ω_if = ΔE_if/ℏ. Here ΔE_if is the energy difference between initial ( i ) and final ( f ) states at the BZ center, which is the bandgap E g . ⟨Ψ_i| ê·P |Ψ_f⟩ is the matrix element of the momentum operator P between initial (conduction) and final (valence) states. Ψ_i and Ψ_f are the wave functions of the initial (conduction) and final (valence) states. ê is the direction of polarization. The value of the momentum matrix element normalized to the electronic mass, |⟨Ψ_i| ê·P |Ψ_f⟩|²/m (in eV), between conduction (Ψ_i) and valence (Ψ_f) bands along the BZ is plotted for an unstrained 1.7 nm diameter [110] SiNW ( Fig. 1c ). Corresponding to each direction of photon polarization (x, y and z), there are 3 different values for |⟨Ψ_i| ê·P |Ψ_f⟩|², which in turn leads to three different values for τ spon according to equation (2) . As Fig. 1c suggests, the momentum matrix element corresponding to z-polarized photons is significantly larger than the corresponding amounts for x- and y-polarized cases at the BZ center. This difference manifests itself as a large spontaneous emission rate for z-polarized photons. This implies that the emitted light from the SiNW is mostly polarized along the length of the nanowire. In other words, if the average rate of spontaneous emission is defined as τ_avg⁻¹ = τ_x⁻¹ + τ_y⁻¹ + τ_z⁻¹, then the degree of anisotropy, τ_avg/τ_z, has a value close to unity. Our approach of neglecting dielectric mismatch is justified because local field effects do not cause a significant change in the dielectric function of the nanowire for z-polarized light 34 , 35 .
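Using the rate reconstructed above, the first-order lifetime can be evaluated numerically. This is a minimal sketch: the prefactor follows the reconstructed equation (2), and the matrix-element value passed in is an illustrative number chosen so that the result lands near the τ_avg ≈ 1.5×10⁻⁷ s reported below for the 1.7 nm wire; the actual value must be read off Fig. 1c.

```python
import numpy as np

# Physical constants (SI)
e, m, hbar, c, eps0 = 1.602e-19, 9.109e-31, 1.055e-34, 2.998e8, 8.854e-12
n_r = 3.44  # bulk-Si refractive index, as assumed in the text

def tau_spon(Eg_eV, P2_over_m_eV):
    """Spontaneous emission time from the first-order rate, equation (2).
    P2_over_m_eV is |<i|e.P|f>|^2 / m expressed in eV (cf. Fig. 1c)."""
    omega = Eg_eV * e / hbar          # photon angular frequency
    P2 = P2_over_m_eV * e * m         # |<i|e.P|f>|^2 back in SI units
    rate = n_r * e**2 * omega * P2 / (np.pi * eps0 * m**2 * hbar * c**3)
    return 1.0 / rate

# Illustrative matrix element for the 1.7 nm wire (Eg = 1.74 eV):
print(f"{tau_spon(1.74, 0.013):.2e} s")   # ~1.5e-7 s
```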
The squared momentum matrix element in equation (2) is inversely proportional to the area of the nanowire, or d², which can be explained using the particle in a box model 36 , 37 . The combination of this effect and the bandgap change with diameter results in the direct proportionality of spontaneous emission time to the cross sectional area. For 1.7 nm, 2.3 nm and 3.1 nm diameter [110] SiNWs, the value of τ avg is calculated to be 1.47×10⁻⁷ s, 2.3×10⁻⁷ s and 4×10⁻⁷ s, respectively. For the SiNWs investigated in this work, the bandgap remains direct for most of the strain values in the ±5% range. For 1.7 nm, 2.3 nm and 3.1 nm [110] SiNWs the bandgap becomes indirect once the compressive strain reaches −5%, −4% and −3%, respectively 21 , 22 . As long as the bandgap is direct, photon emission is a first order process and its rate is governed by equation (2) . The average spontaneous emission times for the [110] axially aligned SiNWs in this study are tabulated in Table 1 . As can be seen in Table 1 , compressive strain leads to an increase of spontaneous emission time by one to two orders of magnitude. This is due to the movement of sub bands in the compressive strain regime. As pictured in Fig. 2a , the rise of the second valence sub band (V2), due to its anti-bonding nature, is more dominant than the rate with which the first valence sub band (V1) rises or the conduction sub band (C2) falls. As a result, V2 determines the new highest valence band. The aforementioned mechanism can be further understood by looking at Fig. 2b , which shows the normalized probability density (|Ψ|²) of conduction and valence states at the BZ center. Comparing the valence/conduction bands (VB/CB) at 0% and −2% strain values shows that the dominant change is due to the valence band symmetry change induced by compressive strain (e.g. −2%). The left panel of Fig. 2b shows that the newly raised valence sub band (V2) has a different wave function symmetry, as opposed to the centro-symmetric nature of valence band V1 at 0% and +2% strains. Therefore the matrix element, ⟨Ψ_c| r |Ψ_v⟩, changes accordingly and modulates the spontaneous emission time (rate) through equation (2) . Comparing wave function symmetries of valence and conduction bands at strain values of 0% and +2%, as in Fig. 2b (center and right panels), further illustrates why the spontaneous emission time is almost unchanged within this tensile strain regime. Table 1 Average spontaneous emission time, τ avg (s), vs. diameter (nm) of [110] SiNWs. Although all nanowires at these strain values have direct bandgap, the change of τ avg with compressive strain is mainly due to valence sub band exchange. Full size table Figure 2 Wave function symmetry change with strain. (a) Effect of compressive strain on the second valence sub band (V2), which results in the change of wave function symmetry. From left to right, it can be seen that compressive strain raises the energy of V2 faster than it lowers the energy of C2. (b) Normalized squared value of the wave function (|Ψ|²) in the cross sectional plane of a 1.7 nm [110] SiNW. From left to right, strain values are −2%, 0% and +2%, respectively. Valence and conduction band states (VB and CB) are at the BZ center. As can be seen in the left panel (for −2% strain), the change of symmetry is more pronounced for the valence sub band. Full size image Direct to indirect bandgap conversion The second kind of strain induced change of spontaneous emission arises from a direct to an indirect bandgap conversion. Fig.
3a elucidates this mechanism, in which a compressive strain lowers the indirect conduction sub band C2, resulting in an indirect bandgap. Fig. 3 (b,c) shows the band structure of a 3.1 nm SiNW at 0% and −5% strains, respectively. As can be seen in Fig. 3c , the heavy (H) or high effective mass sub band (C2) has a lower energy than the light (L) or low effective mass direct conduction sub band (C1). The energy difference between the two conduction band minima, or band offset (ΔΩ) [ Fig. 3a (right panel)], matters in determining the order of the phonon-mediated spontaneous emission process. As mentioned before, ΔE cmin ( Fig. 1b ) is inversely proportional to the nanowire diameter. As a result, a large diameter SiNW has a larger value of ΔΩ. For 1.7 nm, 2.3 nm and 3.1 nm diameter [110] SiNWs, ΔΩ is 21 meV, 56 meV and 80 meV, respectively, at −5% strain. When ΔΩ is less than the Debye energy of LA phonons (E Debye = 54 meV), many secondary states in the direct conduction sub band are available to which an electron can scatter from the indirect sub band by absorbing an LA phonon. Alternatively, when ΔΩ is less than the maximum energy of LO phonons, E LO = 63 meV, a few secondary states in the direct conduction sub band can be found to which an electron can scatter from the indirect sub band by absorbing an LO phonon. This implies that if the secondary state within the direct conduction sub band C1 is not at the BZ center, the only possible 1 st order transition is due to LA phonon absorption. Otherwise both LA and LO phonon absorption processes will contribute in the 1 st order inter-sub band scattering event. The process of finding secondary states can be understood by recalling that LO and LA phonons are modelled as dispersion-less and as having a linear dispersion around the BZ center, respectively 38 . Figure 4 First and second order phonon mediated transition. (a) Top: Two serial 1 st order processes which model the light emission calculation in indirect bandgap nanowires with ΔΩ < E LO/LA . Bottom: A model based on a 2 nd order process is presented for calculation of light emission in indirect nanowires with ΔΩ > E LO/LA . (b) Inter-sub band (direct to indirect) electron-phonon scattering rate (s⁻¹) versus the energy starting from the bottom of the direct conduction band (E−E cmin ). As can be seen, scattering times for electron-LO phonon and electron-LA phonon scattering events are of the order of 1 ps and 100 ps, respectively. (c) Possible second order transitions from the indirect conduction sub band to the valence band maximum.
As discussed in the Results section, the A→D→C event is less probable than A→B→C. Initial, intermediate and final states are represented by i , m and f , respectively. Full size image Fig. 4b shows the inter-sub band electron-phonon scattering rates for electrons making transitions from the indirect sub band minimum at K z = 0.8π/U c to the direct sub band minimum at K z = 0, where U c is the nanowire unit cell length. One can observe that the scattering rate due to LO phonons is 2 orders of magnitude higher than the scattering due to LA phonons. Therefore when the nanowire bandgap is indirect with ΔΩ < E Debye = 54 meV or ΔΩ < E LO = 63 meV, τ spon is determined by the slower (optical transition) process. Recalling Fig. 4a (top panel) and the rates given in Fig. 4b , τ spon is calculated by equation (2) . This results in spontaneous emission times that are comparable with those of direct bandgap nanowires, i.e. in the 10⁻⁵–10⁻⁷ second range, depending on the value of the momentum matrix element |⟨Ψ_i| ê·P |Ψ_f⟩|².
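The branching between the two pathways sketched in Fig. 4a can be summarised as a small decision rule. The following Python sketch simply encodes the criteria stated above (band offsets in meV, thresholds taken from the text):

```python
def emission_process(delta_omega_meV, E_Debye=54.0, E_LO=63.0):
    """Classify the phonon-mediated emission pathway by the band offset (meV),
    following the criteria in the text."""
    channels = []
    if delta_omega_meV < E_Debye:
        channels.append("LA absorption")   # many secondary states available
    if delta_omega_meV < E_LO:
        channels.append("LO absorption")   # a few secondary states available
    if channels:
        return "two serial 1st-order steps via " + " and ".join(channels)
    return "2nd-order (virtual-state) process"

# Band offsets at -5% strain for d = 1.7, 2.3, 3.1 nm wires (from the text):
for d, dO in [(1.7, 21), (2.3, 56), (3.1, 80)]:
    print(f"{d} nm: {emission_process(dO)}")
```

For the 3.1 nm wire at −5% strain (ΔΩ = 80 meV) this correctly selects the second order pathway treated next.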
Second order transition For indirect bandgap SiNWs in which ΔΩ > E LO /E LA (e.g. the 3.1 nm diameter [110] SiNW at −5% strain), the spontaneous emission is possible through two second order processes that correspond to LA and LO phonons [ Fig. 4a (bottom panel)]. The rate of both processes is given by second order perturbation theory. Equation (3) shows the spontaneous emission lifetime of an electron in the indirect conduction sub band. The recombination of this electron with a hole residing at the BZ center is possible through virtual transitions to intermediate states Ψ_m in the first conduction band (via phonon emission and absorption):

1/τ_spon = (2π/ℏ) Σ_f F(E_f) Σ_(k′,σ) Σ_(q,λ) | Σ_m ⟨Ψ_f|H_e-R|Ψ_m⟩⟨Ψ_m|H_e-P|Ψ_i⟩ / (E_i − E_m ± ℏω_(q,λ)) |² δ(E_i − E_f ± ℏω_(q,λ) − ℏω_k′) δ_(k_i ± q_z , k_f)   (3)

Here, H_e-R and H_e-P are the electron-photon (radiation) and electron-phonon interaction Hamiltonians, respectively. The innermost summation is over all intermediate states. Indices i , m and f correspond to initial (indirect conduction band minimum), intermediate (within the 1 st conduction band) and final (within the last valence band) states. Ψ_i , Ψ_m and Ψ_f are Bloch states of initial ( i ), intermediate ( m ) and final ( f ) states, respectively. As shown in Fig. 4c , this transition is possible through the two processes A→B→C and A→D→C. The process A→D→C is not included in equation (3) since for this transition the denominator of the innermost terms, i.e. E_i − E_m , is large (>3 eV). Hence the contribution of A→D→C processes is negligible. The two outer summations are performed over phonon wave vector and branches ( q , λ) and photon wave vector and polarizations ( k′ , σ), respectively. The outermost summation is over all final states within the valence band, which are weighted by the Fermi-Dirac occupancy, written as F(E_f). Dirac and Krönecker delta functions impose energy and momentum conservation, respectively. After converting summations to integration and calculating the interaction Hamiltonian matrix elements 39 , 40 , an expression, equation (4), results for the spontaneous emission time mediated by LA phonons. In this expression, D, ρ and v s represent the electron deformation potential (D = 9.5 eV), mass density (ρ = 2329 Kg/m³) and velocity of sound in silicon (9.01×10⁵ cm/sec), respectively 39 , 40 . Here, ΔE_fm is the energy difference between the final and intermediate states. Ph_LA(k_f , k_i) contains integration over all possible transverse phonon wave vectors (q_t), Bose-Einstein occupancy factors and matrix elements of terms like e^(iq·r), i.e. ⟨Ψ_m|e^(iq·r)|Ψ_i⟩ (see Supplementary Information ). The electron wave vectors at initial (final) states are represented by k_i (k_f). Δk_f is the k_z-space grid element (resolution) along the BZ of the nanowire. This equation is evaluated numerically. The corresponding expression for the spontaneous emission time including LO phonons, equation (5), is obtained analogously, where a dispersion-less optical phonon energy of ℏω_LO = 63 meV has been assumed. Ph_LO(k_f) contains integration over all possible LO phonon wave vectors (q_t), Bose-Einstein occupancy factors and matrix elements of terms like e^(iq·r), i.e. ⟨Ψ_m|e^(iq·r)|Ψ_i⟩ (see Supplementary Information ). D op is the electron deformation potential for LO phonons (D op = 11×10⁸ eV/cm) 39 , 40 . E c and E v represent the energy of conduction and valence states at a given wave vector (k_i or k_f), respectively. Equations (4) and (5) can further be reduced to semi-analytic equations by making a few simplifying assumptions. First, the momentum matrix element between conduction and valence bands can be assumed to be constant around the BZ center ( Fig. 1c ). Even if this value varies around the BZ center, a corresponding drop of off-center values of the Fermi-Dirac factor, F(E_f), makes this approximation plausible. More simplifications are also possible if the energy of the phonon (e.g. ℏω_LO in equation (5) ) is neglected in comparison with E_c(k_i)−E_v(k_f) = E gindir and E_c(k_f)−E_c(k_i) = ΔΩ. As a result, equations (4) and (5) can be reduced to a semi-analytic equation as follows:

1/τ_spon = β_(LO/LA) R_cv² S_(LA/LO)   (6)

where the constant value, R_cv , is the position matrix element between conduction and valence state at the BZ center. E gdir and E gindir represent the direct and indirect bandgap values, respectively. β LO/LA is a proportionality factor that contains all relevant constants as given in equations (4) and (5). The term S LA/LO represents Ph_LA in equation (4) or Ph_LO of equation (5), multiplied by the simplified energy quotient [see equation (S20) of Supplementary Information ]. The results of the semi-analytic method are less than 30% off from the numerical method. At room temperature (T = 300 K) and E Fermi = E vmax , the average spontaneous emission time due to LA phonons is τ avg = 3.4×10³ s. Including LO phonons results in τ avg = 16 s, which reveals the strong role of LO phonons in the second order light emission process as well. It is noteworthy that increasing the Fermi level or decreasing the temperature will lead to fewer available empty states in the valence band; hence a larger spontaneous emission time than above is expected. Table 2 lists the spontaneous emission time for an indirect bandgap 3.1 nm [110] SiNW at −5% strain for both LA and LO mediated cases. The stronger LO mediated scattering governs the indirect light emission process according to Fig. 4a (bottom panel). Furthermore, the strong optical anisotropy along the z direction is evident in Table 2 . Table 2 Spontaneous emission time (s) in a 3.1 nm [110] SiNW (at −5% strain with indirect bandgap). * Role of LO phonons is more significant than LA phonons (100 times more) in determining the emission times. Full size table Discussions Based on the aforementioned results we anticipate that preparing an inverted population of electrons in the indirect sub band and the subsequent application of strain may cause efficient light emission due to the indirect to direct bandgap conversion.
To ensure that there are sufficient carriers in the indirect sub band under realistic conditions of multiple sequential electron-phonon scattering events and an applied electric field, we have calculated the relative occupancy of direct and indirect sub bands using Ensemble Monte Carlo (EMC) simulations 41 (for details refer to the Methods section). Fig. 5a includes the contributions from the first two sub bands, which are less than 4 meV apart (C1 and C2 are taken together; see the Inset of Fig. 5a ). As can be seen, the occupancy shows only a relatively small variation across the electric field range considered. The occupancy of the indirect sub bands decreases from approximately 95% at low electric fields to approximately 92% at 25 kV/cm. The factor of ∼10 difference between occupancies of indirect and direct sub bands ( Fig. 5a ) suggests that periodically straining a biased SiNW can induce population inversion and lasing corresponding to the indirect to direct bandgap conversion cycle. The observed increase in the occupancy of electrons in the direct sub band at higher electric fields is attributed to the transition of electrons away from the negative k z sub bands within the BZ as they respond to the electric field. This can be observed in Fig. 5b , which depicts the time evolution of the electron distribution function under different electric fields. The lack of a significant change in the distribution with electric field is primarily attributed to the high electron-phonon scattering rate due to LO phonon emission and is a feature consistent with other small diameter nanomaterials such as single-wall carbon nanotubes (CNT) 42 and unstrained SiNWs 43 . This can be understood by comparing Fig. 5b with Fig. 5c , which depicts the total electron-phonon (LA and LO) scattering rates for conduction sub bands C1 and C2. Under an applied electric field, electrons initially near the bottom of the sub bands gain crystal momentum until they reach the peaks associated with the onset of LO phonon emission. This inelastic-scattering event prevents electrons from gaining further momentum and induces the observed behaviour in Fig. 5b . To investigate the role of temperature in the aforementioned carrier population analysis we have also conducted simulations for the occupancy of the sub bands at 77 K. We observed that the occupancy of the direct part of the sub bands (near the BZ center) at 77 K is negligible compared to 300 K. LO phonon emission, both inter- and intra-sub band, is indeed very strong and is the dominant player. As soon as an electron gains 63 meV of energy, it emits a phonon and makes a transition, with a very high scattering rate, to either the same indirect sub band bottom or the opposite indirect sub band bottom. Because of this very high scattering rate, electrons do not gain enough energy to make a transition to the direct sub band bottom. Also, the LO absorption is very low to begin with, since it has dropped by 3 orders of magnitude. At 300 K, the direct to indirect transitions and vice versa are almost symmetric, since the emission (direct to indirect sub band) and absorption (indirect to direct sub band) have almost the same transition time (10⁻¹¹ s). On the other hand, at 77 K the absorption rates drop by 3 orders of magnitude (1/rate = 10⁻⁸ s) while the emission rates remain of the same order as in the 300 K case, i.e. 1/rate = 10⁻¹¹ s. This asymmetry means that carriers have more chance to escape from the direct sub band and reach the indirect one than the reverse.
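The emission/absorption asymmetry at 77 K follows directly from the LO-phonon occupation number: phonon absorption rates scale with the Bose-Einstein factor n, while emission rates scale with n + 1. A short check of the quoted three-orders-of-magnitude drop:

```python
import math

kB = 8.617e-5  # Boltzmann constant, eV/K
E_LO = 0.063   # dispersion-less LO phonon energy, eV (63 meV)

for T in (300.0, 77.0):
    n = 1.0 / math.expm1(E_LO / (kB * T))  # Bose-Einstein occupation
    # absorption rate ~ n, emission rate ~ n + 1
    print(f"T = {T:>5.0f} K: n_LO = {n:.2e}, emission/absorption = {(n + 1.0) / n:.0f}")
```

Cooling from 300 K to 77 K reduces n_LO from about 1×10⁻¹ to about 8×10⁻⁵, a factor of roughly 10³, while n + 1 is essentially unchanged, which is exactly the asymmetry described above.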
Figure 5 Monte Carlo evolution of sub band occupancy and electron-phonon scattering rates. (a) Occupancy of indirect and direct sub bands vs. electric field for a 3.1 nm [110] SiNW at −5% strain (indirect bandgap). Inset shows the positive half of the BZ (i.e. k spans [0,π]) with the two conduction sub bands used in the EMC simulation. (b) Time evolution of the electron distribution function under different electric fields. (c) The total LA and LO electron-phonon scattering rates. In the legend, C1 and C2 show the initial sub band from which the electron is scattered by emission/absorption of LA and LO phonons. Full size image Another non-radiative effect which matters in studying the carrier population is Auger recombination, which can deplete the indirect sub bands. However, a photoluminescence study of SiNWs with a diameter of 3.3 nm ± 1.6 nm, as a function of temperature and optical pump intensity, has shown 44 that the fastest Auger recombination time can be 19 μs. Since this time is much longer than the electron-phonon scattering times that we deal with, we have neglected the Auger recombination effect in our work. Finally we speculate on how the idea of strain-modulated spontaneous emission time can be developed for SiNWs. There are many examples of recently implemented platforms and methods to apply strain to a single CNT. A piezoelectrically driven table 45 , a pressure driven deformable substrate 46 and applying force by an atomic force microscope probe 47 are among the methods of generating mechanical strain in CNTs. Additionally there are examples of using a deformable substrate to modulate the energy band diagram of piezoelectric zinc oxide nanowires via strain 48 . Embedding SiNWs on plastic 6 , elastomeric 30 and metallic deformable substrates 8 all show that our results are verifiable using existing methods. It is also advantageous that the direct to indirect conversion and the resulting modulation of spontaneous emission time is a reversible process. However, the strain value which results from embedding SiNWs in a silicon dioxide or nitride shell is unchangeable. The observed population difference in Fig. 5a leads us to propose an experiment to observe population inversion and lasing in SiNWs ( Fig. 6 ). If we assume that an indirect bandgap SiNW (under compressive strain) is biased under a moderate electric field, there will be a larger population of electrons in the indirect sub band ( Fig. 6a ). After releasing the strain (or applying tensile strain) to make the nanowire direct bandgap ( Fig. 6b ), the population, which is still significant, can scatter to the direct sub band (within picosecond time scales) and stimulated emission takes place, provided that suitable feedback and gain mechanisms have already been designed for such a SiNW. It is noteworthy that although strain may cause changes in the optical properties of silicon quantum dots, electrical injection and reversible application of strain are practical issues which render the usability of silicon quantum dots difficult. Figure 6 A proposed experiment to implement a population inversion in SiNWs. (a) Injecting current in a compressively strained nanowire (top) which has an indirect bandgap (bottom) can generate an initial population in the SiNW. Although non-radiative processes may deplete this sub band of most of those injected carriers, there will still be a factor of 10 difference in the electron occupancy between the indirect and direct sub bands.
(b) During strain release or when applying tensile strain (top), the initial population can scatter into the direct sub band via electron-phonon scattering processes within picosecond time scales. Thereafter (bottom) the inverted population can initiate lasing (coherent stimulated emission) if the nanowire is already embedded in a suitable mode enhancing cavity. In the case of incoherent (spontaneous) emission the light has a broader spectrum suitable for light emitting diode (LED) applications. A similar set up can be devised for light absorption (i.e. photocurrent) modulation using strain. Full size image In summary, we found that the spontaneous emission time can be modulated by one to two orders of magnitude due to strain via two distinct physical mechanisms, a feature that is not observable in bulk silicon crystal. Based on this, we propose an experiment to observe population inversion and lasing in SiNWs. To more accurately take into account the excitation of carriers from the indirect to direct conduction bands via multiple phonon scattering, we have simulated the nanowire via Ensemble Monte Carlo simulations at room temperature. We found that non-radiative scattering events deplete the initial population of carriers from the indirect sub band. However, under moderate electric fields there is still a factor of 10 difference between indirect and direct sub band occupancies. This is of significance when we consider a nanowire device in which lasing and population inversion can be inhibited or initiated by periodic application of axial strain. A similar scheme can also be proposed to modulate the photo-absorption of narrow SiNWs, either due to wave function symmetry change or direct to indirect band gap conversion. Change of wave function symmetry can potentially induce large nonlinear optical effects in nanowires in response to strain 49 , 50 , which merits a deeper study. While the calculations are performed for silicon nanowires, similar phenomena should exist in nanowires made out of other material systems. Methods The nanowires in this work are cut from bulk silicon crystal in the [110] direction and dangling silicon bonds on the surface are terminated with hydrogen atoms ( Fig. 1a ). The diameters of the nanowires range from 1.7 nm to 3.1 nm. The cross section of the nanowires lies in the x-y plane and z is the axial direction of each nanowire. The relative stability of the [110] direction compared to [100], which is quantified as free energy of formation 51 , is experimentally verified 33 , 52 . Energy minimization of nanowires is performed by the Density Functional Theory (DFT) method within SIESTA 53 (version 3.1) using the Local Density Approximation (LDA) functional with Perdew-Wang (PW91) exchange correlation potential 54 . Spin polarized Kohn-Sham orbitals of double-zeta type with polarization (DZP) have been used. The Brillouin Zone (BZ) has been sampled by a set of 1×1×40 k points along the axis of the nanowire (z axis). The minimum center to center distance of SiNWs is assumed to be at least 2 nm to avoid any interaction between nanowires. Energy cut-off, split norm, maximum force tolerance and maximum stress tolerance are 680 eV (50 Ry), 0.15, 0.01 eV/Å and 1 GPa, respectively. The relaxation stops if the maximum absolute value of inter-atomic forces is smaller than the force tolerance and the maximum stress component is smaller than the stress tolerance.
The energy of the unstrained unit cell of the nanowire is minimized using the Conjugate Gradient (CG) algorithm, during which the variable unit cell option is selected. Afterwards, for each percent of strain (ε) the unit cell is relaxed at constant volume using the fixed unit cell option. With this option atoms are only free to move within the fixed unit cell volume. The result of each minimization step is fed to the next step of minimization. The unit cell length (U c ), as defined in Fig. 1a (right), is updated according to the applied strain value ε, i.e. U c-new = U c-old (1+ε). The electronic structure is calculated within a 10 orbital semi-empirical sp³d⁵s* tight binding scheme 55 . This is to avoid bandgap underestimation due to DFT and diameter sensitive many-body GW corrections. The tight binding method has been successful in regenerating the bulk band structure as well as correctly simulating the boundary conditions, i.e. surface passivation 56 . The trend of the tight binding bandgap change with diameter in the case of Si nanowires closely matches the STS experiments 33 . The orbitals of silicon atoms (i.e. s, p, d and s*) are assumed to be of Slater 57 type, in which the radial part of each orbital is given by

R(r) = N r^(n*−1) e^(−(Z−s)r/n*)

(with r expressed in units of the Bohr radius), where N is the normalization factor and Z is the atomic number. The shielding constant (s) and effective quantum number (n*) are found using the rules given by Slater 57 . To calculate the spontaneous emission life times in direct bandgap nanowires [ equation (2) in the main section], Fermi's golden rule with first order perturbation theory 58 is used. The matrix element of the electron-photon interaction Hamiltonian (H e-R ) can be simplified to the momentum matrix element, ⟨Ψ_i| ê·P |Ψ_f⟩. This is further reduced to its position representation and integrals of the type ⟨α(r−R_o)| r |β(r−R_o)⟩. Here r is the position operator, and α and β are the atomic orbitals of which Ψ_i and Ψ_f are composed. These integrals have two parts, i.e. R_o δ_αβ + ⟨α(u)| u |β(u)⟩, where R_o is the position of the atom. The second part consists of radial and angular integration of Slater type orbitals, which are both found analytically using the Wolfram Mathematica® online integrator. Among the 100 combinations of the 10 orbitals, 15 have a symmetry-allowed nonzero value. The first order electron-phonon scattering rates are also calculated using Fermi's golden rule within the first order perturbation scheme 58 . The electron-phonon interaction Hamiltonian (H e-P ) is of deformation potential type for bulk LA and LO phonons. As stated by M. Nawaz et al 59 , taking confined LO phonons into account will reduce the scattering rate. Thus phonon confinement does not have an adverse effect on the spontaneous emission times calculated. Also it is shown that there is only a 10% difference in calculated mobility between the cases where bulk and confined phonons are used 60 . Details of calculating the electron-phonon interaction Hamiltonian matrix elements and scattering rates have been skipped and the interested reader can refer to 39 , 40 . For indirect bandgap nanowires with larger energy offset (i.e. ΔΩ > E LO /E LA ), the second order perturbation method is used, in which all interaction Hamiltonian matrix elements are calculated likewise. The expressions for the spontaneous emission time are explained in the main section.
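Returning to the strain protocol at the start of this section, the unit-cell update U c-new = U c-old (1+ε) amounts to an affine rescaling of the axial coordinates before each fixed-cell relaxation. The following is a minimal sketch: the geometry array is a placeholder, and in the actual workflow each relaxed structure seeds the next strain step.

```python
import numpy as np

def apply_axial_strain(positions, Uc, eps):
    """Rescale the unit-cell length and atomic z-coordinates for strain eps.
    positions: (N, 3) Cartesian coordinates with z along the wire axis."""
    strained = positions.copy()
    strained[:, 2] *= (1.0 + eps)      # affine stretch/compression along z
    return strained, Uc * (1.0 + eps)  # U_c-new = U_c-old * (1 + eps)

positions = np.random.rand(64, 3) * 10.0  # placeholder relaxed geometry, angstrom
Uc = 3.84                                 # [110] Si repeat length, angstrom

for eps in np.arange(-0.05, 0.0501, 0.01):  # -5% to +5% in 1% steps
    start_geom, Uc_eps = apply_axial_strain(positions, Uc, eps)
    # ... feed start_geom / Uc_eps to the DFT code for constant-volume relaxation ...
```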
To further investigate the carrier population statistics of indirect sub bands under the influence of electric field and multi electron-phonon scattering events, we use standard Ensemble Monte Carlo (EMC) methodology 41 . In setting up our EMC simulation, we consider an infinitely long, defect free SiNW with a uniform temperature. The electric field is also uniform and directed along the axis of the SiNW. In performing the simulation, tabulated values of two lowest conduction sub bands of 3.1 nm diameter [110] SiNW with indirect bandgap (due to −5% strain) (C1 and C2 in the Inset of Fig. 5a ) are used. For each initial state starting from indirect conduction sub band minimum, all possible final states within C1 and C2 with corresponding scattering rates for both phonon types (LA/LO) were utilized. Both inter- and intra-sub band electron-phonon scattering events have been calculated. The rest of the conduction sub bands are not included in the simulation since the third conduction sub band is at least 100 meV above the first two sub bands (C1 and C2 in Fig. 5a ). Electron transport is confined to the first BZ, which is divided into 8001 k grid points (4000 positive k and 4000 negative k values) and for which the tabulated energy values and electron-phonon scattering rates are computed and stored. Electrons are initially injected into the SiNW at the bottom of an indirect sub band and the simulation is initially executed for 500,000 scattering cycles at 0 kV/cm electric field so as to allow the electrons to approach as close to an equilibrium distribution as possible. | (Phys.org) -- A recent theoretical study by a University of Waterloo doctoral candidate in nanotechnology engineering suggests that manufacturers may one day make lasers and LEDs out of silicon, with the potential to significantly lower their price. Daryoush Shiri says that making lasers and light emitting diodes (LED) from bulk silicon, one of the most abundant minerals on earth, has been a longtime goal of the photonic engineering community. But while the cost of silicon is low, it suffers from an inherent electronic property called indirect bandgap that hinders the light emission from this material. As a result, lasers are currently manufactured using other, more expensive materials. "Extensive numerical calculations that involved first-principle quantum mechanical and other studies proved that silicon nanowires show dramatic changes of light emission properties when we apply mechanical strain," said Shiri. His findings were recently published in Scientific Reports, a journal of the Nature Publishing Group. Shiri¹s research shows that by stretching or compressing silicon nanowire, you can induce significant change in its light emission capabilities. Passing an electrical current through the device in its strained state leads to population inversion, which means that electrons are ready for light emission. Once the strain is removed, the electrons release their energy as packets of light, the central principle of lasing. Shiri points out that this mechanism for lasing is unique to silicon nanowires and it is not achievable in bulk silicon. The colour of light emitted can be controlled both by strain and using nanowires with different diameters. Collaborating with a lab with the capability of fabricating the silicon nanowires is the next step. Shiri would like to see the tiny whisker-like silicon structures reduced from the typical 10 nanometers to three to five nanometers. 
By reducing the diameter of the nanowire, electrons have a better chance of giving off their energy as light. He expects it to take between two and five years of experimentation until nanowires are fabricated to the ideal scales. Consumers see lasers in everything from supermarket barcode scanners and laser printers to CD or DVD players. LEDs are used in car headlights, lamps, and other consumer products. Shiri collaborated on his research with his doctoral thesis supervisors C.R. Selvakumar, an electrical and computer engineering professor at Waterloo; Anant Anantram, a former engineering professor at Waterloo who is now at the University of Washington-Seattle; and Amit Verma, a professor at Texas A&M University-Kingsville. | www.nature.com/srep/2012/12061 … /full/srep00461.html
Biology | Bacterial protein impairs important cellular processes | Alessia Landi et al. Pseudomonas aeruginosa lectin LecB impairs keratinocyte fitness by abrogating growth factor signalling. Life Science Alliance (2019) DOI: 10.26508/lsa.201900422 | http://dx.doi.org/10.26508/lsa.201900422 | https://phys.org/news/2019-11-bacterial-protein-impairs-important-cellular.html | Abstract Lectins are glycan-binding proteins with no catalytic activity and ubiquitously expressed in nature. Numerous bacteria use lectins to efficiently bind to epithelia, thus facilitating tissue colonisation. Wounded skin is one of the preferred niches for Pseudomonas aeruginosa , which has developed diverse strategies to impair tissue repair processes and promote infection. Here, we analyse the effect of the P. aeruginosa fucose-binding lectin LecB on human keratinocytes and demonstrate that it triggers events in the host, upon binding to fucosylated residues on cell membrane receptors, which extend beyond its role as an adhesion molecule. We found that LecB associates with insulin-like growth factor-1 receptor and dampens its signalling, leading to the arrest of the cell cycle. In addition, we describe a novel LecB-triggered mechanism to down-regulate host cell receptors by showing that LecB leads to insulin-like growth factor-1 receptor internalisation and subsequent missorting towards intracellular endosomal compartments, without receptor activation. Overall, these data highlight that LecB is a multitask virulence factor that, through subversion of several host pathways, has a profound impact on keratinocyte proliferation and survival. Introduction Bacteria can use many different strategies to infect host cells. In all cases, the initiation of an infection requires the recognition of specific structures at the host cell plasma membrane. This is often achieved by lectins, which bind to glycosylated residues on proteins and/or lipids present on the cell surface, mediating the attachment of the bacterium to the cell. Multivalency is an important feature of most lectins. On one hand, multivalency increases the binding affinity and specificity of the lectin–glycan interaction ( Dam & Brewer, 2010 ). On the other hand, the binding of lectins to multiple cell surface receptors can induce receptor clustering and plasma membrane rearrangements, triggering their entry into the host ( Römer et al, 2007 ; Windschiegl et al, 2009 ; Pezeshkian et al, 2017 ). Pseudomonas aeruginosa is a Gram-negative bacterium, ubiquitously spread in nature. It is an opportunistic pathogen that can cause severe infections, especially in immunocompromised individuals, because of its resistance to most of the available antibiotics and its ability to form impenetrable biofilms. Hence, it has been classified in the “priority 1/critical” category of the World Health Organisation global priority pathogens list (global PPL) of antibiotic-resistant bacteria to promote the research and development of new antibiotic treatments ( World Health Organization, 2017 ). It is frequently implicated in hospital-acquired infections, where it has been reported to cause many different types of infection. Wounded skin, after traumatic injuries, surgery or burns, is one of the tissues preferentially targeted by this bacterium, which has also been associated with the delay and prevention of wound healing. The presence of P.
aeruginosa correlates, in fact, with a bad prognosis of healing and leads to the persistence of the inflammatory stage of the wound healing process ( Gjødsbøl et al, 2006 ; Bjarnsholt et al, 2007 ). P. aeruginosa possesses two tetravalent lectins in its arsenal of virulence factors, LecA and LecB (also called PA-IL and PA-IIL, respectively). LecB is a tetramer, consisting of four monomers with high specificity for L-fucose and its derivatives ( Garber et al, 1987 ; Gilboa-Garber et al, 2000 ). LecB production is regulated by rhl and Pseudomonas quinolone signal, which are part of the quorum-sensing systems ( Winzer et al, 2000 ; Diggle et al, 2003 ). Once synthesised, LecB is exposed on the outer bacterial membrane, where it has been described to interact with the outer membrane porin OprF ( Tielker et al, 2005 ; Funken et al, 2012 ). The current assumption is that LecB mainly functions by promoting the adhesion of P. aeruginosa to both the host cell and the exopolysaccharide matrix, which encases bacterial cells together. However, several in vitro and in vivo studies have shown LecB to act not only as an adhesin but also as an important virulence factor, capable of triggering additional host cell responses ( Schneider et al, 2015 ; Wilhelm et al, 2019 ). LecB has been reported to be a determinant of P. aeruginosa cytotoxicity in lung epithelial cells and to block ciliary beating in human airways ( Adam et al, 1997 ; Chemani et al, 2009 ). LecB-negative mutant bacteria exhibit impaired biofilm formation in comparison with wild-type strains and no type IV pili assembly ( Tielker et al, 2005 ; Sonawane et al, 2006 ). Furthermore, LecB induces alveolar capillary barrier injury in vivo, leading to a higher bacterial dissemination into the bloodstream ( Chemani et al, 2009 ). Previous studies have reported additional effects of LecB in inhibiting cell migration and proliferation ( Cott et al, 2016 ). However, its precise mechanism of action has not yet been elucidated and none of the existing studies have addressed its role in skin infections. Here, we report that the P. aeruginosa lectin LecB is present in chronically infected human wounds, implying its contribution to the persistence of wound infections. Moreover, we show that insulin-like growth factor-1 receptor (IGF-1R) coprecipitates with LecB and that LecB leads to IGF-1R internalisation and missorting towards intracellular LC3-positive compartments. Notably, IGF-1R is internalised without being activated. We further demonstrate that LecB blocks the cell cycle and induces cell death, which is preceded by a strong vacuolisation. These vacuoles, which possess peculiar morphological features, originate from ruffle-like structures at subdomains of the plasma membrane where LecB is enriched. Therefore, we propose that LecB, in addition to playing a role as an adhesion factor, both misregulates growth factor receptor signalling and subverts the endocytic system, leading to an impairment of vital keratinocyte functions. Results LecB is present in chronically infected human wounds Although LecB is enriched in biofilms, which often characterise chronic wounds, no evidence had yet been provided showing its presence in wounds. Before we addressed the effects of LecB on human keratinocytes at a molecular level, we first verified the presence of LecB in infected human wounds. To this aim, we collected chronically wounded tissue from patients infected with P. aeruginosa , as shown by wound swabs.
We stained the paraffin-embedded tissue sections with an antibody against P. aeruginosa to confirm its presence in the wounds. As a control, we used normal skin samples. Indeed, we could detect the presence of P. aeruginosa , either as biofilm or in the form of small colonies ( Fig 1A ). We subsequently stained specifically for LecB and, strikingly, we found that this lectin was distributed throughout the wound sections, both in the keratinocyte layers and in the dermis ( Fig 1B ). In contrast, no LecB was detected in the normal skin control samples. This result provides the first evidence of LecB in chronic human wounds, with superinfection possibly playing a role in the wound chronicity. Figure 1. LecB localisation in chronically infected human wounds. (A, B) Tissue sections of human infected wounds embedded in paraffin and stained for (A) P. aeruginosa (green) and for (B) LecB (green). Normal skin is used as negative control. Note: green signal in the upper left panel (A) is due to unspecific staining in the stratum corneum , not present in wounds. Rectangular boxes refer to the zoomed area. Arrows point at LecB localised in the epidermal layers; arrowheads indicate LecB distributed in the dermis. LecB coprecipitates with essential plasma membrane receptors in normal human keratinocytes (NHKs) Next, we moved to NHKs, the predominant cell type in the epidermis, to study the mechanism of action of LecB in molecular detail. LecB plays a crucial role in the adhesion of P. aeruginosa to host cells ( Chemani et al, 2009 ), implying the necessity for the lectin to target plasma membrane receptors via their glycosylated residues. Therefore, we screened for potential LecB-interacting proteins via a pull-down assay. Briefly, we cultured NHKs in the presence or absence of 5 μg/ml (106 nM) of biotinylated LecB, lysed them under conditions that preserve protein–protein interactions and incubated the lysates with streptavidin agarose beads to precipitate LecB in complex with interacting proteins. Subsequently, we performed on-bead digestion and analysed the obtained peptides by protein mass spectrometry. This analysis revealed the presence of important cell growth factor receptors within the enriched proteins in the LecB pull-down fractions (Table S1). The protein mass spectrometry results were verified by Western blot and confirmed the presence of two of the major keratinocyte growth factor receptors implicated in epidermal keratinocyte proliferation and migration, IGF-1R and the epidermal growth factor receptor 1 (EGFR) ( Haase, 2003 ; Sadagurski et al, 2005 ) ( Figs 2A , and S1A and B ). In addition to stimulating the cells with biotinylated LecB, an excess of L-fucose (30 mM) was added to clarify whether the interaction of LecB with IGF-1R or EGFR was carbohydrate dependent. Because L-fucose saturates the carbohydrate-binding pockets of LecB, no binding to cell membranes could be detected when LecB and L-fucose were applied to the cells simultaneously: neither IGF-1R nor EGFR could be coprecipitated ( Fig S1A and B ). We further investigated the effect of LecB on IGF-1R in detail because of its higher fold enrichment. We hypothesised that LecB interaction with IGF-1R triggers receptor internalisation. Indeed, surface staining experiments showed that IGF-1R is depleted from the plasma membrane of keratinocytes in a time-dependent manner as a consequence of LecB incubation ( Fig 2B and C ).
Concurrently, we observed an intracellular redistribution of the receptor, which accumulates in regions positive for the lysosomal marker protein LAMP1 ( Fig S1C ). This observation correlated with the strong increase in lysosomes upon treatment ( Fig S1D ) and suggests that the receptor is targeted for degradation upon LecB exposure. However, lysosomal degradation inhibition with bafilomycin A1 only partially restored IGF-1R levels ( Fig S1E and F ), indicating that LecB can also act through other degradative pathways. Table S1 List of coprecipitated growth factor receptors identified by mass spectrometry analysis. Source data are available for this table. Figure 2. LecB depletes IGF-1R from the plasma membrane without inducing its activation. (A) Western blot of eluted samples from pull-down assay. NHKs were stimulated in the presence of 5 μg/ml biotinylated LecB (106 nM). The lysates were incubated with streptavidin beads and further eluted. Western blot was performed and IGF-1R and LecB were detected in the precipitated fractions. (B, C) Surface staining of keratinocytes treated with LecB (5 μg/ml) for the indicated time points. (B) Images show maximum intensity projections. After stimulation, the cells were stained for IGF-1R (red) and DAPI (blue). Scale bar: 10 μm. (C) Quantification of IGF-1R surface intensity from n = 4 independent experiments. Bars display the mean value ± SEM. ** denotes P < 0.01; **** denotes P < 0.0001; one-way ANOVA was used for statistical analysis. (D) Representative blots from lysates after LecB (5 μg/ml) and IGF-1 (100 ng/ml) stimulation for the indicated times. Antibodies against different phosphorylation sites of IGF-1R (Tyr1131 and Tyr1135/1136) and against total IGF-1R were used. Tubulin was used as loading control. (E, F, G) Blot quantification. Phosphorylated IGF-1R (E, F) and total IGF-1R levels (G) are represented as fold change compared with the loading control from n = 3 independent experiments. * denotes P < 0.05; *** denotes P < 0.001; two-way ANOVA was used for statistical analysis. Figure S1. LecB leads to IGF-1R degradation. (A, B) Western blot of eluted samples from pull-down assay using biotinylated LecB (5 μg/ml) with or without 30 mM L-fucose. Membranes were probed for IGF-1R, EGFR, LecB, and GAPDH. M indicates marker lanes. (C) Whole-cell staining of IGF-1R (red) and LAMP1 (green) upon LecB treatment from 30 min to 6 h. Arrows point at IGF-1R accumulations, whereas arrowheads indicate colocalisation between LAMP1 and IGF-1R. Scale bar: 10 μm. (D) Quantification of LAMP1 intensity. Bars show the mean value ± SEM of N = 3 independent experiments. **** denotes P < 0.0001; one-way ANOVA was used for statistical analysis. (E) Western blot of IGF-1R after 3 h treatment with LecB with or without 100 nM bafilomycin A1. (F) Quantification of the fold change of the receptor levels normalised to actin. Bars show the mean value ± SEM of N = 8 independent experiments. * denotes P < 0.05, ** denotes P < 0.01, ns denotes not significant; one-way ANOVA was used for statistical analysis. IGF-1R internalisation from the plasma membrane was also detectable after stimulation with 100 ng/ml of IGF-1 ( Fig S2A and B ).
However, whereas IGF-1 induced a fast autophosphorylation of the receptor on multiple tyrosine residues (Tyr1135, Tyr1131, and Tyr1136), LecB did not ( Fig 2D–G ). Thus, we conclude that LecB does not lead to the activation of IGF-1R but rather triggers a “silent” receptor internalisation. Figure S2. IGF-1R surface expression upon IGF-1 stimulation. (A) Surface staining of IGF-1R (red) upon stimulation of keratinocytes with 100 ng/ml IGF-1 for the indicated times. Maximum intensity projections are depicted. Nuclei are shown by DAPI (blue). (B) Quantification of IGF-1R surface intensity. Bars show the mean value ± SEM of N = 3 independent experiments. **** denotes P < 0.0001; one-way ANOVA was used for statistical analysis. LecB impairs cell survival signalling pathways and leads to cell cycle arrest Growth factor receptors activate two major downstream signalling pathways, the mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK) and the phosphatidylinositol 3-kinase-protein kinase B (PI3K-AKT) cascade, which are responsible for inhibition of apoptosis and stimulation of both protein synthesis and cell proliferation ( Lemmon & Schlessinger, 2010 ; Mendoza et al, 2011 ). Whereas the former ultimately results in the phosphorylation and activation of extracellular signal-regulated kinase 1/2 (ERK1/2), the latter activates the mammalian target of rapamycin (mTOR). To determine whether these two signalling cascades were affected by LecB, we monitored ERK1/2 and mTOR phosphorylation at Thr202/Tyr204 and Ser2448, respectively. The amounts of phosphorylated proteins were normalised to the respective pan-proteins. Indeed, LecB did not lead to the phosphorylation of either ERK1/2 or mTOR ( Fig 3A–D ). ERK1/2 phosphorylation actually decreased significantly 1 h post-stimulation ( Fig 3A and C ). In contrast, LecB activated 5′ adenosine monophosphate–activated protein kinase (AMPK), a sensor of cellular energy and nutrient status that responds to cellular energy depletion ( Fig 3A and E ). AMPK phosphorylation significantly increased 3 h post-stimulation and was inhibited by addition of L-fucose, confirming that this posttranslational modification is LecB dependent ( Fig S3A and B ). Figure 3. LecB impairs cell survival signalling pathways and leads to cell cycle arrest. (A, B, C, D, E) Representative blots and relative quantifications of N = 3 independent experiments. NHKs were treated with 5 μg/ml LecB for the indicated time and lysates were subjected to SDS–PAGE and Western blot analysis using the designated anti-phospho (A) and anti-pan antibodies (B). GAPDH was added as loading control. Graphs (C, D, E) depict the fold change of the phosphorylated protein compared with pan levels and represent the mean value ± SEM. ** denotes P < 0.01, **** denotes P < 0.0001, ns denotes not significant; multiple t tests were used for statistical analysis. (F) Western blot showing cyclin D1 levels after 12 and 24 h of LecB stimulation (5 or 10 μg/ml). (G) Quantification of cyclin D1 relative to tubulin. The mean value ± SEM of N = 3 independent experiments is reported. * denotes P < 0.05, ** denotes P < 0.01; one-way ANOVA was used for statistical analysis. (H) MTT assay assessing the cytotoxic effect of LecB.
NHKs were treated with the indicated LecB concentration with or without 30 mM L-fucose for 24 h. Staurosporine (1 μM) was used as positive control. MTT was added to the medium and left for 4 h. The absorbance at 570 nm was measured and plotted as fold change compared with the untreated or L-fucose–treated sample. The mean value ± SEM of N = 5 is plotted. ** denotes P < 0.01; one-way ANOVA was used for statistical analysis. (I) Representative images of keratinocyte monolayers after 24-h exposure to 5 μg/ml LecB with or without 30 mM L-fucose. White arrows in the zoomed image point at vacuolar structures induced by lectin treatment. Scale bar: 200 μm. Figure S3. AMPK activation is blocked by L-fucose supplementation. (A, B) Whole-cell lysates were analysed by Western blot to detect levels of pAMPK upon 4-h incubation with LecB (5 μg/ml) ± 30 mM L-fucose. The phosphorylated protein levels were normalised to actin. The graph reports the mean value ± SEM of N = 3 independent experiments. ** denotes P < 0.01, ns denotes not significant; one-way ANOVA was used for statistical analysis. The activation of AMPK correlates with the inhibition of ERK1/2 because the latter has been reported to negatively regulate the phosphorylation of AMPK via the liver kinase B1 (LKB1) ( Woods et al, 2003 ; Shaw et al, 2004 ). Furthermore, consistent with the inhibition of ERK1/2 activity, we observed strong cyclin D1 degradation ( Fig 3F and G ), which led to cell cycle arrest and, thus, to reduced cell viability ( Fig 3H ). Interestingly, the cytotoxic effect was preceded by an extensive cytoplasmic vacuolisation ( Fig 3I ). L-fucose supplementation rescued the phenotype and restored cell viability ( Fig 3H and I ). Altogether, these observations show that LecB impairs a key mechanism regulating cell survival and proliferation, resulting in cell cycle arrest. The cytotoxic effect of LecB is accompanied by the formation of LecB-positive intraluminal vesicle-containing vacuoles Next, we used transmission electron microscopy to further investigate the cellular vacuolisation triggered by LecB treatment. Untreated keratinocytes presented several vacuoles with light content (Category 1), corresponding to endo-lysosomal compartments ( Fig 4A ). In contrast, LecB-treated cells displayed numerous degradative vesicles (Category 2) and an additional type of vacuoles (Category 3), with irregular shapes and variable sizes ( Fig 4B ). Surprisingly, these vacuoles contained a large number of clearly defined intraluminal vesicles ( Fig 4B ) and they appeared to cluster together to form an intricate network ( Fig 4B , zoom). To gain insight into the origin of these structures, we performed a time-course experiment by collecting cells at different time points, from 30 min to 12 h after exposure to LecB ( Fig S4A–E′ ). The Category 3 vacuoles were detectable after 30 min of LecB incubation, but they were quite rare. However, they became more prominent 3 h post-LecB exposure ( Fig S4D ). Interestingly, specific regions of the plasma membrane of the LecB-treated keratinocytes were extremely irregular and characterised by the presence of ruffle-like structures ( Fig S4A–E ).
As the number of Category 3 vesicles located at these regions increased over time, we hypothesised that the intraluminal vesicle-containing vacuoles originate from subdomains of the plasma membrane upon LecB interaction with surface proteins. This hypothesis implies that LecB is present in the Category 3 vacuoles. To test this, we exposed keratinocytes to biotinylated LecB before performing immunoelectron microscopy (IEM) using anti-biotin antibodies ( Fig 4C–F ). Notably, we specifically detected LecB on both the limiting membrane of the Category 3 vacuoles and, to a minor extent, on their intraluminal vesicles ( Fig 4D ). Moreover, we also identified biotinylated LecB at the plasma membrane, where it was especially enriched in the ruffle-like regions ( Fig 4F ), supporting the notion that LecB drives the formation of a plethora of intracellular vesicles originating from the plasma membrane. Figure 4. The cytotoxic effect of LecB is preceded by the formation of intraluminal vesicle-containing vacuoles. (A, B) NHKs were incubated with 5 μg/ml LecB and processed for conventional electron microscopy at 12 h post-incubation. (A, B) Representative electron micrographs of the untreated (A) and LecB-treated (B) cells. Black hashtags point at Category 1 vacuoles, whereas black and red asterisks indicate Category 2 and Category 3 vacuoles, respectively. Zoomed image of panel (B) shows a higher magnification of the Category 3 vacuoles containing intraluminal vesicles. Scale bars: 1 μm. (C, D, E, F) Cells treated with 5 μg/ml biotinylated LecB or un-tagged LecB for 12 h were subjected to immuno-EM. (C, E) Control staining showing no nonspecific antibody binding. (D) LecB localises in internal vesicles, either in the lumen (panel [D] zoom, black arrowheads) or at the limiting membrane (white arrowheads). (F) LecB localisation at the plasma membrane, in the ruffle-like region. Scale bars: 500 nm. Scale bar zoom panel (D): 200 nm. E, endosome; ER, endoplasmic reticulum; G, Golgi apparatus; k, keratin; M, mitochondrion; N, nucleus; PM, plasma membrane. Figure S4. LecB-containing vesicles originate from ruffle-like regions at the plasma membrane. (A, B, C, D, E) Representative electron micrographs of keratinocytes treated with 5 μg/ml LecB for the indicated time points and processed for conventional EM. (A′, B′, C′, D′, E′) Additional area for each indicated condition with higher magnification. Black hashtags point at Category 1 vacuoles, whereas black and red asterisks indicate Category 2 and Category 3 vacuoles, respectively. Rectangular box highlights ruffle-like regions. Scale bars: 1 μm. E, endosome; ER, endoplasmic reticulum; G, Golgi apparatus; k, keratin; M, mitochondrion; N, nucleus; PM, plasma membrane. LecB is trafficked towards an endocytic degradative route Next, we proceeded to characterise the intracellular LecB-positive vacuoles detected by IEM in immunofluorescence experiments, where keratinocytes exposed to fluorescently labelled LecB were stained with antibodies recognising different organelle marker proteins. We found a time-dependent colocalisation of LecB with microtubule-associated proteins 1A/B light chain 3B (LC3) and ras-related protein Rab-9A (RAB9A), which are generally enriched on autophagosomes and late endosomes, respectively ( Fig 5A and B ).
Moreover, although to a lesser extent, we also observed a time-dependent colocalisation between LecB and lysosomal-associated membrane glycoprotein 1 (LAMP1) ( Fig S5A ). In contrast, no colocalisation with the recycling endosome marker protein ras-related protein Rab-11 (RAB11) was detected ( Fig S5B ). Endocytosis and internalisation of LecB were also assessed biochemically by analysing the ubiquitination of the proteins associated with LecB, as a large number of growth factor receptors undergo ubiquitination upon endocytosis ( Sehat et al, 2008 ; Haglund & Dikic, 2012 ). In a time-course pull-down experiment, we detected a decrease in the coprecipitated levels of IGF-1R and an increase in ubiquitin levels in the eluted fraction, indicating that LecB-interacting partners are ubiquitinated and thus very likely internalised ( Fig 5C and D ). Figure 5. LecB localises in LC3- and RAB9-positive compartments. (A, B) Confocal micrographs with respective colocalisation quantification of keratinocytes stimulated with 5 μg/ml of fluorescently labelled LecB (green). Panels indicated as “+LecB” refer to 12 h incubation. (A, B) After fixation and permeabilisation, the cells were stained for LC3 (A) and RAB9 (B) (both in red). Graphs report the mean value ± SEM of Mander’s overlap coefficients calculated from at least three independent experiments. **** denotes P < 0.0001; one-way ANOVA was used for statistical analysis. Scale bar: 10 μm. (C, D) Western blot of eluted samples from time series pull-down assays using biotinylated LecB (5 μg/ml). (C, D) Membranes were probed for LecB, IGF-1R (C) and ubiquitin (D). Figure S5. LecB colocalises with LAMP1-positive compartments. (A, B) Confocal micrographs of NHKs stimulated with 5 μg/ml of fluorescently labelled LecB (green) permeabilised and stained for (A) LAMP1 and (B) RAB11 (both in red). Panels indicated as “+LecB” refer to 12-h incubation. Note that LAMP1 signal in the untreated panel is not visible due to the high difference in intensity between untreated and treated conditions. Scale bar: 10 μm. Graphs to the right of the panels show the quantification of Mander’s overlap coefficients between LecB and LAMP1 and between LecB and RAB11. Error bars indicate means ± SEM of N = 3 independent experiments. ** denotes P < 0.01, **** denotes P < 0.0001; one-way ANOVA was used for statistical analysis. Taken together, these results demonstrate that LecB is trafficked towards degradative compartments and suggest that ubiquitin may be the endocytic signal for the degradation of IGF-1R and potentially other LecB-interacting partners. LAP participates in targeting LecB–IGF-1R complexes to degradation As we saw a time-dependent increase in colocalisation between LC3 and LecB structures ( Fig 5A ), we decided to investigate this further. LC3 is a ubiquitin-like protein that upon autophagy induction is converted from a cytoplasmic form (i.e., LC3-I) to one associated with the autophagosomal membranes (i.e., LC3-II) through conjugation to phosphatidylethanolamine ( Kabeya et al, 2004 ). LC3-II can also be generated on the limiting membrane of endosomes during processes such as LC3-associated phagocytosis (LAP) ( Sanjuan et al, 2007 ; Florey et al, 2011 ).
The fact that LecB was detected on single-membrane vacuoles (autophagosomes are double-membrane vesicles with cytoplasmic content) that are positive for RAB9 ( Figs 4D and 5B ) pointed to the activation of a LAP-like pathway. Although both autophagy and LAP induce the formation of LC3-II, only autophagy leads to the degradation of this conjugate ( Tanida et al, 2005 ). Indeed, we observed a time-dependent increase of LC3-II levels upon exposure of keratinocytes to LecB ( Fig 6A and B ). However, this protein was not turned over, because no difference in LC3-II levels could be observed when LecB-treated keratinocytes were incubated with either bafilomycin A1 or cycloheximide to assess LC3-II stability ( Fig S6A–D ). To clarify the origin of LC3-II, we silenced ATG13 ( Fig S6E and F ), an essential component of the ULK kinase complex, which is required for macroautophagy ( Ganley et al, 2009 ) and dispensable for LAP ( Florey et al, 2011 ; Martinez et al, 2011 ). Our results revealed that LC3-II formation upon cell treatment with LecB is ATG13 independent ( Fig S6E and G ). Furthermore, when we performed LecB treatment in combination with cytochalasin D, an inhibitor of actin polymerisation, we found a decrease in LecB uptake ( Fig S7A and B ), indicating that actin is important for LecB internalisation. Figure 6. IGF-1R is sorted to degradation by LecB. (A, B) NHKs were incubated with LecB (5 μg/ml) for the indicated time points and whole cell lysates were immunoblotted for LC3 and β-actin (A). (B) The levels of LC3-II were normalised to β-actin and depicted as fold change relative to the untreated condition. Error bars indicate means ± SEM of N = 5 independent experiments. * denotes P < 0.05, ** denotes P < 0.01; one-way ANOVA was used for statistical analysis. (C) Confocal micrographs of keratinocytes treated with 5 μg/ml of fluorescently labelled LecB (pink) and stained for LC3 (red) and IGF-1R (green). Panel indicated as “+LecB” refers to 12 h incubation. White arrows point at colocalisation among LecB, LC3, and IGF-1R. Scale bar: 10 μm. N = 3. (D, E) Representative images of NHKs treated with LecB and stained for (D) IGF-1R (green), LC3 (red), and (E) TfR (green). Intensity profiles of single cells along the yellow line are shown. Scale bar: 10 μm. Figure S6. LecB-mediated increase of LC3-II is ATG13 independent. (A, B, C, D) Representative blots and relative quantifications. The cells were treated with 5 μg/ml of LecB for 12 h ± 200 nM bafilomycin A1 (Bafi) or ±20 μg/ml cycloheximide (Cyclo). Whole-cell lysates were probed for LC3 and β-actin was used as loading control. (A, B, C, D) Graphs indicate the fold change of LC3-II levels, expressed as means ± SEM of N = 4 (A, B) and N = 6 (C, D) independent experiments. *** denotes P < 0.001, **** denotes P < 0.0001, ns denotes not significant; one-way ANOVA was used for statistical analysis. (E, F, G) ATG13 was silenced via siRNA and its expression quantified by Western blot (F). (G) Silenced cells were treated with LecB for 3 h and LC3-II levels were quantified. Graphs report the mean value ± SEM of N = 4 independent experiments. ** denotes P < 0.01, **** denotes P < 0.0001, ns denotes not significant; one-way ANOVA was used for statistical analysis. Figure S7. Cytochalasin D reduces LecB uptake.
(A) Confocal micrographs after 45-min incubation with 5 μg/ml of fluorescently labelled LecB (green) with or without cytochalasin D (0.5 μg/ml). Cells were stained for the early endosome marker EEA1 (red) and for the nuclei (blue). Scale bar: 10 μm. (B) Quantification of LecB intensity per cell. Graphs report the mean value ± SEM of N = 4 independent experiments. * denotes P < 0.05; unpaired t test was used for statistical analysis. Next, we investigated whether IGF-1R also colocalised with LC3-positive vesicles. Indeed, we found colocalisation between IGF-1R, LC3, and LecB ( Fig 6C ), but interestingly, we did not observe LC3 recruitment onto membranes when cells were treated exclusively with IGF-1 ( Fig S8 ). Finally, we examined another receptor, the transferrin receptor (TfR), which was not pulled down by LecB ( Fig S9 ). Unlike IGF-1R and other growth factor receptors, which can be internalised via clathrin-independent mechanisms ( Boucrot et al, 2015 ), TfR enters the cells via clathrin-mediated endocytosis and is recycled back to the plasma membrane upon iron delivery ( Hopkins & Trowbridge, 1983 ). Interestingly, whereas we observed colocalisation between IGF-1R and LC3 ( Fig 6D ), we detected neither changes in TfR distribution nor LC3 colocalisation upon LecB treatment ( Fig 6E ). This is supported by the absence of colocalisation between LecB and RAB11 compartments as well ( Fig S5B ) and suggests that LecB preferentially commits receptors to degradation, thereby affecting their signalling. Figure S8. IGF-1 does not affect LC3 distribution. Representative confocal images depicting NHKs treated with 100 ng/ml IGF-1 or 100 nM bafilomycin A1 for 12 h and stained for LC3 (green) and IGF-1R (red). Scale bar: 10 μm. Figure S9. Transferrin receptor is not coprecipitated upon LecB treatment. Western blot of eluted samples from pull-down assays with biotinylated LecB (5 μg/ml) with or without 30 mM L-fucose. Membranes were probed for TfR and LecB. M indicates the marker lane. Taken together, these results demonstrate that LC3 recruitment onto LecB–IGF-1R complex-positive endosomes is independent of macroautophagy. Discussion Bacterial lectins have predominantly been described as adhesion proteins, which promote the attachment of bacteria to the cell plasma membrane through their interaction with host cell surface glycans ( Sharon, 1987 ). This study provides new insights into the role of the P. aeruginosa lectin LecB as a virulence factor that impairs growth factor receptor signalling and trafficking, thereby affecting keratinocyte fitness. Pull-down studies showed the co-isolation of LecB with several plasma membrane receptors in keratinocytes, among which IGF-1R and EGFR were two of the principal interactors. We further demonstrated that LecB depletes IGF-1R from the plasma membrane by inducing its internalisation. To modulate signalling, IGF-1R (as well as EGFR) needs to be activated by ligand binding and consequent autophosphorylation on three tyrosine residues (i.e., Tyr1135, Tyr1131, and Tyr1136) in its C-terminal tail ( Favelyukis et al, 2001 ; Sousa et al, 2012 ).
Interestingly, unlike IGF-1, LecB triggers IGF-1R endocytic trafficking towards degradative compartments without receptor activation. PI3K/AKT and MAPK/ERK signalling cascades have been intensively investigated for their role in promoting cell and organismal growth upon growth factor signals. This is mainly achieved by AKT-mediated activation of mTOR, which is a key regulator of anabolic processes necessary during cell growth ( Nave et al, 1999 ; Dennis et al, 2001 ), and by the phosphorylation of ERK1/2, which results in cell proliferation ( Rossomando et al, 1991 ; Meloche & Pouysségur, 2007 ). We found that neither mTOR nor ERK1/2 is activated upon cell exposure to LecB. Specifically, we observed a decrease in the levels of phospho-ERK1/2 after 1 h of LecB treatment, which correlates with an increase in phospho-AMPK. Reciprocal feedback loops have been described to operate between the AMPK and the MAPK/ERK pathways: whereas ERK1/2 activation was reported to be attenuated upon stimulation of WT MEFs with the AMPK activator A-769662, little effect on ERK1/2 phosphorylation was detected when AMPK-null MEFs were used ( Zheng et al, 2009 ; Shen et al, 2013 ). The suppression of cell survival signalling pathways can also explain the arrest of the cell cycle at the G1/S transition observed upon LecB incubation. ERK1/2 activity is in fact required for both cell cycle entry and the suppression of negative cell cycle regulators. Specifically, cyclin D1 expression is enhanced by the activation of the MAPK/ERK pathway ( Lavoie et al, 1996 ; Weber et al, 1997 ), via the transcription factors c-Fos ( Burch et al, 2004 ) and Myc ( Seth et al, 1991 ). Cells entering the S phase show low levels of cyclin D1, which is then induced again and reaches high levels to ensure progression into the G2 phase ( Yang et al, 2006 ). LecB stimulation mediates a strong reduction of cyclin D1, whose levels are not restored over time, resulting in the arrest of the cell cycle and the subsequent induction of cell death. Therefore, we speculate that LecB, despite being devoid of catalytic activity, may contribute to promoting tissue damage to facilitate bacterial dissemination and extracellular multiplication during wound infections. Furthermore, it is possible that LecB induces a similar fate in other cell types, including immune cells, thus further promoting the infection. Future work will provide new insights into the impact of LecB on the immune response during chronic wound infections. Interestingly, the cytotoxic effect of LecB was preceded by an extensive cytoplasmic vacuolisation. Electron microscopy inspection of LecB-treated cells revealed the peculiar nature of these vacuoles, which accumulate over time and display an increased number of intraluminal vesicles. We also show that these vacuoles probably originate from ruffle-like plasma membrane regions where LecB was also found to be enriched. In the case of other bacterial lectins, such as the B-subunit of Shiga toxin (StxB) from Shigella dysenteriae and LecA from P. aeruginosa , the sole interaction with the glycosphingolipid receptor globotriaosyl ceramide (Gb3) is sufficient to drive membrane shape changes leading to their subsequent uptake ( Römer et al, 2007 ; Eierhoff et al, 2014 ). In the case of LecB, however, pull-down experiments point to the existence of several receptors, also attributable to the fact that fucosylation is a very common modification.
LecB exists as a tetramer and each monomer possesses a binding pocket with the highest affinity for L-fucose and its derivatives ( Garber et al, 1987 ; Sabin et al, 2006 ). Although the L-fucose dissociation constant is in the micromolar range (2.9 μM), an increase in avidity can be achieved through a higher degree of interactions. This implies that LecB might crosslink several different surface receptors, thus inducing a higher degree of membrane rearrangements that could explain the extensive alterations at the plasma membrane. Moreover, multiple interactions could provide the bacterium with additional resources for the initiation of host tissue colonisation. By following LecB trafficking, we sought to characterise the nature of the LecB-mediated vacuoles, which, from a structural point of view, show similarities to multivesicular bodies, as they contain numerous intraluminal vesicles. Immunofluorescence experiments revealed a time-dependent increase of colocalisation between LecB and RAB9, LC3, and LAMP1. Therefore, our data indicate that LC3 is recruited onto LecB-containing late endosomes, which may further favour their lysosomal degradation through a process that may be similar or identical to LAP. LAP can be triggered by several receptors. In addition to toll-like receptors, whose signalling during phagocytosis rapidly recruits LC3 to phagosomes, the phosphatidylserine receptor TIM4 or the C-type lectin dectin-1 can also induce LAP, to efficiently clear dead cells and to facilitate antigen presentation, respectively ( Sanjuan et al, 2007 ; Martinez et al, 2011 ; Ma et al, 2012 ). LC3 can associate with single-membrane phagosomes even in the absence of pathogens or dead cells, such as in the phagocytosis and degradation of photoreceptor outer segments by the retinal pigment epithelium or during the secretion of mucins in goblet cells ( Kim et al, 2013 ; Patel et al, 2013 ). Our data indicate that IGF-1R colocalises with LecB and LC3, suggesting that LAP or a LAP-like pathway is involved in IGF-1R sorting for degradation after LecB-induced receptor internalisation. In the current model, tyrosine kinase receptors are internalised upon ligand binding and trafficked to early endosomes. From here, they can either be sorted to lysosomal degradation via multivesicular bodies or be recycled back to the plasma membrane. The equilibrium between IGF-1R degradation and recycling is essential to modulate receptor signalling ( Morrione et al, 1997 ; Monami et al, 2008 ). Our data demonstrate that IGF-1R trafficking is subverted upon cell treatment with LecB, which induces the sorting of this receptor towards degradative routes without activating it. We do not know yet whether the interaction between LecB and IGF-1R is direct or indirect, despite their co-isolation as a complex. However, because IGF-1R is not the sole protein interacting with LecB in keratinocytes, we speculate that this “targeting for degradation” strategy may apply to other plasma membrane receptors as well (e.g., EGFR) and that it may be used by P. aeruginosa to silence host cell receptors and favour tissue colonisation in wounds. Wound infections represent a socioeconomic burden for the health care system and, given that P. aeruginosa is very hard to eliminate with the available antibiotics, there is an urgent need for the development of alternative therapeutic strategies ( Norberg et al, 2017 ; Sommer et al, 2018 ). Our findings shed new light on the P.
aeruginosa lectin LecB, showing that it is capable of inducing a succession of cellular events, despite being devoid of catalytic activity, solely by virtue of its capability to bind sugar moieties on the plasma membrane. This knowledge paves the way for a better understanding of P. aeruginosa wound infections. Materials and Methods Antibodies, inhibitors, and activators The antibodies used are listed in Tables S2 and S3. Aprotinin (0.8 μM), leupeptin (11 μM), and pefabloc (200 μM), used as protease inhibitors, and phosphatase inhibitor cocktail 3 (1:100) were from Sigma-Aldrich. To block LecB binding to the host cell membranes, L-fucose (Sigma-Aldrich) was used at a concentration of 30 mM. Cycloheximide (20 μg/ml) and staurosporine (500 nM) were both purchased from Sigma-Aldrich and used as a protein synthesis inhibitor and as positive control for cell death, respectively. Bafilomycin A1 was from InvivoGen and used at a concentration of 100 or 200 nM, whereas cytochalasin D was from Sigma-Aldrich and applied at a concentration of 0.5 μg/ml. Human IGF-1 was from Thermo Fisher Scientific and used as an IGF-1 receptor activator. Table S2 List of primary antibodies used. Table S3 List of secondary antibodies used. Cell culture NHKs, kindly provided by D Kiritsi (Department of Dermatology), were grown at 37°C in the presence of 5% CO₂ in keratinocyte medium supplemented with bovine pituitary extract, epidermal growth factor, and 0.5% penicillin/streptomycin. Cells were seeded 24 h before each experiment. If not stated differently, all treatments were performed in complete medium. LecB production and in vitro labelling Recombinant LecB (UniProt ID: Q9HYN5_PSEAE) was produced from Escherichia coli BL21(DE3) containing pET25pa21 and purified via chromatography using a mannose-agarose column as previously described ( Mitchell et al, 2002 ; Chemani et al, 2009 ). The eluate was dialysed in water for 7 d and finally lyophilised. The obtained powder was dissolved in PBS (138 mM NaCl, 2.7 mM KCl, 8 mM Na₂HPO₄, and 1.5 mM KH₂PO₄, pH 7.4) and sterile-filtered. In addition, to exclude endotoxin contamination, LecB was further purified using an endotoxin removal column (Hyglos), and an LAL chromogenic endotoxin quantitation kit (Thermo Fisher Scientific) was used to verify endotoxin levels. The purity of the lectin was confirmed by SDS–PAGE. If not stated differently, LecB was used at a final concentration of 5 μg/ml (106 nM). LecB was fluorescently labelled using Cy3 (GE Healthcare) or Alexa Fluor 488 (Thermo Fisher Scientific) monoreactive NHS ester and purified using Zeba Spin desalting columns (Thermo Fisher Scientific) according to the manufacturers’ instructions. Biotinylated LecB was obtained using sulfo-NHS-SS-Biotin (Thermo Fisher Scientific) according to the supplier’s instructions and dialysed two times for 1 h in water and one time overnight in PBS. Gene silencing Cells were incubated with a mix of siRNA against ATG13 (Dharmacon) and transfection reagent (Dharmacon) for 5–6 h. At 24 h post-transfection, lysates were checked for silencing efficiency. LecB stimulation was performed 36 h post-silencing. A control siRNA (Santa Cruz) was used to evaluate off-target effects. Mass spectrometry analyses Cells were seeded in T-75 flasks and stimulated with biotinylated LecB at a concentration of 5 μg/ml. Upon treatment, the cells were washed in PBS and lysed with 50 mM Tris–HCl, pH 7.5, 150 mM sodium chloride, 1% (vol/vol) IGEPAL CA-630, and 0.5% (wt/vol) sodium deoxycholate in water.
The protein lysates were incubated overnight at 4°C with streptavidin agarose beads (Thermo Fisher Scientific) to precipitate LecB–biotin–protein complexes. The beads were washed four times in PBS and then resuspended in 8 M urea. Immunoprecipitates were predigested with LysC (1:50, w/w), followed by the reduction of disulfide bonds with 10 mM DTT and subsequent alkylation with 50 mM iodoacetamide. Then samples were diluted 1:5 with ammonium bicarbonate buffer, pH 8, and trypsin (1:50, w/w) was added for overnight digestion at RT. The resulting peptide mixtures were acidified and loaded on C18 StageTips ( Rappsilber et al, 2007 ). Peptides were eluted with 80% acetonitrile (ACN), dried using a SpeedVac centrifuge, and resuspended in 2% ACN, 0.1% TFA, and 0.5% acetic acid. Peptides were separated on an EASY-nLC 1200 HPLC system (Thermo Fisher Scientific) coupled online to a Q Exactive HF mass spectrometer via a nanoelectrospray source (Thermo Fisher Scientific). Peptides were loaded in buffer A (0.5% formic acid) on in-house–packed columns (75-μm inner diameter, 50-cm length, and 1.9-μm C18 particles from Dr. Maisch GmbH). Peptides were eluted with a nonlinear 270-min gradient of 5–60% buffer B (80% ACN, 0.5% formic acid) at a flow rate of 250 nl/min and a column temperature of 50°C. The Q Exactive HF was operated in a data-dependent mode with a survey scan range of 300–1,750 m/z and a resolution of 60,000 at m/z 200. MaxQuant software (version 1.5.3.54) was used to analyse MS raw files ( Cox & Mann, 2008 ). MS/MS spectra were searched against the human Proteome UniProt FASTA database and a common contaminants database (247 entries) by the Andromeda search engine ( Cox et al, 2011 ). Immunofluorescence and confocal microscopy Between 4 × 10⁴ and 5 × 10⁴ cells were seeded on 12-mm glass coverslips in a 24-well plate and cultured for 1 d before the experiment. Keratinocytes were treated with LecB at the indicated concentrations and time points. Surface or whole cell staining was then performed. For surface staining, NHKs were fixed with 4% (wt/vol) formaldehyde (FA) for 10 min and quenched with 50 mM ammonium chloride for 5 min. The cells were blocked with 3% (wt/vol) BSA in PBS and incubated overnight with the primary antibody diluted in blocking solution (see Table S2 for details of the antibodies used). The cells were then washed in PBS and incubated for 30 min with the dye-labelled secondary antibody. For whole cell staining, the cells were fixed with 4% (wt/vol) FA and quenched, as previously described. When methanol fixation was used instead of FA, the cells were incubated with ice-cold methanol for 10 min and rinsed twice with PBS. After fixation, a 10-min incubation with 0.1% (vol/vol) Triton X-100 in PBS was performed. The cells were blocked and subsequently stained with the primary and secondary antibody, as previously described, but using blocking solution supplemented with 0.05% Triton X-100. Nuclei were additionally stained with DAPI diluted in PBS with 0.1% (vol/vol) Triton X-100. After three additional washes, glass coverslips were mounted with Mowiol and imaged using an A1R Nikon confocal microscopy system with a 60× oil immersion objective (NA = 1.49). Z-stacks of at least three different areas per condition were acquired and analysed with Fiji ImageJ 1.0 software. Fiji’s Coloc2 plugin was used for colocalisation analysis. Western blot analyses NHKs were seeded in a 12-well plate at a density of 1.3 × 10⁵ per well and stimulated with LecB for the indicated time points.
After the treatment, the cells were washed with PBS and lysed in RIPA buffer (20 mM Tris–HCl, pH 8, 0.1% [wt/vol] SDS, 10% [vol/vol] glycerol, 137 mM NaCl, 2 mM EDTA, and 0.5% [wt/vol] sodium deoxycholate) in water supplemented with protease and phosphatase inhibitors. Protein concentration was measured using a bicinchoninic acid (BCA) assay kit (Thermo Fisher Scientific) and normalised. The samples were then separated via SDS–PAGE and subsequently transferred onto a nitrocellulose membrane. Membranes were blocked in 3% (wt/vol) BSA or 5% (wt/vol) milk powder in TBS supplemented with 0.1% (vol/vol) Tween 20 and incubated with the primary and horseradish peroxidase-conjugated secondary antibodies in blocking solution (see antibody list in Table S2). Protein bands were visualised via a chemiluminescence reaction using the Fusion-FX7 Advance imaging system (Peqlab Biotechnology). A densitometric analysis of at least three independent experiments was carried out using Fiji ImageJ 1.0 software. Cell viability assays To assess cytotoxicity and morphological changes upon treatment with LecB, NHKs were seeded in a 24-well plate at a density of 5 × 10⁴ per well and treated for 24 h with different LecB concentrations ± L-fucose (30 mM). Images were acquired using an Evos FL Cell Imaging System (Thermo Fisher Scientific) with a 20× objective (NA = 0.45). An MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) tetrazolium reduction assay kit (Merck) was additionally used to quantify cell viability upon LecB treatment. For this purpose, cells were grown in a 96-well plate and stimulated for 24 h with the indicated LecB concentrations ± L-fucose. The MTT solution was added for 4 h at 37°C with 5% CO₂ and the absorbance at 570 nm was measured. Ultrastructural analyses For conventional transmission electron microscopy, NHKs were treated or not with 5 μg/ml of LecB for the indicated time periods. An equal volume of double-strength fixative (4% paraformaldehyde and 4% glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.4) was then added to the cells for 20 min at room temperature before fixing the cells with one volume of single-strength fixative (2% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.4) for 2 h at room temperature. After washes with cacodylate buffer (pH 7.4), the cells were then scraped and embedded as previously described ( Verheije et al, 2008 ). Ultrathin 70-nm sections were cut using a Leica EM UC7 ultramicrotome (Leica Microsystems) and stained with uranyl acetate and lead citrate as previously described ( Verheije et al, 2008 ). For the IEM analyses, NHKs were treated for 12 h with biotinylated LecB and fixed by adding 4% PFA and 0.4% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4) in an equal volume to the DMEM medium, followed by incubation for 20 min at room temperature. The cells were subsequently fixed in 2% PFA and 0.2% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4) for 3 h at room temperature and then embedded for the Tokuyasu procedure before cutting into ultrathin cryosections and labelling them with immunogold, as previously described ( Slot & Geuze, 2007 ). Biotinylated LecB was detected using an anti-biotin rabbit antibody (100-4198; Rockland). The labelling was revealed with protein A conjugated with 10-nm gold (CMC). As a labelling control, NHKs were incubated with non-biotinylated LecB. No immunogold labelling was detected in this sample. The cell sections were analysed using a CM100 Bio TEM (FEI).
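For readers who wish to reproduce the colocalisation quantification mentioned in the confocal microscopy section above (Fiji's Coloc2 plugin reporting Mander's overlap coefficients), the following is a minimal Python sketch of the underlying computation. It is illustrative only: the function name and the fixed thresholds are our own choices, and Coloc2's optional automatic (e.g., Costes) thresholding is not reproduced here.

```python
import numpy as np

def manders_coefficients(ch1, ch2, t1=0.0, t2=0.0):
    """Mander's M1/M2: the fraction of each channel's total intensity
    found in pixels where the other channel exceeds its threshold."""
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    m1 = ch1[ch2 > t2].sum() / ch1.sum()  # e.g., LecB signal overlapping LC3
    m2 = ch2[ch1 > t1].sum() / ch2.sum()  # e.g., LC3 signal overlapping LecB
    return m1, m2

# Toy example with random arrays standing in for two background-corrected
# channels of one confocal z-slice (real input would come from the images).
rng = np.random.default_rng(0)
lecb_channel = rng.random((256, 256))
lc3_channel = rng.random((256, 256))
print(manders_coefficients(lecb_channel, lc3_channel, t1=0.5, t2=0.5))
```

Averaging such per-image coefficients across independent experiments would then yield mean ± SEM values of the kind plotted in Figs 5 and S5.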
Immunofluorescence of chronic human wounds Fixed sections of human chronic wounds were kindly provided by D Kiritsi (Department of Dermatology, Medical Centre, Albert Ludwigs University, Freiburg). Antigens were retrieved with 0.05% pronase (Sigma-Aldrich). Sections were blocked with 3% (wt/vol) BSA in PBS and probed overnight with an anti-LecB or anti-P. aeruginosa antibody. After two washing steps with PBS supplemented with 0.1% Tween 20, the sections were stained with a dye-labelled secondary antibody, counterstained with DAPI, and mounted in fluorescence mounting medium (Dako). Images were acquired using a Zeiss Axio Imager A1 fluorescence microscope. Statistics All data were obtained from at least three independent experiments and are shown as the means ± SEM. Statistical analysis was performed using GraphPad Prism software. One-way or two-way ANOVA was chosen to assess significance in experiments with multiple conditions. Otherwise, when experimental data for one condition were compared with the corresponding control condition, multiple t tests were applied. A P -value < 0.05 was considered as statistically significant. * denotes P < 0.05; ** denotes P < 0.01; *** denotes P < 0.001; **** denotes P < 0.0001; ns denotes not significant. Acknowledgements The research group of W Römer was supported by the German Research Foundation (BIOSS—EXC 294, CIBSS—EXC-2189—Project ID 390939984, GSC-4, and RO 4341/2-1), the Ministry of Science, Research and the Arts Baden-Württemberg (Az: 33-7532.20), and by a starting grant from the European Research Council (Programme “Ideas”, ERC-2011-Stg 282105-lec&lip2invade). The work in A Nyström’s group was supported by the German Research Foundation (NY 90/5-1). F Reggiori is supported by ZonMW VICI (016.130.606), ZonMW TOP (91217002), ALW Open Programme (ALWOP.310), and Marie Skłodowska-Curie Cofund (713660) and Marie Skłodowska-Curie ETN (765912) grants. M Mari is supported by an ALW Open Programme (ALWOP.355). This publication is partially based upon work from COST Action CA18103 (INNOGLY), supported by COST (European Cooperation in Science and Technology). The article processing charge was funded by the German Research Foundation (DFG) and the University of Freiburg in the funding programme Open Access Publishing. We appreciate the support by the BIOSS Toolbox, especially P Salavei, in the purification of the bacterial lectins. Author Contributions A Landi: conceptualization, data curation, investigation, methodology, and writing—original draft, review, and editing. M Mari: conceptualization, data curation, investigation, and writing—review and editing. S Kleiser: data curation and investigation. T Wolf: investigation and methodology. C Gretzmeier: investigation. I Wilhelm: investigation. D Kiritsi: methodology and writing—review and editing. R Thünauer: investigation. R Geiger: supervision and methodology. A Nyström: conceptualization, supervision, and writing—review and editing. F Reggiori: conceptualization, supervision, methodology, and writing—review and editing. J Claudinon: investigation. W Römer: conceptualization, supervision, funding acquisition, and writing—review and editing. Conflict of Interest Statement The authors declare that they have no conflict of interest. Received May 10, 2019. Revision received October 15, 2019. Accepted October 16, 2019. © 2019 Landi et al. This article is available under a Creative Commons License (Attribution 4.0 International, as described at ).
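As a brief quantitative companion to the Materials and Methods above, the sketch below illustrates three numerical conventions used throughout the paper: the dose conversion behind "5 μg/ml (106 nM)" (consistent with a LecB tetramer of roughly 47 kDa; the exact molecular mass is our assumption, not stated in the text), fold-change normalisation of band intensities to a loading control, and a one-way ANOVA across independent experiments. SciPy is assumed to be available, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def mass_conc_to_nM(ug_per_ml, mw_da):
    """Convert a mass concentration (ug/ml) to molarity (nM)."""
    return ug_per_ml * 1e-3 / mw_da * 1e9  # ug/ml -> g/l -> mol/l -> nM

# 5 ug/ml of a ~47-kDa tetramer (assumed mass) gives ~106 nM, matching the text.
print(round(mass_conc_to_nM(5.0, 47_000)))  # -> 106

def fold_change(band, loading, control_idx=0):
    """Normalise band intensities to a loading control (e.g., tubulin), then
    express each condition relative to the control (e.g., untreated) sample."""
    norm = np.asarray(band, float) / np.asarray(loading, float)
    return norm / norm[control_idx]

# Hypothetical densitometry values: untreated, LecB 12 h, LecB 24 h.
print(fold_change([1800.0, 950.0, 420.0], [2000.0, 2100.0, 1950.0]))

# One-way ANOVA over hypothetical fold changes from three experiments.
untreated = [1.00, 1.00, 1.00]
lecb_12h = [0.52, 0.47, 0.55]
lecb_24h = [0.25, 0.21, 0.28]
print(stats.f_oneway(untreated, lecb_12h, lecb_24h))
```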
| Areas of skin that have been damaged by an injury are ideal niches for Pseudomonas aeruginosa, a bacterium that impairs the healing process in tissue and creates favorable conditions for infections. Because of its resistance to most available antibiotics, this bacterium is in the "Priority 1 / CRITICAL" category of the World Health Organization's (WHO) global list of priority pathogens. Prof. Dr. Winfried Römer and his research team at the University of Freiburg have recently discovered a new function of the Pseudomonas aeruginosa lectin LecB: LecB can block the cell cycle, meaning that host cells can no longer divide and ultimately die. Wound healing is therefore considerably slowed down or halted altogether. The team of scientists recently published their research in the online scientific journal Life Science Alliance. LecB of Pseudomonas aeruginosa is a bacterial virulence factor that impairs signaling of growth factor receptors—in other words, proteins found on the surface of host cells that transmit signals to promote the growth and reproduction of tissues. As a result, the cell cycle is blocked. Lectins are proteins that bind to sugar residues on surface receptors and are not catalytically active, meaning they do not accelerate chemical processes. Many bacteria use lectins to bind effectively to the epithelium and endothelium layers of cells coating the body's surface area, thus facilitating the colonization of tissue. The researchers discovered that the bacterial lectin LecB is present in chronically infected human wounds, therefore making it possible for Pseudomonas aeruginosa to remain in those wounds. They also determined that LecB is capable of inducing the internalization—in other words, the uptake into the cell's interior—and the degradation of growth factor receptors in keratinocytes, a type of cell in the epidermis. Normally, the binding of growth factors to growth factor receptors activates so-called downstream signaling pathways, which accelerate tissue growth. "We were surprised that LecB does not contribute to the activation of growth factor signaling pathways, but that it triggers this silent internalization of receptors without activation," says Alessia Landi, one of the researchers from the University of Freiburg and the paper's first author. The scientists also demonstrated that LecB blocks the cell cycle and causes cell death. This process is preceded by intense vacuolization, which is the development of several larger, enclosed sacs (vacuoles) within the host cell. These vacuoles, which display unique morphological characteristics, are formed from plasma membrane structures in which LecB is enriched. "Although lectins are not catalytically active, LecB appears to interfere with important host cell processes, like the cell cycle, in a way we don't yet understand," says Römer, adding, "This likely occurs through a restructuring of the cell membrane triggered by LecB." The discovery of this new function of the Pseudomonas aeruginosa lectin LecB has motivated the team to pursue more research. As Römer says, it is possible that LecB performs this function, promoting further tissue damage and facilitating the spread of bacteria, in other cell types as well, including immune cells. Future studies will therefore focus on the effects of LecB on immune responses in chronic wound infections. | 10.26508/lsa.201900422
Other | Compound eyes: The visual apparatus of today's horseshoe crabs goes back 400 million years | Brigitte Schoenemann et al, Insights into the 400 million-year-old eyes of giant sea scorpions (Eurypterida) suggest the structure of Palaeozoic compound eyes, Scientific Reports (2019). DOI: 10.1038/s41598-019-53590-8 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-019-53590-8 | https://phys.org/news/2019-12-compound-eyes-visual-apparatus-today.html | Abstract Sea scorpions (Eurypterida, Chelicerata) of the Lower Devonian (~400 Mya) lived as large, aquatic predators. The structure of modern chelicerate eyes is very different from that of mandibulate compound eyes [Mandibulata: Crustacea and Tracheata (Hexapoda, such as insects, and Myriapoda)]. Here we show that the visual system of Lower Devonian (~400 Mya) eurypterids closely matches that of xiphosurans (Xiphosura, Chelicerata). Modern representatives of this group, the horseshoe crabs (Limulidae), have cuticular lens cylinders and usually also an eccentric cell in their sensory apparatus. This strongly suggests that the xiphosuran/eurypterid compound eye is a plesiomorphic structure with respect to the Chelicerata, and probably ancestral to that of Euchelicerata, including Eurypterida, Arachnida and Xiphosura. This is supported by the fact that some Palaeozoic scorpions also possessed compound eyes similar to those of eurypterids. Accordingly, edge enhancement (lateral inhibition), organised by the eccentric cell and most useful in scattered-light conditions, may be a very old mechanism, while the single-lens system of arachnids is possibly an adaptation to a terrestrial life-style. Introduction Eurypterids, popularly known as sea scorpions, possess conspicuously large compound eyes. Indeed, Jaekelopterus rhenaniae (Jaekel, 1914) (Fig. 1a ) from the Early Devonian of Germany was perhaps the largest aquatic arthropod ever, with a body length approximating 2.5 meters 1 . A fully grown Jaekelopterus (Fig. 1a ) was a giant even when compared to other large arthropods such as the well-known Cambrian Anomalocaris (~1 m long 2 ), the meganeurid griffenflies (~75 cm wing span) of the Permo-Carboniferous 1 , 3 , 4 , or Hibbertopterus , another eurypterid, which was probably about 1.60 m long 5 . Only the Late Carboniferous giant millipede Arthropleura perhaps attained comparable proportions 6 . Figure 1 Examples of eurypterid and limulid morphology and comparison of Jaekelopterus lateral eyes to other arthropod compound eyes. ( a ) Reconstruction of Jaekelopterus rhenaniae (Jaekel, 1914) (modified from Braddy et al . 2008; drawing by S. Powell, with permission). ( b ) Tachypleus gigas (Müller, 1785), Limulida [© Subham Chatterjee/CC BY-SA 3.0 (via Wikimedia Commons)]. ( c–e ) Examples of specimens (c: GIK 186, d: GIK 188, e: GIK 190). ( f – h ) Compound eye of a hornet Vespa crabro germana Christ, 1791. ( i–k ) Compound eye of Carcinoscorpius rotundicauda (Latreille, 1802), Limulida, exuviae. ( l ) Compound eye of the trilobite Pricyclopyge binodosa (Salter, 1859), Ordovician, Šárka formation, Czech Republic. ( m ) Ommatidium of the trilobite Schmidtiellus reetae Bergström 1973, base of the Lower Cambrian, Lükati Fm., Estonia. ( n – p ) Cones (lens cylinders) of the internal side of the compound eye of C. rotundicauda (exuvia). ( q ) Impressions of the exocones in Jaekelopterus rhenaniae (Jaekel, 1914), (GIK 186a). ( r – t ) Exocones of J.
rhenaniae (GIK 186); note the regular arrangement compared to ( i ). ( u ) Schematic drawing of the ommatidium of an aquatic mandibulate (crustacean), and of the ommatidium of a limulid. ( w ) Schematic drawing of several ommatidia of a limulid. ( x ) Schematic drawing of cross-sections of ( v ), the limulid after 73 . ( y ) Visual unit of a limulid (redrawn and changed after 51 ). cc, crystalline cone; ec, eccentric cell; exc, exocone; l, lens; p, pigment; pc, pigment cells; rc, receptor cells; rh, rhabdom; su, sensory unit. Eurypterids appeared during the Middle Ordovician (Darriwilian, 467.3 Mya 7 ), but probably originated earlier, and having been already severely affected by the late Devonian environmental changes, they vanished before the end of the Permian (252 Mya 8 ), when up to 95% of all marine species died out 9 , 10 . Most eurypterids were predators, as indicated by morphological attributes such as gnathobasic coxae. Some species were equipped with spinose legs and/or highly effective grasping chelicerae or claws. In J. rhenaniae these claws reached an impressive size of 46 cm 1 . Forms lacking such specialized morphology were probably nectobenthic, walking on or swimming close to the sea floor. As opportunistic feeders, they dug in the mud as do modern horseshoe crabs, grabbing and shredding whatever they found to eat 11 . While earlier eurypterids were marine, among the later, younger forms some were living in brackish or even fresh water (e.g. 12 , 13 , 14 , 15 ). In addition to book gills comparable to those of modern Limulus , ancillary respiratory organs known as Kiemenplatten 16 may have allowed the animals to make short visits to the land 17 , but work currently in progress suggests that these may have aided gas exchange in aquatic settings 18 . Some eurypterids were ambush predators, e.g., the large pterygotid Acutiramus , based on analyses of the functional morphology of its chelicerae 19 and its visual capabilities 20 . On the other hand, the ecological requirements of eurypterids were apparently quite diverse, and results based on analysis of a particular taxon (or species) may not apply to other clades 21 , 22 . At least some eurypterids were effective predators, and most such predators are, and were, equipped with an excellent visual system. Not much, however, has so far been understood about eurypterid eyes. In systematic terms, Tollerton 23 defined nine different shapes of eurypterid eyes, ranging from small lunar-shaped examples to domed ovoid types, some of which even had a frontally overlapping visual field (e.g., J. rhenaniae ). The latter is typical for predators, allowing stereoscopic vision, which is imperative for the estimation of distances, volumes, etc. Assuming that there is an optimal trade-off in compound eyes working at threshold perception between maximal acuity (demanding a high number of facets) and a high sensitivity (requiring large lenses), it has been possible to assign different eurypterid species to their light-ecological environments 20 , 21 , 22 . The results show a transition in lateral eye structure in eurypterids as a whole, and furthermore reflect a niche differentiation in co-occurring (Rhenish) eurypterids, thus avoiding competition for food in their marginal to delta plain habitats 22 . Nothing, however, has been documented about the internal structure of the eurypterid visual system, which might tell us about function and phylogenetic context.
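The acuity-versus-sensitivity trade-off invoked above can be made concrete with the standard optical descriptors used in studies of fossil compound eyes (the general approach behind refs 20–22). The sketch below computes the interommatidial angle via the small-angle relation Δφ ≈ D/R and Snyder's eye parameter p = D·Δφ; it is a minimal illustration in Python, and the lens diameter and local eye radius used are hypothetical placeholders, not measurements from the specimens discussed here.

```python
import math

def interommatidial_angle_deg(lens_diam_um, eye_radius_um):
    """Delta-phi ~ D / R (small-angle approximation), in degrees.
    Smaller angles mean finer spatial sampling, i.e., higher acuity."""
    return math.degrees(lens_diam_um / eye_radius_um)

def eye_parameter_um_rad(lens_diam_um, eye_radius_um):
    """Snyder's eye parameter p = D * delta-phi (um * rad). Values near
    ~0.3 um*rad suit bright light; larger p trades acuity for sensitivity,
    as expected for dim or turbid water."""
    return lens_diam_um * (lens_diam_um / eye_radius_um)

# Hypothetical numbers for a large lateral eye: 70-um facets on a region
# of the eye with a local radius of curvature of 7 mm.
D, R = 70.0, 7000.0
print(f"delta-phi ~ {interommatidial_angle_deg(D, R):.2f} deg")    # ~0.57
print(f"eye parameter ~ {eye_parameter_um_rad(D, R):.2f} um*rad")  # ~0.70
```

Comparing such values across facet fields and species is the basis on which eurypterid eyes have been assigned to bright- versus dim-light habitats.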
Systematic Position of Eurypterida Burmeister, 1843 Soon after the first eurypterid fossils were described, they were regarded as close relatives of xiphosurans 24 and the two groups were united in the taxon Merostomata Dana, 1852 25 . Traditionally, the major arthropod clade Chelicerata Heymons, 1901 has been divided into the aquatic Merostomata (Xiphosura + Eurypterida) and the terrestrial Arachnida, but today, most arthropod workers consider Merostomata a paraphyletic grade of aquatic chelicerates 26 , 27 , 28 and the term is no longer used. On the other hand, eurypterids have long been considered as closely allied to arachnids 27 , 28 , 29 , and scorpions have been regarded in some analyses as the closest relatives of eurypterids 30 , 31 , 32 . For further morphological evidence of the close relationship between scorpions and eurypterids, a recent study of cuticle microstructure found similarities between scorpions, eurypterids, and horseshoe crabs 33 . Versluys and Demoll 34 emphasized the similar body segmentation in scorpions and eurypterids, and Dunlop & Selden 29 later pointed out that the 5-segmented postabdomen can be regarded as a synapomorphic trait shared by scorpions and eurypterids. Kamenz et al . 35 interpreted organs found in exceptionally well-preserved specimens of Eurypterus as structures equivalent to spermatophore-producing organs in the genital tracts of some modern arachnids. In concert with the supposed mating behaviour of eurypterids via transfer of a spermatophore 36 , these findings offered support for a Eurypterida + Arachnida clade, for which the name Sclerophorata has been proposed 34 . Following the Sclerophorata concept, Lamsdell 37 again supported the view that the eurypterids are more closely related to arachnids than to xiphosurans, which would necessitate their removal from Merostomata sensu Woodward 24 and ultimately renders the latter taxon a junior synonym of Xiphosura. However, the question of whether eurypterids are more closely allied to arachnids or to xiphosurans is still far from settled (see discussion in 11 ). Recent phylogenomic analyses of chelicerate relationships even strengthen the concept that the Arachnida are polyphyletic, and also a nested placement of Xiphosura within Arachnida ( 38 and references therein). Thus, understanding the internal structure and function of eurypterid eyes not only sheds light on the origins of chelicerate eyes, but may also offer further insights into the ecology, behaviour, and relationships of these extinct invertebrates. The Eyes of Mandibulates, Arachnids and Horseshoe Crabs The oldest compound eye known at present, is that of the lower Cambrian trilobite Schmidtiellus reetae Bergström, 1973 Figs. 1m , 2e ), which has a typical apposition compound eye (Fig. 1f–h,m,v,x ), not dissimilar to that of modern bees, dragonflies and many crustaceans 39 , and there is strong evidence for a common origin of insect and crustacean eyes 40 , 41 , 42 . These eyes consist of repeated identical visual units, the ommatidia, appearing externally as facets. Each ommatidium contains ~8 receptor cells, grouped around a central axis, the rhabdom. The rhabdom is part of the sensory cells, and contains the visual pigments. With light energy these are changed in their sterical configuration, and a small electrical signal is sent to the nervous system. 
The incident light is focused onto the tip of this rhabdom by a dioptric apparatus, consisting of a cuticular lens and a cellular crystalline cone, while in modern aquatic visual systems the latter typically takes over the function of refraction. All ommatidia are isolated optically from each other by screening pigment cells, and thus over the whole compound eye a mosaic-like image is formed. In the ancient system of the trilobite S. reetae , however, a lens and a crystalline cone are not very evident, and the sensory unit sits in a kind of basket, isolating the ommatidia against each other. Principally, among other factors, the acuity of vision depends on the number of facets, and the apposition eye is typically found in animals active when the light is bright 43 . Adaptations of such eyes to dimmer light conditions, systems such as superposition eyes, are not known before the Devonian 44 . Figure 2 The ommatidium of Jaekelopterus rhenaniae (Jaekel, 1914). ( a – d ) The specimen, where the first ommatidium was discovered (red arrow), d: brown: rhabdom with eccentric cell dendrite in the centre, yellow: receptor cells, black spots: possibly screening pigments. ( b ) shows the rosette of the ommatidium in crossection under different contrasts. Note the bright spot of the presumed eccentric cell in the centre of the rhabdom. ( c , d , f ) Rosette of ( a ) magnified, and its schematic drawings. ( e ) Ommatidium of the trilobite Schmidtiellus reetae Bergström 1973 and its homogenous rhabdom. ( g – j ) Cross sections of ommatidia of Limulus (quoted from 69 Figs. 2 and 3)), [white circle indicates the rosette, yellow circle the pigmental periphery, blue arrow the situation of the rhabdom, ( j ). ( k–o ) SEM of the compound eye of Jaekelopterus rhenaniae (Jaekel, 1914), total aspect ( k ) to one rosette ( m – o ) and individual rhabdom ( o ), [white circle indicates the rosette of receptor cells, yellow circle the pigmental periphery, yellow arrow the situation of the rhabdom, ( n )]. ( p–u ) Different sensory rosettes of J. rhenaniae (Jaekel, 1914) and their interpretative drawings. Note the bright patch in the centre of the rhabdom, comparable to ( a – d ), and ( g – j ). ( q–r ) In black and white to show contrasts different from ( s–u ) in colour, [white circle indicates the rosette of receptor cells, yellow circle the pigmental periphery, blue arrow the situation of the rhabdom]. p, q GIK 188; ( r – t ), w, u GIK 190; ec eccentric cell; exc exocone, erc, element of receptor cell forming the outer part of the rhabdom; p, pigment cells; rc, receptor cells; rh, rhabdom; su sensory unit. Blue arrows indicate the dark rhabdomeric ring with the relics of the presumed dendrite of the eccentric cell inside. Full size image Differences in the ontogenetic development of the compound eyes in xiphosurans and Pancrustacea/Tetraconata (crustaceans and hexapods including insects) exclude a common origin for both systems; they evolved convergently 45 . The xiphosurans such as Limulus are well known as descendants of a bradytelic lineage (evolving at a rate slower than the standard), exhibiting only little morphologic change since the Mesozoic 46 and with roots that can be traced back in time at least to the Early Ordovician 47 . Even, if horseshoe crabs have changed their ecological requirements numerous times in the course of their evolution 48 , the extant species still seem to retain an ancient visual system. Ommatidial facets of living xiphosurans (Figs. 
1b,i–k,n–u,v–y, 2g–j ), namely Limulus, are up to 320 microns 49 in diameter (among the largest known in the animal kingdom 49 ); they make the eye a superb light collector 49 and show characteristic differences from those of Tetraconata. The dioptric apparatus of the xiphosurans is formed by a cuticular lens cylinder 50 , while in Tetraconata a cuticular lens focuses the light through a cellular crystalline cone. In water, the latter takes over the function of refraction for physical reasons. The number of receptor cells per ommatidium in Limulus varies from 4 to 20 (usually 8 to 15) ( 51 , 52 ). As in many apposition compound eyes of tetraconates, the rhabdomeres of the receptor cells are fused in the centre. In xiphosurans the retinular cells are electrically coupled to one another, especially by the so-called eccentric cell, which sits at the base of the sensory unit. This configuration is not found in the tetraconate system. The rhabdom, which is built by microvillar arrays along the central part of the receptor cells, appears like an asterisk with a central ring occupied by the dendrite of the eccentric cell when viewed along the light path (ref. 52 , p. 112). The eccentric cell performs the first processing among the receptor cells, and, coupled to its neighbours, it contributes to processes such as lateral inhibition 53 , enhancing edges and boundaries in the perceived visual image. This is beneficial especially in the scattered light conditions of the water column. In contrast, the fused tetraconate rhabdom is uniform (Figs. 1m,x , 2e ). The visual units of both Tetraconata and xiphosurans are embraced by pigment cells, screening them against each other and isolating them optically. Thus the mode of the dioptric apparatus and the structure of the rhabdom are the crucial points here for characterizing the eurypterid ommatidium, and for confirming whether it is an ancient chelicerate system similar to that of xiphosurans, rather than a tetraconate system, or even of a totally different structure. In spiders and their terrestrial relatives (e.g., phalangids and extant scorpions) the eyes are not compound eyes but all are of the simple corneal type (ref. 54 , p. 527; for a detailed review see 55 ). True spiders (Araneae) usually have 6–8 eyes: the so-called principal eyes (main eyes), homologous to the mandibulate median eyes and looking forward, are adapted to view objects nearby, while the lateral eyes (secondary eyes) cover the more peripheral fields of view. Based on the rhabdomeric pattern of the retina, Paulus 56 proposed that the scorpion eye evolved from the facet eye by fusion of corneae, and interpreted the secondary eyes of Arachnida as dispersed and reduced former compound eyes (ref. 56 , p. 311–313). This idea has since been taken up and investigated with modern molecular methods by Morehouse 57 . Miether & Dunlop 55 documented compound lateral eyes of fossil scorpions and other arachnids. The Triassic (about 220 Mya) scorpions Mesophonus opisthophthalmus Wills, 1947 and Mesophonus perornatus Wills, 1910 bear at least 28 and 35 facets respectively; the Late Carboniferous (~309 Mya) scorpion Kronoscorpio danielsi Petrunkevitch, 1913 shows 25–29 individual facets. Also revealing is a specimen from the Lower Devonian Rhynie Chert of Scotland (~410 Mya, Palaeocharinus sp.), which belongs to the Trigonotarbida, an extinct group of spider-like arachnids possibly closely related to scorpions 58 .
It shows three larger lenses and a horizontal row of 10–11 smaller facets between them, a system which may indicate a transition from the compound eye to a single-lens eye. Thus the single-lens eyes of modern arachnids, including scorpions, appear to be highly derived systems; their ancestral structure, however, remains unclear. Arthropodan eyes of Cambrian age (541–485 Mya, Palaeozoic) are of the compound type 59 , 60 and were found e.g. in radiodonts and in great appendage or megacheiran arthropods. While in a single radiodont eye at least 16,000 hexagonally packed facets may be observed 59 , the pattern of the fossil scorpion eyes is much less regular and the individual facets far less numerous. The arrangement resembles that of modern xiphosurans. One may keep in mind that the optimal packing of numerous originally round elements is, as a matter of principle, a hexagonal arrangement, as evident in honeycombs for example. Thus a hexagonal pattern would be expected in high-resolution compound eyes with many facets, while for less acute compound eyes the pattern may become more or less irregular, as can be observed in almost all smaller compound or aggregate eyes. Thus the external pattern is not what is crucial for phylogenetic discussion, because it is constructed for functional reasons. From an evolutionary point of view, such a functional pattern can develop very quickly, as the now classic model by Nilsson & Pelger 61 shows. It is the internal structure, namely the organisation of the dioptric and sensory apparatus, that is relevant here, or the organisation of the optical lobes (for this see 60 ). In principle, under each facet of a compound eye any kind of structure might reside: any mode of ommatidium, a small retina, a single receptor cell or anything else. The phylogenetic context, however, gives certain constraints. Likewise, it has been discussed that the myriapod ocellar single-lens eyes arose by disintegration of former compound eyes 62 . Each unit consists of a big lens floored by up to several hundred cells 63 , 64 . Harzsch and colleagues showed that during eye growth in Myriapoda new elements are added to the side of the eye field, extending rows of previously generated optical units, which has been suggested to contradict the above assumption 64 . Only one group of Chilopoda, the Scutigeromorpha, shows hexagonal facets, and their ommatidia possess cone cells 65 , similar to those of Pancrustacea/Tetraconata (hexapods + crustaceans). Their compound eyes, however, must be considered secondarily reorganized lateral myriapodan ocelli 62 . Results To decide to which type the eyes of eurypterids belong, one has to consider two questions. The first concerns the structure of the dioptric apparatus. If the eye were constructed similarly to that of an aquatic tetraconate, as in marine crustaceans, one would expect a thin distal cuticle not functioning as a lens (cuticular lenses being typical just of terrestrial arthropods), and instead a distinct crystalline cone forming the dioptric apparatus. In contrast, in the xiphosuran compound eye, less ordered than a typical tetraconate compound eye, the dioptric apparatus consists of cuticular, cone-shaped lens cylinders (exocones), which form a pattern of separated domes, visible if the cuticle is removed from the receptors (Fig. 1n–p ). Exocones in eurypterid eyes Figure 1r–u clearly shows the relicts of a eurypterid visual surface, appearing much more regular than that of a xiphosuran (Fig. 1j,k,n–p ).
The units are regularly arranged in a squared pattern. Specimen GIK 186 clearly shows the lower surface of the cuticle covering the eye (Fig. 1c,r–u ), exposing clear exocones, very similar to those of xiphosurans (Fig. 1n–p ). In the detached part of the fossil they are indicated by tapering cavities (GIK 186a, Fig. 1q ). These exocones, like their negatives, remain quite separated from each other, rather than being densely packed as are the facets of a honey bee, hornet, a dragon fly or of many trilobites (Fig. 1f–h,l,m ). Our results clearly show that the typical shape of exocones in the eurypterid eye is similar to those of Limulus (Fig. 1n–u ) and these exocones may have functioned in the same way as lens cylinders. Mode of preservation and receptor system The second question that has to be considered is the construction of the receptor system. This is a much more difficult problem, because the relics of soft (labile) tissues, such as nervous or sensory cells, are very rarely found in the fossil record, and then only as a result of special kinds of preservation, such as phosphatisation, or at particular Konservat-Lagerstätten sensu Seilacher et al . 66 . Such exceptional preservation of soft tissue has not hitherto been known from eurypterid fossils. The grain size of the host sediment, in most cases a clayey siltstone, lies between silt (0.002–0.063 mm) and clay (<0.002 mm), in the range that would preserve the traces of cellular structures very well because the size of the receptor cells is larger than the grain-size. There is another adverse effect: In many eurypterid fossils no facets can be observed, the eyes appear as covered by relicts of a pellucid membrane, probably the cuticle 21 . Thirdly, most eurypterid fossils are moults 13 and it is only in dead individuals (carcasses) that preservation of the internal structure of fossil eyes can be expected. Sublensar rosettes Only a few examples of microstructures of visual systems in fossils have been reported so far 39 , 67 , 68 , 69 . Thus the following analysis should be understood as just a first attempt to explore the sensory concept and likely in consequence the relationship of the eurypterids to other arthropod groups based on the comparison of visual systems. This may provide insights into the evolution of compound eye systems in general. All specimens illustrated here (Fig. 1c–e ) reveal a typical pattern of an ommatidium in cross section, which results from an overhead view, when the overlying parts of the eye have been lost during the course of time. In most of these a zone of former pigment (cells) can be distinguished round the periphery, sometimes even having a dark appearance, possibly due to relicts of the carbon formerly contained in the pigment cells. More centrally traces of sensory cells can be made out, each arranged like a rosette and of different number among the systems (5–9 cells, Fig. 2a–d,f,k–u ). In diameter of ~70 µm the units are very similar in size to the receptor cells of adult Limulus (~70 µm 70 , 71 ). The most important part, the centre, repeats the pattern mentioned above in describing the Limulus rhabdomeric structure. Though the fine individual microvillar structures themselves have not been preserved, we find a comparable wide dark ring with a more or less circular bright structure in the centre. It is quite remarkable to find such a central structure in a fossilised rhabdom, but very clearly it is evident in Fig. 2a–f,o,p–r , and indicated as a central dot inside of the rhabdom in Fig. 2s–u . 
There is a good case for assuming (see below) that the dark ring represents the relics of the rhabdomeric/microvillar arrays, and the bright centre the dendrite of an eccentric cell. This is a recurrent pattern in all examples we found, illustrated in Fig. 2 . It strongly suggests that the eurypterid system is of the xiphosuran type. Discussion The results above clearly show that in terms of their ordering, their lens cylinders (exocones) and the internal structure of the underlying visual unit (variable number of receptor cells, very probably the existence of an eccentric cell as an element of the ommatidium), the eurypterid eyes are almost identical to xiphosuran compound eyes. While in highly resolving mandibulate apposition compound eyes the facets are normally ordered in a hexagonal pattern, in Limulus the ommatidia are irregularly positioned 50 . The eurypterids investigated here show a regular, squared pattern of the facets. Squared arrangements of lenses are typical for superposition eyes of mandibulates, namely among decapod crustaceans 54 . To find a squared pattern in J. rhenaniae indicates an autonomous character, perhaps suggesting a specialised visual system. There is an important issue to discuss at this juncture: the relatively enormous size of the receptors in the compound eyes of Jaekelopterus rhenaniae. Figure 2m–u shows the conspicuously large relics of the eurypterid receptor cells, which reach sizes of ~70 µm. In human retinas, receptor size lies at the limit of what is physically possible, at about 1 µm; the lower limit of arthropod receptor diameters consequently also lies at 1–2 µm, while the diameter of normal ommatidia as a whole reaches 5–50 µm 72 . The diameter of rhabdoms often lies between 1.5 and 3.5 µm 43 ; consequently, a normal receptor cell is around 10 µm or smaller. The photoreceptors of Limulus, however, are among the largest in the animal kingdom 73 . In Limulus, the benchmark needed here, the rhabdom diameter lies at about 60 µm 69 , equal to that found in J. rhenaniae (~60 µm, Fig. 2o ), and the diameter of the entire ommatidium, at 170–250 µm 73 , 74 , is identical to that of our material (~250 µm, all examples of Fig. 2 ). Thus, the sizes of the ommatidia of J. rhenaniae match those of Limulus exactly. Consequently, the size of these photoreceptors is ~70 times that of human or common arthropod receptors. At this point some further references may be cited to support this important issue. Firstly, the classic reference on the morphology of the Limulus compound eye is that of Fahrenbach 1969, p. 252 70 . He describes how the body of the ommatidium is formed by primary receptor cells, 70 µm in diameter, constituting the retinula. They are grouped like orange slices about the central tapering dendrite of the secondary receptor, the eccentric cell. The eccentric cell, a modified receptor cell, also measures about 70 µm. Fahrenbach notes that the number of retinula cells per ommatidium averages between 10 and 13 for different individuals, the observed range covering 4 to 20 cells ( 70 , p. 252); in the eurypterid we find 5 to 9 cells. These descriptions of the Limulus eye were confirmed by other authors, such as Battelle 2016, p. 810 75 (photoreceptor cells about 150 µm long and 70 µm wide in adults), and furthermore by ref. 74 , p. 261, ref. 76 , p. 416, Fig. 2 , and many others.
The enormous size of the Limulus photoreceptors gave rise to the first electrophysiological intracellular recordings in the nineteen-sixties and -seventies (e.g. 77 , 78 , and other research), discovering the ionic mechanisms of transduction and the discrete waves (quantum bumps) produced as responses to single photons ( 79 , 80 , 81 , 82 , and others). As a whole, these investigations, based on the exceptionally large photoreceptors of Limulus, established an important part of modern electrophysiology. Since Exner in 1891 50 described the fascinating physical properties of the lens cylinders in the compound eye of the xiphosuran Limulus, this system has been the subject of intense research, morphologically and physically (for an overview see 52 , 83 , 84 ). It is now evident that this eye is very different from any tetraconate compound eye, such as that of a bee or dragonfly, not just in its dioptric apparatus but in its entire morphology. In the eye of the horseshoe crab the receptor cells are coupled directly in the rhabdom, namely by the central dendrite of the eccentric cell. The function of the eccentric cells, probably among others, is to enhance contrasts and edge perception, especially important for an organism active in low-light conditions and in the scattered light conditions of the water column. The similarly structured rhabdom in the eurypterid Jaekelopterus rhenaniae investigated here allows us to assume that the same function already existed at least since the Lower Devonian. The earliest horseshoe crab fossils come from the Early Ordovician (more than 470 Mya old) 47 , and the family Limulidae, comprising all extant species, dates back to the Triassic, about 240 Mya 85 . Today there exist four species in three genera. Many of the common characteristics of eurypterids and xiphosurans have been considered symplesiomorphic, or convergent due to a shared aquatic lifestyle. As mentioned before, the eyes of modern arachnids, however, are not compound eyes but single-lens eyes, which probably developed by partition of compound eyes and fusion of their ommatidia 55 , 56 , 57 . They must be understood as highly derived systems, while their ancestral structure, for example the existence of eccentric cells, can no longer be perceived. As mentioned above, Palaeozoic and Mesozoic scorpions also possessed compound eyes, in outer appearance not dissimilar to eurypterid eyes 55 . Very probably, according to the results presented here, their internal compound eye structures may also have been similar. Most recently, Fleming et al. 86 analysed a large-scale data set of ecdysozoan opsins, comparing this with morphological analyses of key Cambrian fossils with preserved eye structures. They found that multi-opsin vision evolved over a period of 35–71 million years through a series of gene duplications. They show that while Chelicerata have 4 opsins, Pancrustacea (crustaceans and hexapods) already possess 5 85 . Consequently, this may indicate an earlier origin of the chelicerate visual system, because developing the fifth opsin takes time (the principle of genetic clocks). Bitsch and Bitsch 87 , referring to Fahrenbach 51 , 70 , 88 , 89 , considered the eccentric cell to be autapomorphic to xiphosurans (ref. 87 , p. 188). If, however, the eurypterids possessed an eccentric cell as well, this structure may be plesiomorphic to Chelicerata. Fahrenbach described double eccentric cells as not uncommon in Limulus (ref. 70 , p. 452).
A single eccentric cell, however, is typical, and a further reduction during evolution, in the context of changing ecological constraints and along phylogeny, can easily be imagined. This element may not belong to the oldest form of any compound eye; its original form and occurrence still lie in the dark. Functionally, with several thousands of facets (>3545 facets per eye and >747.235 mm² eye area) 22 , and probably being equipped with a contrast-enhancing neuronal system due to the eccentric cells, the giant sea scorpion Jaekelopterus rhenaniae from the Early Devonian of Germany had a very effective visual capacity for high acuity and sensitivity, excellently adapted to an efficient, visually guided, aquatic predatory lifestyle. In summary, our results suggest a close similarity between the xiphosuran and the eurypterid eye, confirming the basal phylogenetic connection between both forms and lending support to a close relationship as indicated by some phylogenetic analyses (e.g. 90 ). Convergent eye evolution, generating more or less identical sophisticated visual systems in xiphosurans and eurypterids as found here, seems rather improbable. This means that the visual system equipped with lens cylinders and an eccentric cell, with all its functions, is very basal. If so, this furthermore implies that the Palaeozoic arachnid visual system also possessed an eccentric cell. This primordial arachnid eye may then have given rise to the eyes of modern arachnids in the course of terrestrialisation. This is also supported by the notion that the oldest known arachnids, the Silurian scorpions, had compound lateral eyes 58 , 91 , 92 , 93 , while the lateral eyes of extant crown-group scorpions consist of two to five pairs of lateral lenses 86 . Perhaps in relation to a ground-dwelling terrestrial lifestyle, extant arachnids mostly seem to rely on non-visual sensory systems such as mechanoreceptors (trichobothria, pectines and highly sensitive setae) or chemosensory systems, which seem to be metabolically much less expensive than visual systems 94 , 95 ; so eyes may become reduced or disappear. One may remember here that many of the arthropods appearing with the Cambrian explosion were benthic, requiring a visual system quite sensitive to light and taking advantage of any contrast-enhancing mechanism such as lateral inhibition, which is tied to the eccentric cells. Even though evidence of the lens cylinders in the eurypterid eye, and strong indications of the existence of eccentric cells, represented by their dendrites in the centres of the rhabdoms, is given here, future findings of better-preserved material may confirm these findings by the use of synchrotron radiation or computed tomography. Appropriate material is, to our knowledge, still lacking; thus the application of these methods is not currently possible, and perhaps never will be, so the evidence given here may remain all that is attainable at present. Material and Methods The macrophotographs of Figs. 1c–l,n–u , 2q–u were taken with a Keyence digital microscope (VHX-700F, objective VH-Z20T) at the Institute of Biology Education (Zoology), University of Cologne. The specimens were not submerged in isopropanol, in order to retain the contrasts of the edges. The pictures were processed and arranged into figures with Adobe Photoshop CS3 and Adobe Photoshop Elements. The macrophotographs of Fig. 2a–d were taken with the specimens submerged in isopropanol using a Canon EOS 600D SLR camera equipped with a Canon MP-E 65 mm macro lens.
Free image stacking software (CombineZM by Alan Hadley) was then employed to produce composites with enhanced depth of field using photographs with differing focal planes. These were processed and arranged into figures using Adobe Photoshop CS3. The SEM photographs were taken at the Biocentre Cologne (FEI Quanta FEG 250, Thermo Fisher Scientific). The eurypterid eyes figured in this contribution are stored in the Geological Institute of the University of Cologne, leg. E. Evangelou (Figs. 1c–e,q–u , 2 ). They were collected from Siegenian (possibly lowermost Emsian in terms of the international stratigraphic frame) strata of the Jaeger Quarry near Frielingshausen/Bergisches Land, Germany. | The eyes of the extinct sea scorpion Jaekelopterus rhenaniae have the same structure as the eyes of modern horseshoe crabs (Limulidae). The compound eyes of the giant predator exhibited lens cylinders and concentrically organized sensory cells enclosing the end of a highly specialized cell. This is the result of research that Dr. Brigitte Schoenemann, professor of zoology at the Institute of Biology Didactics at the University of Cologne, conducted with an electron microscope. Cooperation partners in the project were Dr. Markus Poschmann from the Directorate General of Cultural Heritage RLP, Directorate of Regional Archaeology/Earth History, and Professor Euan N.K. Clarkson from the University of Edinburgh. The results of the study "Insights into the 400 million-year-old eyes of giant sea scorpions (Eurypterida) suggest the structure of Palaeozoic compound eyes" have been published in the journal Scientific Reports. The eyes of modern horseshoe crabs consist of individual units, so-called ommatidia. Unlike, for example, insects that have compound eyes with a simple lens, the ommatidia of horseshoe crabs are equipped with a lens cylinder that continuously refracts light and transmits it to the sensory cells. These sensory cells are grouped in the form of a rosette around a central light conductor, the rhabdom, which is part of the sensory cells and converts light signals into nerve signals to transmit them to the central nervous system. At the centre of this 'light transmitter' in horseshoe crabs is a highly specialized cell end, which can connect the signals of neighbouring ommatidia in such a way that the crab perceives contours more clearly. This can be particularly useful in conditions of low visibility under water. In the cross-section of the ommatidium, it is possible to identify the end of this specialized cell as a bright point in the centre of the rhabdom. Brigitte Schoenemann used electron microscopes to examine fossil Jaekelopterus rhenaniae specimens to find out whether the compound eyes of the giant scorpion and the related horseshoe crabs are similar, or whether they are more similar to insect or crustacean eyes. She found the same structures as in horseshoe crabs. Lens cylinders, sensory cells and even the highly specialized cells were clearly discernible. "This bright spot belongs to a special cell that only occurs in horseshoe crabs today, but apparently already existed in Eurypterida," explained Schoenemann. "The structures of the systems are identical. It follows that very probably this sort of contrast enhancement already evolved more than 400 million years ago," she added. Jaekelopterus most likely hunted placoderms. Here, its visual apparatus was clearly an advantage in the murky seawater.
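The contrast enhancement attributed above to the eccentric cells can be illustrated with a toy one-dimensional model of recurrent lateral inhibition, loosely in the spirit of the classic Hartline-Ratliff description of the Limulus retina. The stimulus values and coupling weight below are invented for demonstration, not fitted to any recordings.

```python
import numpy as np

# Toy 1-D retina viewing a dark/bright edge: each "ommatidium" is
# inhibited by its immediate neighbours (recurrent lateral inhibition).
stimulus = np.array([1.0] * 10 + [5.0] * 10)   # dark field -> bright field
k = 0.15                                        # hypothetical coupling weight

response = stimulus.copy()
for _ in range(50):                             # iterate to steady state
    neighbours = np.zeros_like(response)
    neighbours[1:] += response[:-1]             # inhibition from the left
    neighbours[:-1] += response[1:]             # inhibition from the right
    response = np.maximum(stimulus - k * neighbours, 0.0)

# The units just inside the edge undershoot (dark side) and overshoot
# (bright side): Mach-band-like enhancement of the boundary.
print(np.round(response, 2))
```

Running the loop shows uniform responses well inside each field, but a pronounced dip and peak flanking the edge, which is exactly the kind of boundary sharpening that would benefit a predator in scattered underwater light.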
Sea scorpions, which first appeared 470 million years ago, died out about 250 million years ago at the end of the Permian age, along with about 95 percent of all marine life. Some species were large oceanic predators, such as Jaekelopterus rhenaniae. It reached a length of 2.5 meters and belonged to the Eurypterida, the extinct relatives of the horseshoe crab. Eurypterids are arthropods which belong to the subphylum Chelicerata and are therefore related to spiders and scorpions. Among the arthropods there are two large groups: mandibulates (crustaceans, insects, trilobites) and chelicerates (arachnid animals such as sea scorpions). In recent years, Schoenemann has been able to clarify the eye structures of various trilobite species and to make decisive contributions to research into the evolution of the compound eye. "Until recently, scientists thought that soft tissues do not fossilize. Hence these parts of specimens were not examined until not so long ago," she concluded. The new findings on the eye of the sea scorpion are important not only for understanding the evolution of the compound eyes of chelicerates, but also for determining the position of sea scorpions in the pedigree of these animals and for comparison with the eyes of the related group of mandibulates. | 10.1038/s41598-019-53590-8
Biology | First comprehensive network of wild crop species will help breeders tackle food insecurity | Holly Vincent et al, Modeling of crop wild relative species identifies areas globally for in situ conservation, Communications Biology (2019). DOI: 10.1038/s42003-019-0372-z | http://dx.doi.org/10.1038/s42003-019-0372-z | https://phys.org/news/2019-05-comprehensive-network-wild-crop-species.html | Abstract The impact of climate change is causing challenges for the agricultural production and food systems. More nutritious and climate resilient crop varieties are required, but lack of available and accessible trait diversity is limiting crop improvement. Crop wild relatives (CWR) are the wild cousins of cultivated crops and a vast resource of genetic diversity for breeding new, higher yielding, climate change tolerant crop varieties, but they are under-conserved (particularly in situ), largely unavailable and therefore underutilized. Here we apply species distribution modelling, climate change projections and geographic analyses to 1261 CWR species from 167 major crop genepools to explore key geographical areas for CWR in situ conservation worldwide. We identify 150 sites where 65.7% of the CWR species identified can be conserved for future use. Introduction Ensuring global food security now and for the future is one of the greatest challenges of our time. One in nine people worldwide suffer from chronic hunger 1 , and with the human population projected to rise to 9.7 billion by 2050—meaning an extra 2.2 billion mouths to feed 2 , the pressure on food production systems is likely to increase dramatically 3 . Developing new crop varieties able to withstand climatic extremes, endure altered or increased exposure to pests and diseases, and be more resource efficient requires access to as broad a range as possible of plant genetic resources, and a far greater range than exists today 4 . Crop wild relatives (CWR), the wild and weedy plants closely related to cultivated crops, are a rich source of novel genetic diversity for crop breeding 5 . Despite their value for food and agriculture, globally CWR are poorly represented ex situ in gene banks 6 , although systematic effort to improve ex situ coverage has begun 7 . Further, only a handful of genetic reserves for active in situ conservation exist 8 , despite the generally accepted requirement for complementary conservation 9 and the particular need to develop CWR in situ activities that enable the conservation of geographical partitioned genetic diversity which retains potential for local environmental-evolutionary adaptation 10 . Furthermore, existing in situ reserves do not meet the required management standards to maintain CWR populations and their genetic diversity long-term 11 . The most effective means of systematically ensuring in situ CWR conservation would be to establish a global network of in situ populations actively managed to maintain genetic diversity 12 . Here, we tackle the in situ CWR conservation deficit and, to our knowledge, for the first time address which sites and CWR populations might most effectively form the foundation for a global network of reserves for priority CWR in situ conservation. 
The selection of such sites and CWR populations needs to consider climate change resilience, maximize potential CWR taxonomic and genetic diversity inclusion and, where feasible, use the existing global network of protected areas, to avoid the expensive establishment of new reserve sites and minimize the impact of human habitat modification associated, for example, with agriculture, forestry and urbanization. Addressing these challenges will contribute to achieving globally agreed goals on biodiversity and sustainable development. CWR are explicitly mentioned in Target 13 of the Convention on Biological Diversity’s Aichi Targets 13 and UN Sustainable Development Goal 2 – Ending Hunger, Target 2.5 “maintain genetic diversity of… cultivated plants... and their related wild species…” 14 . Results Modeling global CWR richness We identified a total of 1425 CWR species, related to 167 crops, as priority CWR for improving food security and income generation (supplementary data 1 ). Some CWR species belong to more than one crop genepool; for example, Brassica cretica Lam. belongs to the secondary genepool of both kale and oilseed rape. A total of 164 species (11.5% of the 1425) had no occurrence records, leaving 1261 CWR species related to 167 crops to analyze. In total, we gathered 136,576 CWR occurrence records with unique coordinates. We modelled the distributions of 791 CWR using MaxEnt, but 67 of these models did not meet our model adequacy criteria. We therefore produced a circular buffer of 50 km around occurrence records for such cases and for the remaining 537 CWR that had too few (fewer than 10) occurrence records to produce an adequate distribution model. Current CWR distributions are predicted to occur across most of the temperate, tropical and subtropical regions (excluding polar and extreme arid areas) (Fig. 1 ). CWR species are concentrated in the Mediterranean basin, previously identified as a global hotspot, with the highest concentration globally predicted to occur in a single 100 km² cell on the northeast Lebanese/Syrian border 15 . Other areas of species richness include the Caucasus, Indochina, the eastern USA, the western coast of the USA, the Andes and central and eastern South America, confirming previous species richness patterns 6 . Regions of high CWR species richness are largely coincident with areas of biodiversity richness 16 , particularly in Indochina, the western coastal USA, the Andes and the Mediterranean. Fig. 1 CWR species richness map. This map shows the overlapping distributions of 1261 species related to 167 crops in the world. Orange to red colours indicate high CWR species overlap, while blue to green colours indicate low overlap of CWR Full size image Modeling in situ gap analysis Table 1 summarizes the in situ gap analysis results for each crop genepool, summarized by crop type 17 . Numbers of CWR species per crop type ranged from 15 for citrus fruits to 264 for root, bulb, or tuberous vegetables, a type which includes crops with large genepools, such as potato and cassava. The numbers of CWR projected to lose 50% or more of their current ranges by 2070 under climate change were totaled for each crop type; the root, bulb, or tuberous vegetables have the most CWR facing potential substantial distribution loss, with 20 CWR facing over 50% current range loss, followed by cereals with 19 and leguminous crops with 17 CWR. No modelled CWR from grape crops or citrus fruits were found to lose more than 50% of their current distribution.
Of the CWR that are set to lose more than 50% of their current potential distribution, those of spice crops are the most vulnerable, with 26.7% of all modelled CWR losing distribution by 2070, followed by sugar CWR (14.3%), cereal CWR (13.7%) and beverage CWR (13.6%). Under the consolidated crop types, CWR are not well covered by the existing global protected area network, with grape CWR having the least coverage at 14.7% and CWR of leafy or stem vegetables having the most protected area coverage at 32.8% on average (Table 1 ). However, the results for loss of current distribution by 2070 show that most crops will be impacted by climate change, losing ~20% of current protected area coverage on average per CWR. The crops least affected appear to be citrus fruits, with only 4.6% loss, and the most affected the sugar crops, with 31.4%. Table 1 Consolidated in situ gap analysis results for different crop types Full size table The current proportion of potential CWR genetic diversity based on Ecogeographic Land Characterization (ELC) diversity within the existing protected areas was recorded for each species, then summarized under each crop type. Figure 2 highlights the average proportion of potential CWR genetic diversity covered by existing protected areas and the predicted losses of genetic diversity within these areas under projected climatic changes in 2070. CWR of all crop types have at least 70% of averaged potential CWR genetic diversity within the existing protected areas, with the highest being 91.9% for berries and the lowest 70.7% for the other crops category 17 . In terms of predicted loss of genetic diversity in protected areas, berries and spice crops are expected to experience the least loss, with only a 6.5% reduction of genetic diversity, whilst other crops are expected to lose 31.2% of genetic diversity within protected areas, followed by fruit-bearing vegetables at 19.8% and leguminous crops at 19.5%. Fig. 2 Current and projected loss of potential genetic diversity in protected areas for CWR grouped by crop type. Blue bars indicate average current coverage of genetic diversity per CWR in protected areas and magenta bars indicate average loss of genetic diversity per CWR in protected areas in 2070 Full size image Individual CWR in general were found to be well represented in current protected areas; only 35 (2.5%) of the studied species, related to 28 crops, were distributed exclusively outside of protected areas (supplementary data 1 ). These included seven CWR from primary genepools, such as wild Pennisetum glaucum (L.) R.Br., related to pearl millet; Prunus argentea (Lam.) Rehder, related to almond; and Prunus sibirica L., related to apricot. The top five CWR found to have the highest proportion of distribution in protected areas were: Coffea costatifructa Bridson, related to coffee; Ficus glareosa Elmer, related to fig; Manihot alutacea D.J. Rogers & Appan, related to cassava; and Beta patula Aiton and Beta nana Boiss. & Heldr., both related to beet. If a threshold of 50% or more of CWR genetic diversity within protected areas is considered adequate for genetic conservation, then 112 of the assessed CWR are under-conserved and 91% of CWR are well represented by existing protected areas.
However, this existing in situ conservation is likely to be passive, meaning that CWR populations currently located in protected areas are not being actively managed and monitored to maintain their diversity; more active conservation is recommended for these populations to ensure their genetic diversity is conserved 18 . In terms of future climate projections, only two of the 724 modelled CWR species, Vicia hyaeniscyamus Mouterde and Zea perennis (Hitchc.) Reeves & Mangelsd., are likely to lose all their current predicted distribution by 2070. However, a further 83 species are predicted to lose more than 50% of their current range by 2070, and Arachis batizocoi Krapov. & W. C. Greg., Arachis appressipila Krapov. & W. C. Greg., Manihot gabrielensis Allem, Vigna keraudrenii Du Puy & Labat and Oryza nivara S. D. Sharma & Shastry are all predicted to lose over 80% of their current potential distribution. Regarding potential CWR genetic diversity in 2070 based on ELC zonation, 15 CWR are projected to lose over 50% of their current genetic diversity by 2070 through distribution loss, and 39 CWR are expected to lose over 50% of the genetic diversity that is currently passively conserved in protected areas. Further details on the in situ gap analysis results for individual CWR are available (supplementary data 1 ). When identifying sites for global in situ conservation of CWR, we found 150 top-priority sites (Fig. 3 ), covering 2000 km² worldwide, where 829 CWR species related to 157 crops can be systematically conserved in situ. This analysis used both adaptive and pragmatic scenarios: the adaptive scenario resulted from individual CWR ELC analyses, based on the native country range of each CWR clustered using non-collinear edaphic, geophysical and climatic sets of variables; the pragmatic scenario used the same approach but prioritized sites containing protected areas (to maximize use of existing protection) and, in a complementary fashion, considered additional sites outside protected areas covering CWR/adaptive scenario combinations not represented within protected areas. The top 10 sites listed in Table 2 contain a combined total of 270 unique CWR (21.4% of assessed CWR) and 726 CWR/adaptive scenario combinations (5.1% of all genetic diversity), all contained within protected areas. Five of the top 10 sites are found in the Mediterranean basin and mainland Europe (in Spain, Greece, Italy, Austria and Turkey); additionally, two sites are in East Asia (in China and Myanmar), one in Southeast Asia (in Malaysia), one in North America (in the USA) and one in South America (in Brazil). The protected areas that overlap the top 10 sites in Fig. 3 cover a range of regional and global designations, including: Special Protection Areas under the European Union's Birds Directive (Spain); Scenic Areas, IUCN Management Category VI (China); Provincial/Regional Nature Reserves, IUCN Category V (Italy); Sites of Community Importance under the European Union's Habitats Directive (Greece); World Heritage Sites (China); and Indigenous Areas (Brazil).
The top 10 sites outside protected areas listed in Table 3 complement the 100 sites in protected areas selected in the pragmatic scenario, and contain a combined total of 283 unique CWR (22.4% of total assessed CWR) and 836 CWR/adaptive scenario combinations (5.8% of total genetic diversity) from 106 crop gene pools; however, they add only 205 species (16.3% of assessed CWR) and 531 CWR/adaptive scenario combinations (3.7% of total genetic diversity) from 89 crop gene pools to the existing 100 sites in protected areas. Five of the sites listed in Table 3 are in the Fertile Crescent and Caucasus region; additionally, two are found in Central and North America, one in South America, one in Spain and one in Afghanistan. Effectively conserving the top 10 sites inside protected areas and the top 10 sites outside protected areas defined in the pragmatic scenario would only require active management of ~2000 km² globally and would protect 475 CWR species and 1257 unique CWR/adaptive scenario combinations. Meanwhile, only 0.01% of the world's total terrestrial area would be required to conserve the top 150 sites presented. Fig. 3 Top 150 global sites for CWR in situ conservation under the pragmatic scenario, with the inset map showing the priority sites in the Fertile Crescent and Caucasus. The top 10 sites within existing protected areas are shown as magenta triangles and the remaining 90 priority sites within protected areas as blue triangles; the top 10 sites outside of existing protected areas are shown as yellow circles, with the remaining 40 priority sites outside of protected areas as turquoise circles Full size image Table 2 Details of the top 10 CWR sites inside existing protected areas in the pragmatic implementation scenario Full size table Table 3 Details of the top 10 CWR sites outside of protected areas in the pragmatic implementation scenario Full size table Discussion Our results identify 150 sites covering ~2000 km² worldwide where 829 CWR species related to 157 crops can be systematically conserved in situ. One hundred of these sites are in current protected areas, so theoretically are under some form of existing conservation management, though that management is unlikely to be focused on the genetic conservation of the CWR populations present. The active in situ management of these 150 sites to maintain genetic diversity, and their incorporation into a global CWR in situ conservation network, would substantially improve the genetic breadth of CWR conservation and substantially contribute to global food security and poverty alleviation, as required by international conservation policy 8 , 12 . The approach presented here prioritizes sites for conserving multiple CWR within existing protected areas to optimize overall cost/benefit. Protected area management can be adapted for CWR genetic in situ conservation, but the current global network has gaps, and 35 CWR occur solely outside protected areas, such as Vicia hyaeniscyamus found on the Lebanese/Syrian border. Action that can contribute to in situ CWR conservation, including establishing new protected areas or less formal in situ management sites, is required. Unlike existing protected areas managed to preserve unique habitats or rare and threatened taxa, these would be genetic reserves, where the goal is to maintain or enhance the genetic diversity of the priority CWR, rather than species presence alone irrespective of levels of intraspecific genetic diversity.
If establishing genetic reserves in existing protected areas, existing site/population management plans would require amendment and/or preparation to specifically address the requirement to maximize genetic diversity maintenance 19 . Other less formal in situ conservation approaches may also be employed, such as conservation easements, that establish voluntary agreements between conservation agencies and landowners restricting or limiting the development of a site 20 , 21 . This approach could aid the conservation of many CWR, particularly ruderal legumes and grasses, which are often found in disturbed habitats, such as roadside verges, and are often absent from conventional protected areas established to preserve pristine environments. Our results indicate that the predicted impacts of climate change on CWR distributions vary widely among CWR, even within crop gene pools; therefore, it is important conservation strategies are adapted to individual taxon requirements. CWR such as Zea perennis and Vicia hyaeniscyamus , which are predicted to lose 100% of their current distribution by 2070, and Arachis appressipila , Vigna keraudrenii and Manihot gabrielensis , which are predicted to lose over 50% of their existing range by 2070, should be prioritized for ex situ conservation or even introduction to climate-proofed sites, such as those identified in our analysis. Further work is required to analyze the level of fragmentation CWR distributions are likely to face in the future as this would affect in situ conservation requirements and increase the need to plan for corridors or stepping stones between established reserves to promote migration and to maintain gene flow between populations. It should also be remembered, as recommended in the CWR genetic reserve quality standards 11 , that all CWR populations conserved in situ should be backed-up in ex situ collections, particularly given that until the global CWR in situ network is established, most CWR users will gain access to diversity via gene bank collections. The methodology applied in this analysis is innovative, prioritizing sites found within existing protected areas, maximizing overall environmental (and thus potential genetic) diversity coverage and long-term site viability in view of climate change. Genetic diversity data per CWR in the analysis was estimated using environmental diversity as a proxy. Further study and experiments are required to test whether this approach is truly appropriate for such a wide range of taxa. It is noted that the increasing power and decreasing costs of direct measures of genetic diversity will be useful in refining conservation priorities 4 , but it seems unlikely such techniques will be practical in the near term for planning the conservation of as many as 829 CWR taxa throughout their global range. Incorporating actual genetic diversity and characterization data for individual occurrences into conservation planning on such a scale may be possible eventually, but in the meantime the approach taken here offers a practical methodology that can be applied widely in wild plant species conservation planning. The occurrence dataset used in this analysis highlighted that many CWR are poorly represented in gene banks and herbaria and occurrence databases worldwide, with 164 CWR having no occurrence records at all and a further 470 CWR having fewer than 10; this reinforces the recommendations for increased CWR surveying and conservation 5 . 
The process of targeted global ex situ CWR collection, which has begun 7 , will itself generate additional data for subsequent in situ CWR conservation planning, especially for rare and under-collected taxa. Some crop genepools are particularly underrepresented, such as citrus fruits, tea and tropical and sub-tropical fruit-bearing trees, possibly due to unresolved taxonomy in the case of Citrus, or difficulty in collecting and conserving recalcitrant seeds in the case of tropical fruit trees. It may also be advisable in subsequent in situ gap analyses to weight target taxa, using relative gene pool level 22 or the IUCN Red List assessment 23 of the CWR species, to ensure those taxa easiest to cross with the crop or most threatened in the wild are prioritized during analysis. Our results identify 150 sites where active in situ management might be most effective in maximizing CWR species and intra-CWR genetic diversity inclusion and offer the highest chance of persistence under climate change. However, these are not the only sites worthy of active in situ management globally. Here, the basis of the analysis was the priority list of CWR taxa related to 167 global major crops 24 . Restricting the analysis to global major crops highlights Europe and the Middle East as the main centre of diversity, but the CWR diversity of minor crops of regional or local importance not included in this analysis also requires active in situ maintenance. Therefore, analysis of these minor crops of regional or local importance and their CWR diversity is required to identify additional complementary sites to add to the 150 identified here. Finally, we propose that the CWR in situ conservation sites, along with additional CWR sites of minor-crop importance, can most efficiently be managed as a global CWR in situ conservation network. Such a network would coordinate practical in situ conservation management, foster stronger partnerships at national, regional and global levels, demonstrate benefits that directly support the ultimate custodians of agrobiodiversity (the local communities found in and around the included sites), and ultimately safeguard in perpetuity this critical resource for use either directly by farmers or by plant breeders and other scientists in crop improvement. Catalyzing better linkages between conservation and sustainable use of agrobiodiversity for the benefit of current and future generations is required 25 . However, the establishment of such a network is complex. For example, several of the 150 prioritized sites identified are located on country borders: Israel-Lebanon-Syria, Lebanon-Syria, Armenia-Azerbaijan and China-Myanmar; and two of the highest priority sites are in current conflict regions: Syria and Crimea. Therefore, although we highlight here a global matrix of 150 priority sites for in situ CWR conservation, the establishment of a global CWR in situ conservation network will need to be an incremental effort, starting pragmatically with a few sites/populations and building upon those with time. Previously published work has identified global priority CWR taxa and broadly where they are located 24 , and how populations should be managed in situ 11 , 19 ; from our analysis, we now know which combination of sites and CWR populations can form an effective network to maximize in situ CWR diversity conservation.
It is now an urgent priority to identify existing and novel mechanisms to finance and govern the proposed network, a network that will provide a fundamental basis for ensuring our future food security. Failure to address this challenge is likely to have a devastating outcome for food production and agriculture; furthermore, the implications for rural people on low incomes in developing countries could be catastrophic. Action to prevent these outcomes is therefore required immediately. Methods Selection of target CWR and occurrence data compilation We selected the wild relatives of 167 crops of global importance for food security and farmer income generation, primarily based upon their ability to successfully cross with cultivated taxa and produce fertile offspring, and their known or potential use in plant breeding. We used the gene pool concept 26 to determine the ability of CWR to successfully cross with crops and produce fertile offspring. Surrogate concepts were used where no hybridization data were available 24 . Occurrence records for all target CWR species were obtained from a global CWR data set of ecogeographic records that is available online at 27 . CWR were recorded at the species level due to the low availability of occurrence records at intraspecific levels. We removed non-target species, records of cultivated species, occurrence records outside their reported native range, records without coordinates and records with highly uncertain coordinates (i.e., more than 10 km uncertainty). Information on native species ranges was obtained from the GRIN-Global and the Harlan and de Wet portals. CWR taxonomic nomenclature in the occurrence record database was standardized to match GRIN nomenclature 28 . Current and future species distribution modelling We chose the MaxEnt algorithm to model the potential distributions of CWR species due to its strong performance against other modelling algorithms, particularly when using small occurrence datasets, its ability to work with presence-only data, and its wide use in biodiversity conservation studies 29 . MaxEnt requires a background area and background points when absence data are not available. Moreover, defining the extent of the background area is key to reducing model overfitting and therefore to improving the performance of species distribution models produced with the algorithm 29 . We used the native geographic range of each CWR species to determine the background extent of each distribution model and produced ten thousand random background points within this. For environmental drivers, we selected an initial set of 27 edaphic, geophysical and climatic variables for input use in MaxEnt (Supplementary Information Table 1 ). We assessed whether high multicollinearity existed among the initial set of environmental drivers per CWR species by measuring the variance inflation factor (VIF). Variables with a VIF value ≥10 were removed from the final set of variables. We projected each potential distribution model to baseline data for the period of 1960–1990 (Worldclim v.1.4). For climate change projections, we selected thirty global circulation models (GCMs) produced by the Coupled Model Intercomparison Project Phase 5 (CMIP5) (Supplementary Information Table 2 ). We chose a stringent emissions scenario (Representative Concentration Pathway 4.5 – RCP 4.5) for the period of 2060–2089 to identify regions highly likely to remain climatically stable, and thus adequate for long-term in situ conservation. All future climate data were obtained from .
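The multicollinearity screen described in this section (removing predictors with VIF ≥ 10) is straightforward to reproduce. A minimal sketch with pandas and statsmodels follows; the environmental table is synthetic and the column names are hypothetical stand-ins for the 27 drivers in Supplementary Information Table 1, so it illustrates the filtering logic only, not the paper's actual pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(env: pd.DataFrame, threshold: float = 10.0) -> pd.DataFrame:
    """Iteratively remove the predictor with the highest VIF until
    every remaining variable has VIF below the threshold."""
    env = env.copy()
    while env.shape[1] > 1:
        X = env.assign(_const=1.0).values          # intercept appended last
        vifs = pd.Series(
            [variance_inflation_factor(X, i) for i in range(env.shape[1])],
            index=env.columns)
        if vifs.max() < threshold:
            break
        env = env.drop(columns=vifs.idxmax())
    return env

# Hypothetical environmental values sampled at occurrence points.
rng = np.random.default_rng(0)
temp = rng.normal(15, 5, 200)
df = pd.DataFrame({
    "bio1_temp": temp,
    "bio12_precip": rng.normal(800, 200, 200),
    "bio1_copy": temp * 1.01 + rng.normal(0, 0.1, 200),  # near-duplicate
})
print(drop_high_vif(df).columns.tolist())  # the collinear copy is removed
```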
All environmental variables had a grid cell resolution of 5 km × 5 km at the equator. Only CWR species with ten or more unique occurrence records were considered for modelling, due to the unreliability and poor performance of distribution models for species with smaller datasets 30 . Models were trained using a five-fold cross-validation technique to maximize the use of all occurrence records. Once trained, all models were first projected onto baseline variables. We assessed the performance of the models against standard adequacy criteria 30 . We produced binary distribution maps with the models that met the adequacy criteria, by applying the maximum training sensitivity plus specificity (MAXTRSS) logistic threshold, as this thresholding method has been found to consistently outperform other techniques 30 . For the CWR models that did not meet the adequacy criteria, or for species with fewer than 10 unique occurrence records, we opted to produce a 50 km circular buffer surrounding each individual georeferenced occurrence record to represent the potential distribution 31 . Only the current climate species models that met the validation criteria were projected onto each individual GCM. Then, we averaged all GCMs per species to produce a future ensemble model. The MAXTRSS threshold was again used to produce binary presence/absence distribution maps. We compared each future distribution against the current distribution model to obtain maps of geographical areas that are likely to remain climatically stable per species, and thus can be considered suitable areas for long-term in situ conservation of CWR. Assessment of potential genetic diversity The genetic diversity of individual populations must be considered to ensure maximum coverage in protected areas, prevent genetic erosion in the wild, and ultimately to effectively conserve CWR in situ for future utilisation. Given the limited availability of molecular data for the CWR species selected in the study, we created an ecogeographical land characterization (ELC) map for each CWR, using species distribution models as a proxy to estimate potential genetic diversity 32 . We created the ELC map of the native country range of each CWR by clustering the non-collinear edaphic, geophysical and climatic sets of variables and combining each resulting cluster value to produce a map containing unique ELC categories (also referred to as adaptive scenarios). We then overlaid the ELC maps with the current potential CWR distributions to determine the breadth of adaptive scenarios per species, and thus potential genetic diversity. In situ gap analysis We estimated the extent to which current and future distributions of CWR are encompassed within established protected areas worldwide, to assess the status of CWR species under passive in situ conservation and the likely geographical losses under a stringent climate change scenario. For this, we obtained a comprehensive spatial dataset containing the geographical location of the world’s protected areas from (Downloaded 25/05/2016). Protected areas represented as point data were discarded and individual protected area polygons were transformed to produce a global presence/absence raster of protected areas. Then, we estimated the proportion of current and future potential distributions, including corresponding adaptive scenarios, within protected areas for each CWR and then summarized these per crop type.
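The raster bookkeeping behind this gap analysis reduces to simple intersections and proportions. As a minimal sketch (our own illustration in Python, with hypothetical array names; both inputs are assumed to be co-registered presence/absence grids at the 5 km resolution), the protected-area coverage of a distribution and the climatically stable range can be computed as follows.

```python
import numpy as np

def protected_proportion(distribution: np.ndarray, protected: np.ndarray) -> float:
    """Fraction of a species' binary (0/1) distribution raster that falls inside
    protected-area cells (also a 0/1 raster on the same grid)."""
    dist_cells = distribution > 0
    if dist_cells.sum() == 0:
        return float("nan")                    # no modelled range at all
    return float((dist_cells & (protected > 0)).sum() / dist_cells.sum())

def stable_range(current: np.ndarray, future: np.ndarray) -> np.ndarray:
    """Cells suitable both now and in the 2060-2089 ensemble projection."""
    return ((current > 0) & (future > 0)).astype(np.uint8)
```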
Finally, we compared the proportion of CWR area within protected areas and the current adaptive scenarios that are likely to be lost due to climate change in the future. Prioritisation of areas for in situ conservation We used Marxan, a widely used conservation planning software package, to determine the most effective global reserve network to conserve all target CWR species and their individual adaptive scenarios. To run Marxan, we prepared the following input files: (1) The planning unit file was created by producing a terrestrial global grid (5 km × 5 km grid cell resolution at the equator), where each grid cell was assigned a unique identifier. We assigned a planning unit cost of 10 to grid cells that overlapped protected areas, and 50 to grid cells that did not overlap with protected areas. Lower cost units are prioritized as conservation units. Furthermore, we prioritized every planning unit grid cell that overlapped a protected area to ensure the final network maximised the inclusion of existing protected areas. (2) The species file was created by listing each CWR species and adaptive scenario combination as a concatenated string and assigning each a unique identifier. We set Marxan targets to achieve at least one of every CWR species and adaptive scenario combination in the final network. We calibrated the species penalty factor, which allows prioritization of biodiversity elements for selection in Marxan, using a standard technique 33 , resulting in a final species penalty factor of one for all species, to ensure an equal chance of selection. (3) We created the planning unit versus species representation file by overlapping the distribution maps of each CWR/adaptive scenario combination with the planning unit file. We used the current distribution models for species with predicted full loss of current range due to climate change, or where a valid MaxEnt model was not produced. For species with a valid current and future MaxEnt distribution model, we used only the geographical distribution that is predicted to remain climatically stable in the future. (4) For the boundary file, we listed the numerical identifiers of the horizontal and vertical neighbours of each terrestrial planning unit cell. We added this file to improve the spatial clumping of selected sites, as it is often easier and more cost-effective to conserve closely located sites rather than dispersed ones. (5) For the input parameters file, we set the Marxan scenario to perform 100 runs of 100,000,000 iterations. The boundary length modifier, which helps to produce a spatially clumped network of potential conservation sites, was calibrated using the standard technique 33 and set to 0.001. The resulting Marxan solutions were then ranked by fewest number of planning units followed by lowest cost. We chose the top ranked solution as the most suitable overall solution. We further prioritized the planning units in the top ranked Marxan solution by using the complementarity ranking algorithm 34 to maximise taxonomic and genetic diversity in the reserve network. This algorithm initially selects the site with the highest richness count of CWR/adaptive scenario combinations, and then chooses the second site that will be most complementary to the first, i.e. the site which will most increase the net number of CWR/adaptive scenario combinations.
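A minimal sketch of this greedy complementarity selection is given below (Python; our own illustration, not the published implementation). The `site_features` mapping and its 'species|scenario' key format are assumptions for the example; in the actual analysis, the algorithm ran over the planning units of the top-ranked Marxan solution with all CWR weighted equally.

```python
def complementarity_ranking(site_features: dict[str, set[str]]) -> list[str]:
    """Greedy complementarity ranking: pick the site covering the most
    CWR/adaptive-scenario combinations, then repeatedly pick the site that adds
    the most combinations not yet covered, until nothing new can be added.

    site_features maps a planning-unit ID to the set of 'species|scenario'
    strings that unit contains."""
    remaining = dict(site_features)
    covered: set[str] = set()
    ranking: list[str] = []
    while remaining:
        # Site contributing the largest number of not-yet-covered combinations
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        gain = remaining[best] - covered
        if not gain:          # every combination is already represented
            break
        ranking.append(best)
        covered |= gain
        del remaining[best]
    return ranking
```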
We used a pragmatic scenario that prioritizes sites containing protected areas (to maximise use of existing protection) and, in a complementary fashion, considers additional sites outside protected areas where there are CWR/adaptive scenario combinations not found within protected areas. It is advisable to identify sites both inside and outside of existing protected areas because CWR taxa are commonly found in disturbed anthropogenic environments and less often in the climax communities typically designated as conventional protected areas 35 . All CWR were given equal weighting in the algorithm, and it was run until all CWR/adaptive scenario combinations were represented in the final solution at least once. The top 150 priority sites within the top ranked Marxan network were then mapped: the top 100 sites inside protected areas, along with the top 50 complementary sites outside protected areas. Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Data availability Interactive maps displaying occurrence data coordinates and potential distribution models are available at distribution-map/ . Occurrence data used for this analysis are available at . Further information on expert evaluations of the gap analysis is available at expert-evaluation/. The entire dataset collated and used for the analysis is available online at . | The first comprehensive network of sites where crop wild relatives are found has been developed by researchers at the University of Birmingham. The network will help breeders develop more resilient crops and tackle challenges of global food security, as well as being important for nature conservation. The resource includes details of the 150 'hotspot' sites worldwide where some 1,261 so-called crop wild relative (CWR) species are concentrated. Crop wild relatives are cousins of cultivated crops and a vast resource of genetic diversity. Plant breeders will be able to use the diversity conserved within these sites to transfer adaptive traits from the wild relatives to domestic crops to produce new varieties that will give higher yields and are more resistant to climate change or pests. The sites identified by the Birmingham team are mostly within existing conservation areas, which will help to ensure these valuable wild species can be conserved well into the future. They contain relatives of 167 of the most important crops in the world as listed by the UN Food and Agriculture Organisation (FAO). Identifying the 150 sites will mean the species can be conserved 'in situ', to complement and extend the diversity already held in ex situ gene banks, which cannot alone preserve the full range of diversity found in nature. Interestingly, most of the hotspots identified were in the so-called 'fertile crescent', which includes countries in the Middle East such as Lebanon, Egypt, Syria and Turkey—the area of the world where modern-day agriculture first began. "Ensuring global food security is one of the greatest challenges of our time as population growth puts ever more pressure on our food production and natural ecosystems," explains senior author Dr. Nigel Maxted, of the University of Birmingham's School of Biosciences. "Finding new, more resilient crop varieties that can withstand climatic extremes, be more resistant to pests or require less water to thrive is a huge priority for plant breeders.
To find that diversity, you need to go back to the 'ancestral' species—the wild crop relatives—from which our crops were domesticated. The resource we have developed will enable breeders to identify where to obtain the necessary crop traits from which to breed vital new climate-smart varieties." Having identified the key 150 global populations, the team will next develop a unifying management network, through the FAO, to manage these sites to sustain the genetic diversity of CWR populations for the future benefit of humankind. The research was published in Communications Biology, and was funded by a grant from the Government of Norway via the Global Crop Diversity Trust. Project partners include the Global Crop Diversity Trust, the Royal Botanic Gardens Kew and The International Center for Tropical Agriculture. | 10.1038/s42003-019-0372-z
Earth | Scientists study impact of sediments and nutrients from Conowingo Dam on Chesapeake Bay | Cindy M. Palinkas et al, Influences of a River Dam on Delivery and Fate of Sediments and Particulate Nutrients to the Adjacent Estuary: Case Study of Conowingo Dam and Chesapeake Bay, Estuaries and Coasts (2019). DOI: 10.1007/s12237-019-00634-x | http://dx.doi.org/10.1007/s12237-019-00634-x | https://phys.org/news/2019-11-scientists-impact-sediments-nutrients-conowingo.html | Abstract Dams impact the magnitude and nature of material transport through rivers to coastal waters, initially trapping much material in upstream reservoirs. As reservoirs fill, trapping decreases and bottom sediments can be scoured by high flows, increasing downstream delivery. This is the case for the Conowingo Dam, which historically has trapped much of the sediment and particulate nutrients carried by the Susquehanna River otherwise bound for Chesapeake Bay but has now reached dynamic equilibrium. While previous studies primarily focus on either delivery of river inputs or their fate in the Bay, this study synthesizes insights from field observations and modeling along the Reservoir-Bay continuum to evaluate potential impacts of infilling on Bay biogeochemistry. Results show most Susquehanna sediment and particulate nutrient loading occurs during high-flow events that occur only ~ 10% of the time. While loading during these events has increased since the late 1970s, consistent with a decreasing scour threshold for Reservoir sediments, loading during low-flow periods has declined. Loads entering the estuary are largely retained within the upper Bay but can be transported farther downstream during events. Reservoir sediments are highly refractory, and inputs of reservoir-like organic matter do not enhance modeled sediment-nutrient release in upper Bay sediments. These findings and an emerging literature highlight the Bay’s resilience to large sediment loads during events (e.g., Tropical Storm Lee in 2011), likely aided by ongoing restoration efforts and/or consistently low-moderate recent inflows (2012–2017). Thus, while events can have major short-term impacts, the long-term impact to Bay biogeochemistry is less severe. Introduction Human influences are pervasive throughout the river-estuary continuum. For example, river channelization, diversions, and levee building alter fluvial morphology and hydrology (Brookes et al. 1983 ; Gregory 2006 ; Hudson et al. 2008 ). Expanded agriculture and/or urbanization can deliver excess sediments and nutrients to estuaries, leading to widespread eutrophication (Barmawidjaja et al. 1995 ; Kemp et al. 2005 ; Paerl 2006 ). Perhaps the greatest human impact on the timing and magnitude of material fluxes from rivers to adjacent receiving basins occurs via dam construction (e.g., Ibàñez et al. 1996 ; Palinkas and Nittrouer 2006 ; Vericat and Batalla 2006 ). Dams initially starve downstream ecosystems of both sediments and particulate nutrients through trapping in upstream reservoirs. Eventually, however, these reservoirs fill (assuming no human intervention), increasing the delivery of sediment and nutrients to downstream ecosystems (Fan and Morris 1992 ; Yang et al. 2006 ). Moreover, sediments stored in upstream reservoirs can be scoured during storm events, increasing loads delivered downstream (e.g., Zabawa and Schubel 1974 ; Palinkas et al. 2014 ).
The increase in dam construction following World War II has resulted in numerous dams that are rapidly aging and/or filling, prompting much interest in management intervention (e.g., Palmieri et al. 2001 ). Developing effective management strategies requires holistic consideration of the river-estuary continuum that links upstream actions to downstream ecosystem impacts. For example, large-scale dam removal on the Elwha River dramatically increased fluvial sediment loads and resulted in extensive coastal geomorphological change (Gelfenbaum et al. ( 2015 ) and references therein). On the other hand, many questions remain regarding downstream impacts of reservoir infilling. The upper Chesapeake Bay serves as an excellent natural laboratory within which to address these questions. The Bay and its watershed have experienced many human-induced changes over time, especially since European colonization in the 17–18th centuries (Brush 2009 ). In particular, land-use changes in the watershed, such as increased deforestation, agriculture, and urbanization, increased sediment and nutrient loads delivered to the Bay. By the mid-1980s, the Bay was receiving 7 times more nitrogen and 16 times more phosphorus than before colonization (Boynton et al. 1995 ), degrading Bay water quality through eutrophication and decreased water clarity. In response, the Chesapeake Bay Program (CBP) was established in 1983 to identify the sources and extent of pollutants entering the Bay and implement restoration activities to reduce pollutant loads (NRC 2011 ). More recently, total maximum daily loads (TMDLs) and accompanying watershed implementation plans that include best management practice (BMP) installations have been developed for the Bay and its tributaries (Linker et al. 2013 ) in an effort to decrease sediment and nutrient loading to the Bay. While quantifying the effectiveness of BMPs can be challenging and depends on specific management goals (Liu et al. 2017 ), evaluating temporal trends in fluvial loads can lend insight into their performance. Numerous dams exist along the main tributary to the Bay, the Susquehanna River, including a series of three dams that ends just before the river’s confluence with the Bay. The last and largest of these dams, the Conowingo Dam, was constructed in 1928 (Langland and Hainly 1997 ). While data are scarce for the initial trajectory of filling immediately after construction, plentiful data exist for the infill period for both the Reservoir and upper Bay (Hobbs et al. 1992 ; Reed and Hoffman 1997 ; Langland and Cronin 2003 ; Langland 2009 ; Russ and Palinkas 2018 , among others). Recent work indicates that the Reservoir has reached dynamic equilibrium (net inputs equal net outputs averaged over long time scales) and that particulate loading to the Bay has increased (Hirsch 2012 ; Zhang et al. 2016 ). This increased loading is concerning given that the current TMDL requirements, intended to improve Bay water quality, were developed with models that do not include a Reservoir at dynamic equilibrium (Cerco et al. 2013 ). Increased sediment and nutrient loads from Reservoir infilling could further degrade water quality through eutrophication and reduced bottom oxygen concentrations (Kemp et al. 2005 ; Kemp et al. 2009 ; Testa et al. 2014 ). In addition, large storm events can scour large amounts of sediment from the Reservoir bottom, with potentially deleterious downstream ecosystem impacts (Schubel 1972 ; Zabawa and Schubel 1974 ; Orth and Moore 1984 ). 
The timing, magnitude, and mechanisms of material (sediment and its associated nutrients) delivery from the Susquehanna to tidal Chesapeake Bay likely differ between relatively low, “normal” flows and large storm events, especially given human control of the flow at Conowingo Dam, as do its transport and fate in the upper Bay. However, little research has evaluated these differences with a holistic approach that considers the entire river-estuary continuum. This paper is the result of a coordinated, interdisciplinary study that takes this approach and addresses these questions: (1) how has sediment loading to the Bay changed over the last 40 years? (2) are sediments in the Reservoir biogeochemically different from those in the upper Bay, and how might they influence Bay biogeochemistry? (3) what controls the transport and fate of Conowingo sediment in the Bay? and (4) what are the likely impacts of watershed and reservoir-derived particulate material on the Bay’s biogeochemistry? These questions are evaluated by synthesizing field observations and model results, as well as long-term monitoring data, as shown in Fig. 1 . The specific data used to examine each question are (1) river discharge and suspended-constituent (sediment, particulate nitrogen and phosphorus) monitoring data, (2) field observations of sediment biogeochemical characteristics in the Reservoir and upper Bay, (3) transport modeling and field observations of upper Bay sedimentology, and (4) biogeochemical modeling and field observations in the upper Bay. Ultimately, the results of this study can help develop effective management strategies throughout the river-estuary continuum. Fig. 1 Conceptual model of the various methods used in this study and their relationships to components of the Conowingo Reservoir-Chesapeake Bay continuum. See “Methods” for details of individual methods Full size image Physical Setting This paper explores the connection of material inputs from the lower Susquehanna River to ecosystem processes in the upper Chesapeake Bay. This connection is highly influenced by a series of three hydroelectric dams that occur along the river from the US Geological Survey (USGS) gauging station at Marietta, Pennsylvania, to its confluence with the Bay (Fig. 2 ). The reservoirs upstream of the first two dams (Safe Harbor and Holtwood; installed in 1931 and 1910, respectively) filled rapidly after installation and reached dynamic equilibrium (no net change in sediment storage averaged over several years) during ~ 1950 and 1960, respectively (Hainly et al. 1995 ; Langland and Hainly 1997 ; Reed and Hoffman 1997 ; Langland 2009 ). The last and largest reservoir lies upstream of the Conowingo Dam (installed 1928) and has filled more slowly; however, recent work suggests that it has also reached dynamic equilibrium (Hirsch 2012 ; Zhang et al. 2013 ; Zhang et al. 2016 ). Fig. 2 Site locations in the lower Susquehanna River and upper Chesapeake Bay. Locations of hydroelectric dams on the Susquehanna River are given by black bars. Brown circles show locations of sediment sampling sites in Conowingo Reservoir (geochronology and sediment character reported in this study and in Palinkas and Russ ( 2019 ), sediment-water fluxes in this study). Red circles indicate locations of long-term water quality monitoring data used in this study Full size image Infilling of Conowingo Reservoir was most rapid in the upper portion after installation (Langland 2009 ) but is now focused largely in the lower portion (Palinkas and Russ 2019 ).
As a result, surficial bottom sediments generally grade from sands in the upper portion to muds in the lower portion. Sediment deposition within the reservoir is inherently linked to delivery from the Susquehanna River, which is highest during the spring freshet and minimal during summer, punctuated by extreme flood events (Hirsch 2012 ; Cheng et al. 2013 ; Zhang and Blomquist 2018 ) that can scour significant amounts of bottom sediment from the reservoir. Significant scour occurs when river discharge exceeds ~ 11,300 m 3 /s (400,000 cfs; Hainly et al. 1995 ), but this threshold has likely lowered over time and fine sediments are mobilized at lower flows (Hirsch 2012 ). Floods of this size generally occur every ~ 5 years, with notable past occurrences in 1972 (Tropical Storm (TS) Agnes), 1996 (winter ice jam), 2004 (Hurricane Ivan), and 2011 (TS Lee). The highest recorded Susquehanna River discharge was associated with TS Agnes, exceeding 28,317 m 3 /s (1,000,000 cfs) and scouring 13.5 × 10 6 t of bottom sediment (Langland 2015 ). The second highest discharge was associated with TS Lee, exceeding 16,990 m 3 /s (600,000 cfs) and scouring 4 × 10 6 t of bottom sediment (Cheng et al. 2013 ; Palinkas et al. 2014 ). The resulting sediment load delivered to the Bay, composed of both watershed and scoured Reservoir sediments, resulted in a sediment plume that appeared to extend at least halfway down the Bay in satellite images. The fate of TS Lee sediments in the Bay was investigated through both field (Palinkas et al. 2014 ) and modeling (Cheng et al. 2013 ) approaches, finding that most sediment was retained in the upper Bay, but fine sediment was more widely dispersed, resulting in a thin drape of sediment on the bottom extending to mid-Bay. The second largest storm during the past 15 years was Hurricane Ivan (2004), which had peak discharge of ~ 15,000 m 3 /s. Ivan produced heavy precipitation over the Chesapeake Bay watershed, with maximum accumulation of 25 cm. A satellite image showed a sediment plume spreading over the upper and mid parts of Chesapeake Bay, but the resulting sediment deposit was not sampled. Conowingo Dam is run as a peak-production hydroelectric plant, with daily high/low river discharges through the outlet. This variability is integrated on longer time scales, such that discharge patterns are similar to those at Marietta. During high flows, the first flood gate is typically opened at ~ 2446.5 m 3 /s (86,400 cfs; Velleux and Hallden 2017 ), which is the discharge used to define “events” in this paper. Several recent modeling studies have focused on trends in sediment and nutrient delivery over the Dam, especially with regard to trapping in the Reservoir. Conowingo Reservoir historically trapped much of the sediment and nutrient load to the Chesapeake. However, recent studies indicate that discharge of these materials from Conowingo has remained relatively steady or perhaps even increased, despite declines at the reservoir inlet from watershed reductions (Hirsch 2012 ; Zhang et al. 2016 ). This implies reduced trapping within the reservoir, consistent with results from repeat reservoir bathymetric surveys (Langland 2015 ). In the uppermost Bay, Susquehanna discharge has formed a subaqueous delta, referred to as the Susquehanna Flats, a shallow, sandy region colonized by submersed aquatic vegetation (SAV). SAV beds on the Flats were historically dense but disappeared after TS Agnes (Bayley et al. 1978 ). 
They made a resurgence in the early 2000s, due to improved water quality from a combination of resource management actions and several dry years (Gurbisz and Kemp 2014 ), and have been present ever since, even during extreme events (TS Lee; Gurbisz et al. 2016 ). These beds modulate sediment input from the Susquehanna River to the upper Bay, trapping sediment during the growing season (typically ~April–October) but allowing sediment bypass over winter (Russ and Palinkas 2018 ). Sedimentation rates in the upper Bay have varied throughout time, responding to changes in land use and storms. Rates increased dramatically after European colonization and related land clearance, but they decreased after 1930 due to farm abandonment and soil conservation (Brush 1989 ; Brush 2001 ), as well as construction of Conowingo Dam. The signatures of large storms and hurricanes are preserved in sediment cores, especially after TS Agnes and Lee. The thickest deposits are located upstream of the estuarine turbidity maximum (ETM), with maximum thickness of 20–30 cm after TS Agnes (Zabawa and Schubel 1974 ) and 4–5 cm after TS Lee (Palinkas et al. 2014 ). The ETM is the dominant driver of sediment transport dynamics in upper Chesapeake Bay. First reported in the late 1960s (e.g., Schubel 1968 ), upper Bay ETM dynamics have been studied (Elliott 1978 ; Sanford et al. 2001 ; Cronin et al. 2003 ; North et al. 2004 ) and modeled (Park et al. 2008 ; Cerco et al. 2013 ) by numerous researchers since then. ETMs are very efficient traps for suspended particles carried into estuaries with the river flow. The upper Bay ETM results from convergent near-bottom transport of settling particles due to asymmetrical tidal resuspension near the limit of salt (Sanford et al. 2001 ). The efficiency of ETM trapping increases as particle settling speeds increase due to flocculation and agglomeration of fine riverine particles, caused by increases in both electrochemical and biogeochemical stickiness as fresh river waters encounter and mix into salt water (Schubel and Kana 1972 ; Sanford et al. 2005 ; Malpezzi et al. 2013 ). ETMs are dynamic features, rapidly migrating downstream due to pulses of river flow and down-estuary winds while rebounding almost as quickly as the downstream forcing dissipates, albeit with a scale-dependent lag (Nichols 1977 ; Elliott 1978 ; North et al. 2004 ). The upper Bay ETM is a very efficient sediment trap in the long term, likely due to the large scale of the system. Particles deposited over the shallow shoals adjacent to the channel are easily resuspended due to wind-wave forcing (Sanford 1994 ), likely focusing back into the deep shipping channel. Particles that escape downstream in moderately large events tend to be transported back upstream by the combination of tidal and estuarine circulations (Nichols 1977 ). Maintenance dredging of the upper Bay shipping channel likely removes a large fraction of the accumulating sediment (Sanford et al. 2001 ). The net result may approach near complete riverine sediment trapping (Donoghue et al. 1989 ), with some unknown but small fraction lost to the mid-Bay during extreme freshwater flow events. The eutrophication of Chesapeake Bay has been well documented (Hagy et al. 2004 ; Kemp et al. 2005 ) and is associated with elevated nutrient inputs from its large watershed (166,530 km 2 ) that spans several states in the mid-Atlantic region of the USA. 
Increased degradation of the Bay was documented in the 1970s and 1980s, following the identification of large-scale declines in submersed aquatic vegetation (SAV) (Kemp et al. 1983 ) and mapping of extensive low-oxygen areas in the mainstem of the estuary (Officer et al. 1984 ). While declines in commercial finfish and wild bivalve extraction were identified early as features of the Bay’s decline, elevated inputs of dissolved nitrogen and phosphorus from the watershed beginning in the late 1960s and early 1970s were identified as causative agents for many of the lost habitats in the estuary. Following more than three decades of extensive monitoring of dissolved oxygen, nutrient concentrations, SAV coverage, and watershed inputs, it has become apparent that several features of the Bay’s ecosystem have begun to transition toward a less-eutrophic state (Orth et al. 2017 ; Testa et al. 2018 ; Zhang et al. 2018 ). Methods This paper synthesizes field observations, model results, and long-term monitoring data as conceptualized in Fig. 1 . Details of specific methods are presented below. Inputs to Estuary River Discharge and Suspended Sediment Concentrations (SSC) Susquehanna River discharge has been measured at the Conowingo Dam outlet by the USGS ( ; station 01578310) since October 1967. Corresponding suspended sediment concentrations (SSC) have been measured since 1978, with variable frequency throughout the years. Generally, data were available for at least 1 day per month and more frequently during high-flow events; however, there are some gaps in the records. For consistency, this study considered discharge and SSC data only between 1 Jan 1978 and 31 December 2017. Rating curves (SSC versus corresponding river discharge) were developed for 5-year intervals (i.e., 1968–1972, 1973–1977, etc.), following the approach of Warrick ( 2015 ). Because relatively few SSC measurements are available for each year, 5-year intervals were chosen as a compromise between temporal resolution and robustness of the data set. The main difference between the Warrick ( 2015 ) approach and more traditional approaches (e.g., Syvitski and Morehead 1999 ) is the use of discharge-normalized data (Q GM ) in the regression between log-transformed river discharge (Q) and SSC (C): $$ C=\hat{a}\left(Q/Q_{GM}\right)^{b} $$ (1) where â is the vertical offset parameter with units of mg/L (equivalent to the SSC at the middle of the sample distribution), b is the unitless rating parameter found from regression (Syvitski and Morehead 1999 ), and Q GM is the geometric mean of all Q values in the entire record (uniform for all time intervals). These curves were calculated for three cases: (1) all flows, (2) “normal” or non-event flows, and (3) high flows during flood events. A discharge of 2446.5 m 3 /s was used to separate normal (< 2446.5 m 3 /s) and event (> 2446.5 m 3 /s) flows, corresponding with opening of the first crest gate (Velleux and Hallden 2017 ) and the 90th percentile of flows past Conowingo from 1968 to 2017. These rating curves were used to calculate daily SSC values from river discharge measurements. Daily sediment loads (product of SSC and river discharge) were then calculated and summed over individual years and 5-year periods. Particulate Phosphorus (PP) and Nitrogen (PN) Particulate phosphorus (PP) and nitrogen (PN) data were obtained from data associated with Zhang et al. ( 2015 ) and archived by Zhang and Ball ( 2014 ).
This archive contains raw particulate phosphorus and nitrogen concentrations from the USGS River Input Monitoring Program (USGS 2013 ). Like SSC, these concentrations were not continuously measured and were assumed to represent average daily conditions. Comparison of these concentrations with their corresponding river discharge showed wide variability, precluding establishment of statistically robust relationships from which daily loads could be calculated. Instead, daily loads of both PP and PN were obtained from the WRTDS model (Zhang et al. 2015 ), which accounts for variability in these parameters with both time and discharge. These data were available only prior to April 2013, excluding the 2013–2017 time period from further consideration. Particle Settling Velocities All other things being equal, particle settling velocity is the most important factor determining the transport distance of suspended particles (Mcnair and Newbold 2001 ). Prior to this study, however, particle settling velocities had never been directly measured at the Conowingo Dam. Samples were collected for particle settling velocity experiments during three moderately high flow events in 2015 and 2016, over a total of seven sampling days. On each sampling day, suspended particles were collected at the turbine outlets on the downstream side of Conowingo Dam, where historical USGS samples were collected, and from a stilling well located on the upstream side of the dam between two spill gates. At both locations, 5-L sample bottles were filled for settling experiments; additional samples were collected at the downstream site for standard disaggregated particle size analysis by the USGS (Poppe et al. 2005 ). Settling velocity experiments usually occurred within an hour of collection; samples were refrigerated in the event of any short delay. These experiments were carried out on-site using a settling tube apparatus based on the classic Owen tube (Owen 1976 ), modified for field work in upper Chesapeake Bay (Malpezzi et al. 2013 ), and then modified again for this study. The settling tube apparatus consisted of a pair of 5-L Niskin bottles attached vertically to an aluminum frame. The bottom stoppers were machined to a funnel shape internally, with a sampling port attached at the lower end of the funnel. The top stoppers were attached flexibly to allow water and suspended sediment samples to be introduced quickly and cleanly. A jacket of reflective bubble wrap around each tube minimized the development of internal circulations due to contrasts between inside and outside temperatures. At the beginning of each experiment, water samples were shaken gently to resuspend any settled particles and poured into the settling tubes, completely emptying the sample bottles to avoid missing any rapidly settling particles. A timer was started for each tube, with staggered starts to allow sampling at matched intervals after time 0. Ten water samples were withdrawn from the bottom port of each tube into prewashed 0.5-L sample bottles at nine geometrically spaced time intervals (two bottles at the last time interval). Analysis procedures used for bottom withdrawal settling tube experiments were first described by Owen ( 1976 ). A spreadsheet implementation of these techniques was used (Malpezzi et al. 2013 ), as well as a Matlab© curve-fitting implementation (Malarkey et al. 2013 ). Both techniques yielded similar estimates of the settling distribution of suspended-sediment mass (Fig. S1 ). 
Based on these comparisons, all settling experiment results were divided into four categories of settling speeds: < 0.01 mm/s (the last sample bottle), 0.01–0.2 mm/s, 0.2–2 mm/s, and > 2 mm/s; mass fractions were calculated for each category for all experiments. Settling velocities were also estimated for the samples collected simultaneously and analyzed by USGS for disaggregated particle size distributions. We used the Stokes settling velocity equation for clays and silts and the approximate large-particle expression of Soulsby ( 1997 ) for sand-size particles. These data were then divided into the same four settling velocity categories as above for direct comparison with the settling experiment data. Equivalent particle size categories were calculated for each of the four settling velocity categories using these same expressions, resulting in equivalent particle size bins of < 0.004 mm, 0.004–0.016 mm, 0.016–0.052 mm, and > 0.052 mm, respectively. We also obtained data from 32 samples collected by USGS on 19 dates between 1979 and 2015 for which disaggregated particle size data were available, along with corresponding river discharge and SSC data. The USGS data were stored as cumulative percent distributions (total percent finer than each of ten sizes). We calculated the percent of suspended sediment mass between successive sizes by difference. We then binned the mass fractions into the four size intervals defined above and calculated a characteristic mass-weighted settling speed for each size range. Fate of Sediment in the Estuary Sedimentology Four box cores were collected in August 2015 in the upper Bay; gravity cores were collected at these sites and three additional sites in April 2016 (Fig. 3 ). Both 2015 and 2016 reflected conditions during “normal” years (i.e., no major flood events); core locations were co-located with those in Palinkas et al. ( 2014 ) to discern differences between normal and flood conditions. All cores were sectioned immediately after recovery and transported to the lab for further analyses. All cores were analyzed for grain size and 7 Be (half-life 53.3 days); gravity cores were also analyzed for 210 Pb (half-life 22.3 years). Grain-size measurements were made by wet-sieving samples at 63 μm to separate the mud (silts and clays; < 63 μm) and sand (> 63 μm) components. The mud fraction was then analyzed with a Sedigraph III (Coakley and Syvitski 1991 ), and the sand fraction was dry sieved from 64 to 500 μm via a standard set of 13 sieves. All data were combined to calculate the median diameter of sediment. Event- and seasonal-scale sedimentation was examined with 7 Be, which is produced by cosmic rays in the atmosphere and attaches to terrestrial sediments during wet (rainfall) and dry deposition (Olsen et al. 1986 ). Because nearly all of the 7 Be is associated with particulates (Kaste et al. 2002 ), the presence of 7 Be in aquatic sediments indicates that they had been on land within ~ 250 days (4–5 half-lives, assumed limit of detectability). 7 Be activities were measured via gamma spectroscopy of the 477.7 keV photopeak, using a calibrated Canberra germanium detector and following the procedure of Palinkas et al. ( 2014 ). Depth-integrated activities were used to calculate sediment deposition rates as in Palinkas et al. ( 2005 ).
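Returning to the particle-size-to-settling-velocity conversion described above, a minimal Python sketch is shown below. This is our own illustration: the assumed densities, viscosity and temperature are not taken from the paper, so the computed bin edges only approximately reproduce the equivalence quoted in the text, and the large-particle formula is the standard Soulsby (1997) expression.

```python
import numpy as np

G = 9.81        # gravitational acceleration, m/s^2
RHO_W = 1000.0  # freshwater density, kg/m^3 (assumed)
RHO_S = 2650.0  # quartz grain density, kg/m^3 (assumed)
NU = 1.0e-6     # kinematic viscosity of water, m^2/s (~20 degC, assumed)

def settling_velocity(d: float) -> float:
    """Settling speed (m/s) for grain diameter d (m): Stokes' law for muds
    (< 63 um), Soulsby (1997) for sand-size particles."""
    s = RHO_S / RHO_W
    if d < 63e-6:
        return (s - 1.0) * G * d**2 / (18.0 * NU)         # Stokes
    dstar = (G * (s - 1.0) / NU**2) ** (1.0 / 3.0) * d    # dimensionless grain size
    return (NU / d) * (np.sqrt(10.36**2 + 1.049 * dstar**3) - 10.36)

# Bin edges: 0.004, 0.016 and 0.052 mm should map near 0.01, 0.2 and 2 mm/s
for d_mm in (0.004, 0.016, 0.052):
    print(f"{d_mm} mm -> {settling_velocity(d_mm * 1e-3) * 1e3:.2f} mm/s")
```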
Decadal-scale sediment accumulation rates were determined with 210 Pb (half-life 22.3 years), measured via alpha spectroscopy, assuming a constant supply of unsupported 210 Pb to the sediment and steady-state sedimentation (Appleby and Oldfield 1978 ). Accumulation rates were reported in Russ ( 2019 ) and Russ and Palinkas ( 2018 ). Fig. 3 Locations of sediment cores (black circles; labeled as Leex) and monitoring stations (gray circles) used in this study. Note that core Lee5 and the monitoring station at Still Pond are co-located. Box cores were collected at Lee7, Lee5, Lee2.5, and LeeS2; gravity cores were collected at all core locations Full size image Transport Modeling The Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system (Warner et al. 2008 ; Warner et al. 2010 ) was used to configure a model for Chesapeake Bay and its adjacent shelf. COAWST consists of a mesoscale atmosphere model, a regional ocean model, a model for simulating surface waves, a sediment transport model, and a dynamic coupler to exchange data fields between the sub-models. It has been used in a number of studies on sediment dynamics in coastal oceans, during both storm and non-storm conditions (e.g., Harris et al. 2008 ; Ganju et al. 2009 ; Olabarrieta et al. 2011 ; Cheng et al. 2013 ; Sclavo et al. 2013 ; Feddersen et al. 2016 ). In this implementation of COAWST (Xie et al. 2018 ), observed wind speeds at buoys and weather stations throughout Chesapeake Bay and its surrounding land were used instead of the hindcasts from a regional atmosphere model, and the regional ocean model was based on Regional Ocean Modeling System (ROMS) (Haidvogel et al. 2000 ; Shchepetkin and McWilliams 2005 ; Shchepetkin and McWilliams 2009 ). The ROMS model for Chesapeake Bay has been validated against observational data (e.g., Li et al. 2005 ; Li et al. 2006 ; Zhong and Li 2006 ). In this study, we used a finer-resolution version of this model (Cheng et al. 2013 ; Xie et al. 2018 ), with 240 × 160 horizontal grids and 20 vertical layers. The model was forced by freshwater inflows at river heads, tidal and non-tidal flows at the offshore boundary, and winds and heat exchanges across the water surface. At the upstream boundary of the eight major tributaries, freshwater inflows at USGS gauging stations were prescribed. The wave model was Simulating WAves Nearshore (SWAN) (Booij et al. 1999 ), which simulates wind-wave generation and propagation in coastal waters, including the processes of refraction, diffraction, shoaling, wave–wave interactions, and dissipation. SWAN was configured to have the identical horizontal grids as ROMS. The SWAN model was forced by historical Wave Watch 3 data (archived at ftp://polar.ncep.noaa.gov/pub/history/waves/incident ) at the offshore boundary and by the observed winds at the sea surface. The sediment modeling component was the Community Sediment Transport Modeling System (CSTM) (Warner et al. 2008 ), which includes algorithms for suspended sediment and bedload transport due to current and wave–current forcing, enhanced bottom stress due to surface waves, and a multiple bed model to track stratigraphy and morphology. Sediments can be introduced into the model domain through rivers and erosion from seabed. For fluvial sediment, we considered only the Susquehanna River, which is the only river discharging sediment directly into the main stem of the Bay (sediments from other tributaries are largely entrapped within them; Biggs 1970 ; Schubel and Carter 1977 ). 
Fluvial sediments were divided into three classes (clay, fine silt, and coarse silt), each represented by a grain size and settling speed corresponding to the settling velocity analyses described above. Because our study focused on fluvial sediment, the seabed was simplified and initialized with uniformly distributed silt with a single grain size of 0.022 mm (North et al. 2004 ). Resuspension of bottom sediment acted as the background for the suspended sediment in the Bay. For high SSC, the effect of suspended sediment on water density was included by treating the water as a water-sediment mixture. Relevant parameters of the sediment module are listed in Table 1 . Table 1 Parameters for the sediment-transport model. Particle settling velocities and grain sizes are representative of the classes observed in the settling velocity experiments. The flow-dependent fractions f 1 -f 4 are described in the results and shown in Fig. 5 Full size table Sediment Biogeochemistry and Exchange with Water Column Sediment-Water Fluxes in the Reservoir and Estuary The sediment exchange of oxygen, nitrogen, and phosphorus was determined in the Conowingo Reservoir on five dates (May, July, September 2015; April 2016), in Lakes Clarke and Aldred in April 2016 and in the upper Bay in August 2015 and April 2016. Reservoir cores were collected at 6–13 sites using a Soutar-style plastic box corer in fine-grained deposits and a pole corer in shallow coarse-grained deposits (Cornwell et al. 2014 ). Bay cores were collected with a HAPS corer (KC Denmark), sub-coring the stainless steel tube (13.6 cm diameter) for smaller flux cores. Sediments were collected for incubation in 6.3-cm diameter, 30-cm tall acrylic flux cores that were filled with ~ 15 cm of sediment. At each reservoir station, surface- and deep-water measurements of conductivity/salinity, temperature, and dissolved oxygen were made using a YSI multiparameter sonde. Water was collected via pump from two reservoir locations for use in sediment incubations and from multiple locations in the Bay. After collection, cores were placed upright in large insulated containers full of site water until placement in a temperature-controlled room later that day. Core-incubation procedures are described in detail elsewhere (Owens and Cornwell 2016 ) and only briefly described here. Cores were submersed in site water and bubbled overnight in the dark at field temperatures. At the beginning of the incubation phase, stirring lids were attached to the cores and a time course of overlying water chemistry was determined initially under dark conditions for 4–6 h. Additional site-water-only “blank” incubations were set up from each aerobic coring site to correct for biogeochemical processes occurring in the water. Water analyses included gas ratios (O 2 /Ar, N 2 /Ar) via MIMS (membrane inlet mass spectrometry; Kana et al. 1994 ) and nutrients (nitrate plus nitrite (NO x − ), ammonium (NH 4 + ), and soluble reactive phosphorus (SRP)) using conventional colorimetric methods (Parsons et al. 1984 ; García-Robledo et al. 2014 ). Gas and nutrient flux rates were calculated from core area and volume, and the slope of solute/gas versus time. Characterization of Sediment Composition and Reactivity The reactivity of particulates with respect to the potential bioavailability of phosphorus (P) and nitrogen (N) was assessed by chemical characterization (P) and time courses of anaerobic ammonium production.
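The conversion of incubation time courses to areal fluxes noted above is a short calculation and can be sketched as follows; this is our own illustrative Python (names and unit choices are assumptions, not the authors' code), assuming overlying-water concentration is regressed linearly on time and a site-water 'blank' slope is subtracted.

```python
import numpy as np

def incubation_flux(times_h, conc_uM, water_height_m, blank_slope_uM_h=0.0):
    """Sediment-water flux from a core-incubation time course.

    conc_uM: overlying-water solute concentration (uM = mmol/m^3) at times_h (h).
    Flux = slope x (volume/area) = slope x overlying-water height; the blank
    slope corrects for processes in the water itself. Returns umol m^-2 h^-1;
    positive values indicate release from the sediment."""
    slope, _intercept = np.polyfit(np.asarray(times_h), np.asarray(conc_uM), 1)
    return (slope - blank_slope_uM_h) * water_height_m * 1000.0  # mmol -> umol
```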
Sulfate concentrations increase with salinity, enhancing the sulfate respiratory pathway; the resultant hydrogen sulfide converts Fe oxides into iron-sulfide minerals (Cornwell and Sampou 1995 ). Iron-sulfide minerals adsorb soluble reactive P poorly relative to Fe oxides, and as a result, P is often released to solution (Roden and Edmonds 1997 ) and to overlying water (Lehtoranta et al. 2009 ). A sulfide-reactive pool was determined by the addition of hydrogen sulfide (Vulgaropulos 2017 ). Both sediments and suspended particulates were characterized for rates of anaerobic NH 4 + production using sediment slurries and particulates filtered from the water column, with a time course used to determine rates (e.g., Burdige 1991 ). Biogeochemical Modeling A sediment biogeochemical model (SFM) was used to evaluate rates and controls on nutrient storage, biogeochemical transformation, and release for reservoir and upper Bay sediments. SFM is a two-layer representation of sediment biogeochemical processes that simulates carbon, nitrogen, phosphorus, oxygen, silica, and sulfur dynamics. SFM has been successfully utilized in diverse Bay environments under different conditions (e.g., temperature, salinity, oxygen, and depth) to understand sediment responses to particulate-matter deposition (Brady et al. 2013 ; Testa et al. 2013 ). SFM numerically integrates mass-balance equations for chemical constituents in two functional layers: an aerobic layer near the sediment-water interface of variable depth (H 1 ) and an underlying anaerobic layer that is equal to the total sediment depth (10 cm) minus the depth of H 1 . Details of the model and its implementation can be found in other recent publications (Brady et al. 2013 ; Testa et al. 2013 ; Testa et al. 2014 ). SFM simulations were executed at 13 stations in the reservoir and one station in the upper Bay (Still Pond; Fig. 3 ) where sediment-water flux experiments were conducted (see above; Testa et al. 2013 ). Model simulations were run for the 1985–2015 period using the following schemes to estimate boundary conditions. For the overlying water, we generated a climatology of water-column nutrient and oxygen concentration measurements from 1985 to 2015 at the Conowingo Dam outlet (CB1.0; Fig. 3 ). Concentrations at CB1.0 were assumed to be representative of the reservoir; where possible, concentrations from CB1.0 were compared to those from the 2015 field campaigns with good agreement (Testa and Kemp 2017 ). To estimate the depositional fluxes of bulk pools of organic carbon, nitrogen, and phosphorus, we tested three different schemes for estimating matter deposition rates. In the first scheme, we assumed a constant deposition rate to Conowingo and Upper Bay sediments based upon previous estimates made for the Upper Chesapeake Bay (Brady et al. 2013 ). In the second scheme, we generated a seasonal cycle of deposition that followed the local, historically observed chlorophyll-a pattern in time but whose magnitude was similar to direct sediment trap estimates made previously in the Conowingo Reservoir or previous model simulations (Fig. S2 ; Brady et al. 2013 ; Boynton et al. 1984 ; Testa and Kemp 2017 ). We repeated this annual cycle for each year in the simulation period.
In the third scheme, for the Reservoir only, we used estimates of TN input to the reservoir made at Marietta, Pennsylvania (upstream of the 3 reservoir system), assumed a constant fraction of the load was particulate in nature, assumed a fixed C/N/P ratio based on prior reservoir measurements (Boynton et al. 1984 ), and divided these inputs by the area of the reservoirs. In this scheme, we assumed that deposition occurred uniformly in the Reservoir and we averaged the input data over a 90-day period to yield constant 3-month periods of deposition (Fig. S3 ). In effect, we used the time-series of inputs derived at Marietta to provide the temporal variability and scaled the overall input to values constrained by simulations using the first two schemes. We estimated organic-matter reactivity in the depositing material by simultaneously re-estimating the magnitude of organic matter deposition and the relative fraction of the three reactivity pools within the deposits. ‘G1’ indicates labile organic material that reacts on a timescale of 30 days, ‘G2’ is refractory material that decays on a timescale of 18 months, and ‘G3’ is very low reactivity organic matter that decays on very long timescales. Model simulations in each suite were analyzed to maximize agreement between observed and modeled sediment-water nutrient and oxygen fluxes and sediment organic matter nutrient and carbon fractions. We obtained estimates of bottom sediment carbon, nitrogen, and phosphorus content from Edwards ( 2006 ) and estimates of carbon, nitrogen, and phosphorus content of water-column particles from Boynton et al. ( 1984 ). We did not use the sediment percent carbon data, given the observed presence of coal in many sediment cores (Edwards 2006 ). Results Inputs to the Estuary River Discharge and Suspended Sediment Concentrations (SSC) Mean daily Susquehanna River discharge (flow on a given day averaged over the entire record) from 1978 to 2017 was highest during the spring freshet (~ 2000–2600 m 3 /s), lowest during the summer and early fall (< 500 m 3 /s), and intermediate in winter (~ 1000–1500 m 3 /s); however, discharge on individual days varied over several orders of magnitude, from a minimum of 20.3 m 3 /s on 2 Nov 1980 to a maximum of 20,064.7 m 3 /s on 9 Sept 2011. Concurrent SSC measurements were made on 998 days out of the 14,610 days from 1978 to 2017. Sampling frequency of these measurements varied over the years but was biased toward higher discharges, such that the annual average discharge for days with SSC measurements was twice as high as the average for the entire record. Even so, linear temporal trends were similar between the two data sets (see Table S1 for statistical results), whether discharge was averaged over 1- or 5-year periods. Specifically, average river discharge during events decreased over time, increased for non-events, and had no significant trend for all flows together. SSC ranged from a minimum of 1 mg/L, which may have reflected the lower measurement limit and occurred on 5 days (12 Feb 1980, 15 Jan 1986, 2 Feb 2000, 4 Feb 2003, and 10 Dec 2012), to a maximum of 3680 mg/L on 20 August 2004. While there were no consistent temporal trends in annual-average SSC for all flows or event flows, years with large scour events were notable outliers, particularly 2004 (annual average SSC 235 ± 888 mg/L) and 2011 (annual average SSC 245 ± 601 mg/L).
These extremes were not as apparent in the 5-year averages, which significantly increased for event flows after 1982 (first interval; note SSC data were unavailable for events in 1978 or 1982). To minimize the potential confounding effect of scour, separate regression models were built for event flows below the nominal scour threshold of 11,300 m 3 /s (400,000 cfs), with no trends in annual-average SSC but a significant increase in the 5-year average after 1982 ( p = 0.06, R 2 = 0.53). Trends for flows above the scour threshold were not evaluated, since there were only 13 SSC observations during these conditions. Both the 1- and 5-year averaged SSC decreased significantly for non-event flows. The relationship of SSC to river discharge was first evaluated simply by calculating their ratio, focusing on changes over time rather than the absolute values, which minimizes the influence of climatic variability (i.e., higher particulate loads during wet years) (Fig. 4 ; Table S1 ). Both 1- and 5-year average ratios significantly decreased for all flows and non-event flows but had no trend for event flows. Years with large scour events (e.g., 2004, 2011) were notable outliers; annual-average ratios for event flows below the scour threshold did not have a significant relationship with time, but 5-year average ratios showed a significant increase after 1982. Alternatively, changing relationships of SSC and river discharge were assessed through a rating-curve analysis (Eq. 1 ). For this analysis, individual curves were calculated for each 5-year time period for event and non-event flows; changes in the rating parameters â (vertical offset) and b (slope) of these models were then evaluated. For all flows, values of â significantly decreased over time ( p = 0.02, R 2 = 0.61), indicating that recent SSC values are lower than in the past for a given flow, but the value of b had no trend. This was also true for non-event flows—significant decrease in â ( p = 0.001, R 2 = 0.84) but no trend for b . Neither â nor b had a significant temporal trend for events. Fig. 4 Ratios of suspended sediment concentration (SSC) and nutrient concentrations to river discharge for (top to bottom): non-event flows, event flows, event flows below the scour threshold. For each row, the left hand panel shows annual averages of SSC ratios, the middle panel shows 5-year average SSC ratios, and the right hand panel shows 5-year averages of particulate nitrogen (PN; red diamonds) and particulate phosphorus (PP; yellow squares) concentrations. Significant linear regression fits are shown; see Table S1 for associated statistical parameters Full size image Gaps in the SSC measurement record were filled using the corresponding rating curve for the year and flow conditions; daily sediment loads were calculated by the product of SSC and river discharge, then summed for each 1- and 5-year interval. Total annual sediment loads (all flows) varied from 0.22 × 10 6 t (2001) to 11.5 × 10 6 t (2011). For individual years, event flows contributed between 12% (2009) and 98% (2011) of the total load, with an average contribution of 62.0 ± 23.7%. Five-year total sediment loads (all flows) varied from 2.1 × 10 6 t (2013–2017) to 14.5 × 10 6 t (2008–2012), with an average event contribution of 72.3 ± 17.5%. For comparison, event flows occurred on roughly 10% of the days from 1978 to 2017 (1338 days out of 14,610 total days in the record). 
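To make the rating-curve workflow concrete, a minimal sketch of fitting Eq. (1) and converting gap-filled SSC to daily loads is given below. This is our own illustrative Python, not the study's code; separate event and non-event fits follow by filtering the input arrays at the 2446.5 m3/s threshold.

```python
import numpy as np

EVENT_Q = 2446.5  # m^3/s; first crest gate opening, separates event/non-event flows

def fit_rating_curve(Q, C, Q_gm):
    """Fit Eq. (1), C = a_hat * (Q/Q_gm)^b, by linear regression in log space.
    Q: daily discharge (m^3/s); C: matched SSC (mg/L); Q_gm: geometric mean of
    all discharges in the record (uniform across time intervals)."""
    x = np.log(np.asarray(Q) / Q_gm)
    y = np.log(np.asarray(C))
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b            # a_hat (mg/L), b (dimensionless)

def daily_load_tonnes(Q, a_hat, b, Q_gm):
    """Daily sediment load (t/day) via the rating curve. With C in mg/L and Q in
    m^3/s, C*Q gives g/s; x 86400 s/day and / 1e6 g/t yields tonnes per day."""
    C = a_hat * (np.asarray(Q) / Q_gm) ** b
    return C * np.asarray(Q) * 86400.0 / 1.0e6
```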
Particulate Phosphorus (PP) and Nitrogen (PN) Observed daily PP concentrations varied from 0.002 mg/L (2 Sep 1992) to 2.3 mg/L (8 Sep 2011), with highest values occurring during large scour events. Annual average PP varied from 0.02 ± 0.02 mg/L (1997) to 0.22 ± 0.46 mg/L (2011), with notably high values in 2004 (0.12 ± 0.3 mg/L) and 1981 (0.13 ± 0.11 mg/L). Correspondingly, 5-year average PP was highest in 1978–1982 (0.08 ± 0.09 mg/L) and 2008–2012 (0.09 ± 0.03 mg/L). However, the highest annual-average ratio of PP concentrations to river discharge occurred in 1998, which did not include a scour event, and the lowest annual-average ratio was in 1996, which did include a scour event. Annual-average ratios decreased for all flows and non-event flows (Table S1 ). For event flows, these ratios decreased significantly for the first ~ 10 years (1979–1987; p = 0.04, R 2 = 0.53; note that PP observations were not made during events in 1978 or 1982), then increased significantly for the rest of the record (1988–2017; p < 0.001, R 2 = 0.34). Ratios averaged over 5 years showed no consistent temporal trends for all flows but significantly decreased for non-event flows. For event flows, 5-year average ratios significantly increased ( p = 0.06, R 2 = 0.55) but only after 1982, which had a relatively high ratio. Observed PP concentrations varied widely with river discharge, regardless of flow condition, limiting confidence in regression models. Instead, daily values were obtained from the WRTDS model (Zhang et al. 2015 ), which accounts for PP variability with both time and discharge; these data were available only prior to April 2013, excluding the 2013–2017 time period from further consideration. The total annual PP load was highest in 2011 (15.1 × 10 3 t) and lowest in 2001 (0.63 × 10 3 t). Event contributions varied from 8.4% (2009) to 92.9% (2011), averaging 50.2 ± 0.23% on the annual scale. Averaged over 5 years, the average event contribution was 61.0 ± 15.1%. The 5-year total PP load was highest for 2008–2012 (20.9 × 10 3 t) and lowest for 1998–2002 (5.4 × 10 3 t). There were no obvious temporal patterns for the 1-year total PP loads; 5-year total loads increased for all flows and event flows but had no consistent trend for non-event flows. Observed daily PN concentrations varied from 0.001 mg/L (20 May 1997, 2 Apr 2003) to 7.6 mg/L (8 Sep 2011). Annual-average PN varied from 0.10 ± 0.05 mg/L (2009) to 0.70 ± 1.5 mg/L (2011) and showed a clear decrease over time, even including an anomalously high value in 2011. Annual-average PN also decreased over time for non-event flows but showed no consistent temporal trend for event flows, whether or not flows above the scour threshold were included. The 5-year average PN concentration significantly decreased for all flows and non-event flows. Five-year average PN significantly decreased for event flows but only when 2008–2012 was excluded ( p < 0.001, R 2 = 0.94); this was also true for event flows below the scour threshold ( p < 0.001, R 2 = 0.89). The annual average ratio of PN to river discharge significantly decreased over time for all flows and non-event flows but showed no evident temporal patterns for event flows, even after excluding large scour events as potential outliers. The 5-year average ratio had no consistent trends for all flows but declined significantly for non-events. Five-year average ratios decreased for events but only for those flows below the scour threshold. 
Only 14 PN observations were made during flows above the scour threshold. Like PP, the wide variability of PN with river discharge precluded robust rating-curve calculations, and PN loads were calculated with WRTDS-derived concentrations (Zhang et al. 2015 ). The annual total PN load varied from 3.1 × 10 3 t in 1979 to 38.6 × 10 3 t in 2011. While there was no consistent linear trend over time, annual loads significantly increased from 1978 to 1996 ( p = 0.01, R 2 = 0.32), had similar and low values until 2001, increased to 2004, then decreased to 2012. The post-2004 decrease is statistically significant if 2011 is excluded ( p = 0.03, R 2 = 0.65). Events contributed an average of 41.8 ± 21.9% to the annual loads, highest in 2011 (90.7%) and lowest in 2008 (8.3%), with no apparent temporal trends. There was a significant increase in the 5-year total loads, with an average event contribution of 49.4 ± 16.8%. Event contributions generally increased over time, except for relatively low contributions in 1988–1992 and 1998–2002. Particle Settling Velocities The range of flows sampled for the direct settling velocity measurements was small, between 3030 and 4842 m 3 /s. The range of suspended sediment concentrations (SSC) was larger, between 11 and 118 mg/L. While there was a tendency for SSC to increase with increasing flow, there was significant variability in SSC values at very similar flows, revealing the myriad other factors that affect instantaneous SSC. Mass fractions in different settling speed categories varied only slightly across all dates and locations sampled. There was a tendency for the fraction in the slowest settling category to decrease with increasing flow, accompanied by a slight increase in the mass fractions of the middle two settling categories. For the range of flows sampled, approximately 70% of the SSC settled slower than 0.01 mm/s, 25% settled between 0.01 and 0.2 mm/s, 4% settled between 0.2 and 2 mm/s, and 1% settled faster than 2 mm/s. Mass fraction estimates in different settling speed categories based on disaggregated particle sizes from simultaneous USGS samples were consistent with both Owen tube methods, tending to split any differences between them (Fig. S4 ). The settling velocity estimates using all three techniques were statistically indistinguishable, indicating that settling velocity estimates based on USGS disaggregated particle sizes under a broad range of flows are likely representative of actual settling velocity distributions. This in turn implies that particles passing through or over the Dam face are effectively disaggregated by the energetic turbulent flow conditions found there. Based on the good agreement between our Owen tube settling experiments and the disaggregated particle size estimates of settling velocity at Conowingo, we used USGS National Water Information System (NWIS) data from Conowingo covering a much broader range of flows and dates to extend our analysis. Over the entire record, sampled flows ranged from 419 to 16,774 m 3 s −1 , SSC from 13 to 2980 mg/L, and settling velocity distributions from almost entirely in the slowest settling category to more evenly distributed across categories at very high flows. Figure 5 summarizes observed changes in settling velocity distributions with increasing flow. Linear fits to the mass fraction in each category sum to 1 across all flows, as required to conserve mass. 
The trends of decreasing mass fraction with increasing flow in the slowest settling category and increasing mass fraction with flow in all other categories are much more apparent here than in our direct settling velocity measurements, primarily because the USGS particle-size data cover a much greater range of flows. Fig. 5 Mass fractions in different settling velocity classes as a function of flow speed, from USGS particle size observations at Conowingo Dam. The linear least-squares fits for mass fraction in the different settling velocity classes sum to one across all river flows Full size image Denoting the linear fit for each settling class as f_i (classes ordered from slowest to fastest settling, with Flow in m 3 /s):

$$\begin{aligned} f_1 &= -1.938 \times 10^{-5}\,\mathrm{Flow} + 0.867 \\ f_2 &= 0.997 \times 10^{-5}\,\mathrm{Flow} + 0.116 \\ f_3 &= 0.773 \times 10^{-5}\,\mathrm{Flow} + 0.014 \\ f_4 &= 0.168 \times 10^{-5}\,\mathrm{Flow} + 0.003 \end{aligned} \qquad (2)$$

These flow-dependent changes in mass fraction within different settling velocity classes were used directly in the numerical model to more accurately simulate suspended sediment input characteristics across a wide range of river flows. Fate of Sediment in the Estuary Sedimentology Mud content of surficial sediments (uppermost 1 cm in cores) in the upper Bay generally increased downstream during the low-flow years 2015 ( p = 0.10; R 2 = 0.71) and 2016 ( p > 0.10). In contrast, mud content decreased with distance downstream after TS Lee (Fig. 6a ). Correspondingly, sediment at the most upstream site (Lee7; ~ 20 km from the Susquehanna River mouth; see Fig. 3 ) was much coarser after non-event flows (~ 60–70% mud) than after TS Lee (~ 90% mud), and sediment at the most downstream site (LeeS2; ~ 120 km from the Susquehanna River mouth) was much finer after non-event flows (nearly 100% mud) than after TS Lee (~ 85% mud). Averaged across sites sampled in all years ( n = 4), average mud content was higher after TS Lee (89.9 ± 4.0%) than after non-event flows (lowest in 2015; 84.2 ± 14.1%), but the difference was not statistically significant. 7 Be inventories at each site indicate the amount of watershed sediment deposited within the previous 77–250 days (see “Methods”). While average sedimentation rates can be calculated by extrapolating the inventory over this time period, sediment likely was delivered relatively quickly after TS Lee but more gradually under non-event conditions. Inventories during 2015 and 2016 were highest at Lee2.5 (~ 55 km from the Susquehanna River mouth; Fig. 6b ), but the maximum inventory after TS Lee occurred at Lee7 and decreased linearly downstream ( R 2 = 0.78, p = 0.005). For the sites sampled in all years ( n = 4), the average 7 Be inventory was highest after TS Lee (2.6 ± 2.1 dpm/cm 2 ) and lowest in 2016 (0.9 ± 0.7 dpm/cm 2 ); this difference was statistically significant ( p = 0.04; paired t test). Thus, the most obvious differences between non-event and event flows occurred at the most up- and downstream sites, with much more deposition and finer sediments upstream, and less deposition and coarser sediments downstream, after TS Lee. Fig. 6 a Mud content of surficial sediments and b total 7 Be inventories at sites in the upper Chesapeake Bay plotted versus distance downstream from Lee7 (arbitrarily set at 5 km) for 3 different time periods denoted by differing symbology Full size image Over longer, decadal time scales, sediment accumulation rates were variable throughout the upper Bay, from 0.26 cm/year at LeeS2 to 1.2 cm/year at Lee5 (~ 30 km from the Susquehanna River mouth; Russ and Palinkas in review; Russ 2019 ).
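Returning to the settling-velocity fits in Eq. (2): in a transport model these are evaluated at each time step to partition the fluvial sediment input among settling classes. The minimal sketch below is ours; in particular, the clipping/renormalization guard against extrapolation beyond the fitted flow range is an added assumption, not part of the original model.

```python
import numpy as np

# Slopes and intercepts from Eq. (2); classes ordered slowest to fastest settling
# (<0.01, 0.01-0.2, 0.2-2, >2 mm/s). The slopes sum to 0 and the intercepts to 1,
# so the raw fractions conserve mass within the fitted flow range.
SLOPES = np.array([-1.938, 0.997, 0.773, 0.168]) * 1e-5
INTERCEPTS = np.array([0.867, 0.116, 0.014, 0.003])

def settling_fractions(flow_m3s):
    """Mass fraction of suspended sediment in each settling-velocity class."""
    f = SLOPES * flow_m3s + INTERCEPTS
    f = np.clip(f, 0.0, 1.0)   # guard against unphysical values at extreme flows
    return f / f.sum()         # renormalize so the fractions still sum to 1

# Example at a hypothetical moderate event flow of 8,000 m3/s: most mass remains
# in the slowest class, but the coarser classes grow with flow.
print(settling_fractions(8000.0))
```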
As for the core chronologies themselves: a precise accumulation rate could not be calculated for Lee2, because the regression fit required by the CFCS model was not statistically significant; instead, a minimum rate of 0.81 cm/year was determined by noting the presence of excess 210 Pb, which persists for only ~ 100 years, at the base of the core (81 cm / ~ 100 years ≈ 0.81 cm/year). At all other sites, regression models were statistically significant, implying dominance of steady-state sedimentation, rather than event sedimentation, over longer time scales. Down-core grain-size profiles were generally uniform, supporting the interpretation of steady-state sedimentation and also indicating that no major changes in sediment character have occurred in the upper Bay over the last ~ 100 years. Transport Modeling The COAWST model simulated sediment dynamics in Chesapeake Bay between 1 May 2015 and 30 June 2016, a ~ 1-year period that includes the time of field observations (August 2015, April 2016). The lack of event flows during this period, and the absence of major storms, resulted in trapping of most suspended sediments within the upper Bay around the ETM zone (Fig. 7 ). Nevertheless, there were clearly seasonal differences in SSC, with elevated SSC (10 mg/L) in the upper Bay during the 2016 spring freshet, as well as transport of small amounts of clay and silt to the mid-Bay. Ultimately, most fluvial sediments were deposited in the Susquehanna Flats, with thicknesses < 1 cm per month (Fig. 8 ). Fig. 7 Along-channel distribution of monthly averaged suspended sediment concentration under normal flow conditions experienced during the field campaigns Full size image Fig. 8 Monthly average of thickness of sediment deposits (in cm) in the upper Bay for (left to right) July 2015, January 2016, and June 2016 Full size image Event conditions were simulated with hindcasts of TS Lee (2011) and Hurricane Ivan (2004), two recent flood events that had the largest impacts on sediment loading into Chesapeake Bay. While previous work with TS Lee assumed fixed percentages of clay, silt, and sand (40%, 50%, and 10%, respectively; Cheng et al. 2013 ) for Susquehanna River sediment, this hindcast of TS Lee used flow-dependent percentages for the sediment classes from the observations (i.e., Fig. 5 and Table 1 ). While the total amount of flood-discharged sediment was unchanged (6.7 × 10 6 t), there were major differences in the size distribution of fluvial sediments. In Cheng et al. ( 2013 ), only 0.6 × 10 6 t of sand was discharged to the Bay versus 6.1 × 10 6 t of clay and fine silt. In contrast, the new model showed discharge of 1.5 × 10 6 t of coarse silt and sand, 2 × 10 6 t of fine silt, and 3.2 × 10 6 t of clay (Fig. S5 ). At these extreme river flows, larger amounts of coarser sediment components were scoured from the Reservoir bed and delivered to the Bay. The sediment plume after Hurricane Ivan showed a temporal evolution similar to that observed during TS Lee (not shown). Depositional patterns were also similar to TS Lee: most coarse silts and sands were deposited in the upper reaches of the Susquehanna Flats with a maximum thickness of 3 cm; fine silts were deposited everywhere in the upper Bay but with highest deposition (1 cm) in the Susquehanna Flats; clays were widely dispersed in the upper and mid-Bay regions with a thickness < 0.3 cm (Fig. S5 ). Although the processes of sediment transport and deposition were quite similar between Hurricane Ivan and TS Lee, the magnitude of sediment flux and deposition was quite different, responding non-linearly to Susquehanna River discharge.
Peak discharge during TS Lee was ~ 1.5 times higher than during Hurricane Ivan (2.2 × 10 4 m 3 /s for Lee; 1.5 × 10 4 m 3 /s for Ivan), but the total sediment load delivered to the Bay was ~ 4.5 times higher after TS Lee than Hurricane Ivan (6.7 × 10 6 t for Lee; 1.5 × 10 6 t during Ivan) (Fig. S6 ). While part of this difference might be related to the longer flood duration for TS Lee, a more likely explanation is the nonlinearity of the loading curve. Sediment Biogeochemistry and Exchange with Water Column Sediment-Water Fluxes in the Reservoir and Estuary The overall rates of sediment oxygen uptake (Fig. 9 ) within the three lower Susquehanna River Reservoirs and the upper Chesapeake Bay were compared in spring 2016. Because temperatures were changing rapidly, all rates were adjusted to 20 °C following Schnoor ( 1996 ). There were no significant differences between the stations within the Reservoir or between the Reservoir and the nearby upper Bay (Fig. 9 ). The sediment-water exchange of soluble reactive phosphorus was low and often directed into the sediment (Fig. S7 ). Overall, the dominant efflux of nitrogen was as N 2 -N, with ammonium showing highly variable rates and average nitrate plus nitrite (NO 23 − ) fluxes directed into the sediment. In May of 2015, sediment-water NH 4 + effluxes and NO 23 − influxes were elevated, but rates measured at all other times of year were small (Fig. S7 ). Fig. 9 Sediment oxygen demand as a function of latitude, including upper Bay sediments and all 3 reservoirs in the lower Susquehanna River. The data were all collected in April 2016, with temperatures of 9.6 °C in Conowingo Reservoir, 14.5 °C in the upper Chesapeake Bay, and 18 °C in Lake Clarke and Lake Aldred. All plotted data were adjusted to 20 °C following Schnoor ( 1996 ) to allow a more direct comparison. The error bars represent standard deviations, and there was no significant difference among the segments Full size image Characterization of Sediment Composition and Reactivity Rates of anaerobic ammonium production were generally low, with higher rates from fluvial sediments collected at the Dam outflow than observed from a survey of 13 sediment stations. Anaerobic nitrogen remineralization rates were determined from sediment and suspended sediments using time course incubations (Fig. 10 ). Observations from surficial sediments (0–2 cm depth) averaged 15% of rates from the water column. Long-term incubations of deeper sediments suggested extremely low rates of N remineralization, with the low rates making measurements difficult, even over the course of > 180 days of incubation. These rates reflect a mix of terrestrial and algal organic matter inputs and suggest that surficial sediments quickly lose much of their reactivity after deposition. Fig. 10 Plot of ammonium concentrations on a dry mass basis for 16 anaerobic sediment incubations from May 2015 and 7 anaerobic water-column incubations from February 2016. The error bars are standard errors, and the slopes are 0.05 and 0.223 μmol g −1 day −1 for the sediment and water-column incubations, respectively Full size image Biogeochemical Modeling Biogeochemical model simulations and diagenesis experiments indicated that depositing organic material in the Conowingo Reservoir has moderate/low reactivity relative to phytoplankton-derived organic material (26% G1, 20% G2, 54% G3 in the Reservoir versus 65% G1, 20% G2, 15% G3 for phytoplankton) and that 94% of the sediments that accumulate in the Reservoir were refractory.
Consequently, sediment-water fluxes of dissolved inorganic nitrogen and phosphorus were low and contributed a small fraction (< 0.1%) of the export flux of nitrogen and phosphorus from the Reservoir (Fig. S7 ). Using model-derived estimates of the P and N content of Reservoir sediment and assuming that scour could remove either the top 5 or 10 cm of sediment, the potential relative contribution of scoured reservoir sediments to this export flux during events is much higher for phosphorus than for nitrogen. For phosphorus, scouring bottom sediments to a depth of 10 cm would represent 131% of the annual TP export from the Reservoir, while for nitrogen, it would account for only 7.3%. This reservoir scour estimate is half of the TP load delivered during TS Lee, but only 12% of the TS Lee TN input. The potential biogeochemical impact of depositing scoured Reservoir sediments in the upper Bay was explored via numerical experiments. Specifically, the sensitivity of sediment-water fluxes to altered deposition rates and composition of depositing organic material (scoured Reservoir sediments versus more typical phytoplankton detritus) was tested (see “Methods”). While changes in deposition rates yielded expected proportional changes in sediment C:N:P content and sediment-water fluxes (Fig. 11 ), increased deposition rates only resulted in better representation of dissolved O 2 fluxes. In addition, while all three simulations represented sediment nitrogen content well (model = 0.19 %N, data = 0.23 ± 0.05 %N), they under-predicted %C (model = 1.9, data = 3.9 ± 1.3), and over-predicted %P (model = 0.14, data = 0.06 ± 0.02). We also compared three simulations with different formulations for the estimated organic matter deposition rates. In addition to the “Base” scenario, we deposited the same organic material as in the “Base” case but with reactivity fractions matching the Conowingo simulation; we also estimated organic-matter deposition rates from overlying-water chlorophyll-a, which has increased slightly over time (Fig. 11 ). These simulations revealed that dissolved O 2 and ammonium fluxes were better represented by the chlorophyll-a based deposition rates, but that phosphorus pools and sediment-water fluxes were overestimated (Fig. 11 ). Fig. 11 Modeled (lines) and/or observed (circles) diagenesis rates, sediment carbon/nutrient content, and sediment-water fluxes for the year 2015 under the baseline simulation (black lines; Brady et al. 2013 ), a simulation based upon the baseline POM deposition rates but with Conowingo Reservoir sediment-like material (red lines; G1 = 0.26, G2 = 0.2, G3 = 0.54), and deposition calculated from algal G fractions and observed water-column chlorophyll-a time-series (blue lines; G1 = 0.65, G2 = 0.2, G3 = 0.15) Full size image Synthesis of recent sediment-water flux measurements with historic observations indicates that sediment-water fluxes of nitrogen and phosphorus in the Conowingo Reservoir are similar to those in the upper Chesapeake Bay and are low relative to mesohaline Bay sediments. Measurements made in 2015 indicate that sediment-water P fluxes in the Conowingo Reservoir and upper Bay typically range from low rates of net uptake (~ −15 μmol P m −2 h −1 ) to low levels of net release (~ 2–15 μmol P m −2 h −1 ), compared to warm-season maxima of 30–90 μmol P m −2 h −1 at low-oxygen, mid-Bay stations (Testa et al. 2013 ).
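These flux comparisons rest on normalizing rates measured at different temperatures to a common 20 °C, as in Fig. 9 (following Schnoor 1996). This is commonly implemented with an Arrhenius-type θ model; the sketch below assumes the conventional form and a typical θ ≈ 1.065 for sediment oxygen demand, since the specific θ value used in the study is not quoted here.

```python
def rate_at_20C(rate_T, temp_C, theta=1.065):
    """Normalize a measured rate to 20 °C: R20 = R_T * theta**(20 - T).

    theta = 1.065 is a typical literature value for sediment oxygen demand;
    the appropriate value is process-specific (our assumption, not from the paper).
    """
    return rate_T * theta ** (20.0 - temp_C)

# Example: a hypothetical rate measured at 9.6 °C in Conowingo Reservoir is
# scaled up for comparison with warmer upper-Bay measurements.
print(rate_at_20C(500.0, 9.6))   # units follow the input, e.g. umol O2 m-2 h-1
```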
Ammonium fluxes show a similar pattern: recently observed fluxes range from 0 to 600 μmol N m −2 h −1 in the Conowingo and upper Bay (with the majority of fluxes < 200 μmol N m −2 h −1 ), compared to rates consistently in the range of 300–800 μmol N m −2 h −1 at low-oxygen, mid-Bay stations (Brady et al. 2013 ). While the low N and P fluxes typical of upper Bay and Reservoir sediments have been measured in shallower, well-oxygenated sediments, these sediment-water flux rates are elevated in adjacent deeper sediments, which cover a limited area in the upper Chesapeake Bay (Boynton and Rohland 1998 ). Clearly, despite consistently high deposition rates in these low-salinity regions of Chesapeake Bay and the upstream Conowingo Reservoir (see above), the sediments in these habitats efficiently retain nutrients. Discussion This paper synthesizes data from a variety of methods, each of which has its own limitations and uncertainties (see “Methods” and Fig. 1 ). However, all of these data resulted from a coordinated interdisciplinary study, such that observations and model results represent similar spatial and temporal scales. For example, sedimentological and biogeochemical field sampling was simultaneous in the upper Bay, and transport and biogeochemical modeling covers the period of these observations. Additional transport-model runs for 2004 and 2011 captured event conditions that were not present during the field study. The 2011 model run is the same as Cheng et al. ( 2013 ), but with updated parameterization provided by the particle-settling velocity experiments, and is compared to field observations of the same event by Palinkas et al. ( 2014 ). Because we were interested in changes over time, we obtained river discharge and suspended-constituent data for as many years as possible. We chose to run these analyses through 2017 to have 4 full decades of data. The sediment biogeochemical model was run from 1985 to 2015 to validate the model against observations made at various times over the past three decades; analysis of model output focuses on 2015, when most contemporary observations were made. As such, the dataset for this paper is unique in its spatial and temporal scope, capturing physical and biogeochemical processes along the entire river-estuary continuum. In particular, these data reveal that the character and magnitude of particulate dynamics throughout the Susquehanna River-upper Chesapeake Bay continuum are quite different for non-event and event flows. Thus, it is useful to consider the two conditions separately. Non-event Flows Non-event flows occur most of the time, ~ 90% of the days since 1978, and thus represent “every-day” conditions. In the reservoir, sediment deposition is driven by a balance of sediment supply and physical energy, following expectations from most river-delta systems and reservoirs around the world (Alexander et al. 1991 ; Pirmez et al. 1998 ; Palinkas 2009 ). In particular, most sediment delivered at the upstream reservoir boundary remains in suspension due to higher physical energy and is transported to the more quiescent middle region, where it can rapidly deposit. Near the downstream boundary, sediment supply has been depleted and the energy is likely higher due to the Dam turbines, inhibiting deposition. Suspended sediment and attached nutrients are thus transported over the Dam and are largely composed of watershed material, since flows are typically well below any estimates of the scour threshold.
Suspended-sediment, and attached phosphorus and nitrogen, loads delivered to the Bay past Conowingo have declined since 1978 for equivalent non-event river flows. This decrease occurred even though river discharge on the same subset of days increased and likely reflects impacts of BMPs in the watershed. For the Susquehanna, most trend analyses of loads since the 1980s indicate increasing loading of particulate forms of N and P (Hirsch 2012 ; Zhang et al. 2013 ), but these studies include all river flows together. For the low, non-event flows that occur most of the time, our analyses show just the opposite—decreasing loads over time that likely reflect efforts in the watershed to reduce these loads through BMP installation. Event Flows Event flows are stochastic and have significant, non-linear effects on river discharge, sediment dynamics, and geochemistry. In the Reservoir, event flows redistribute sediment, eroding temporary stores from channels and the mid-Reservoir region and transporting them downstream near the Dam, which facilitates net accumulation by its physical presence, and over the Dam into the upper Bay. Flows that exceed the scour threshold are particularly effective at delivering large amounts of Reservoir sediment to the upper Bay, and there is currently much interest in defining the threshold value. The often-cited 400,000 cfs (11,326.7 m 3 /s) value originated from Gross et al. ( 1978 ), cited by Lang ( 1982 ), and was based on a 1-year comparison of sediment loads at Harrisburg (PA, upstream of the Marietta gauge) and Conowingo, assuming that the threshold occurs when loads at Harrisburg are lower than at Conowingo. This comparison necessarily assumed no sediment inputs/outputs between these two gauges, ignoring several small tributaries and, perhaps more importantly, the two reservoirs upstream of Conowingo. More recent work suggests that the scour threshold has decreased with Reservoir infill and now could be as low as 175,000 cfs (4955.4 m 3 /s; Hirsch 2012 ). Our analyses of event flows between the typical opening of the first flood gate at 86,400 cfs (2446.5 m 3 /s) and 400,000 cfs showed increasing amounts of suspended materials for an equivalent river discharge over time, consistent with a decreasing scour threshold. However, there is another effect of reservoir infill that has received much less attention—decreasing deposition of watershed sediment as it passes through the reservoir. Deposition decreases as the reservoir infills because the reduced cross-sectional area increases flow speed and bottom stress, which keeps sediment in suspension. Both a lower scour threshold and decreasing deposition are likely active and drive the observed increase in suspended loads during moderately large flows. Event flows > 400,000 cfs occur infrequently (every ~ 5–7 years) and have few observations of SSC and particulate nutrients ( n ~ 15 for the entire 1978–2017 period), preventing robust trend analyses. Recent attention has been focused on the potential impacts on Chesapeake Bay of elevated particulate N and P inputs associated with more frequent scour events within the Conowingo Reservoir (e.g., Cerco 2016 ). Our synthesis suggests that the potential biogeochemical impacts of these elevated inputs are limited in time and space for several reasons.
First, despite the fact that scour events likely occur even more frequently than indicated by the 400,000 cfs scour threshold, model analyses of reservoir sediments suggest that a substantial scour event (top 5 cm of the entire reservoir) would contribute 20% of P loads in a TS Lee-like storm and only 6% of N loads. The scoured particulate N and P loads that do enter the Chesapeake Bay are also highly refractory (turnover time » 1 year). Second, particulate forms of N and P that enter Chesapeake Bay are efficiently retained in the upper Bay, especially near the Susquehanna River mouth, due to high sinking rates or trapping within the ETM (Sanford et al. 2001 ). Our finding that delivered particles coarsen, and their settling speeds increase, as flow rates increase further amplifies upper Bay sediment trapping. Third, the tidal fresh/oligohaline region where the majority of sediments deposit typically has low rates of sediment-water N and P fluxes, as a result of high rates of denitrification (Testa et al. 2013 ), effective phosphorus retention in iron-enriched, oxidized sediments (Hartzell et al. 2017 ), and low reactivity of the organic material (Fig. 10 ). Furthermore, any scoured material that is regenerated in the upper Bay enters a highly enriched water column that is rarely nutrient limited (Fisher et al. 1999 ). Consequently, model simulations of scour events within Conowingo Reservoir have only shown marginal impacts on dissolved oxygen (Cerco 2016 ). Over longer, decadal time scales, event sediments in this region are effectively redistributed such that their signal is not obvious in sediment cores. However, unlike non-event flows, event flows are capable of transporting fine sediment downstream of the ETM, as evidenced by model results and preservation of event-sediment signatures in cores. When sediment reaches the mid-Bay region, it encounters saltier, mesohaline waters that in the Chesapeake ecosystem are typically hypoxic or anoxic during summer (Testa and Kemp 2014 ). Low oxygen conditions, in combination with high concentrations of sulfate that eventually lead to sulfide accumulation, allow for high rates of sediment-water efflux of phosphorus and ammonium, especially during warm months (Cowan and Boynton 1996 ; Testa and Kemp 2012 ). While we do not have model simulations or measurements to track the potential relocation of particulate nutrients from their initial deposition in the upper Bay to more seaward waters, prior analysis in mesohaline Chesapeake Bay sediments has shown clear relationships between recently deposited chlorophyll-a and N and P fluxes (Cowan and Boynton 1996 ), indicating that local phytoplankton production drives these fluxes. It is important to keep in mind that, while events can deliver enormous amounts of sediment to the Bay, they occur infrequently (~ 10% of the time). Moreover, sediment deposition in the mesohaline region is relatively small in magnitude (e.g., only ~ 1 cm after TS Lee), minimizing potential impacts to Bay biogeochemistry. In fact, the Bay has been remarkably resilient to recent storm events. For example, SAV beds in the upper Bay experienced some erosion during TS Lee but were able to mostly withstand the event (Gurbisz et al. 2016 ). In the years following, most indicators of Bay health show improving water quality and expansion of SAV in low-salinity regions (Lefcheck et al. 2018 ; Testa et al. 2018 ; Zhang et al. 2018 ).
However, note that no other large events occurred after TS Lee until July 2018, when the Susquehanna River at Conowingo crested at 376,000 cfs (10,647.1 m 3 /s). Between those two events, the highest flow (except for 1 day in February 2013) occurred during the 2017 spring freshet, peaking at 177,870 cfs (5036.7 m 3 /s). This gap between large events likely aided the Bay’s recovery, similar to the string of dry years that likely aided recovery of SAV on the Susquehanna Flats (Gurbisz and Kemp 2014 ). Prior investigations into the impacts of reservoir construction on the transport of material to the coastal zone and the subsequent response have often differed from our discussion of the lower Susquehanna River reservoirs. While the primary problem identified with the infilling of Conowingo Reservoir is the potential increase in particulate nutrient inputs, other studies have often focused on dam impacts on nutrient load reductions. In part, this discrepancy reveals the contrast between reduced sediment trapping in mature reservoirs (Conowingo) versus increased sediment trapping in young reservoirs. For example, the once highly productive fisheries of Mediterranean waters near the outflow of the Nile River appeared to degrade after the construction of the Aswan Dam, which severely reduced riverine sediment and nutrient inputs (Nixon 2003 ), but fisheries recovered once anthropogenic nutrient inputs increased. The construction of the Three Gorges Dam on the Yangtze River reduced the silica to nitrogen ratio and nutrient inputs overall, which was associated with phytoplankton productivity declines in the East China Sea (Gong et al. 2006 ). Long-term reductions in the silica to nitrogen ratio have been described for other large rivers (e.g., Mississippi; Turner and Rabalais 1991 ), and in some cases, these altered ratios have been associated with reduced diatom productivity (e.g., Danube River and Northwestern Black Sea; Humborg et al. 1997 ). Thus, the nature of the history and geology of a given dam (age, trapping capacity) is critical to understanding its role in the productivity and biogeochemistry of receiving waters. Conclusions This study synthesized field observations, model results, and long-term monitoring data along the reservoir-estuary continuum to evaluate potential impacts of Conowingo Reservoir infilling on Chesapeake Bay biogeochemistry (see Fig. 1 ). Results show that, for equivalent river discharges, sediment loading has decreased during non-event flows but increased during event flows (question 1 in the Introduction). The potential biogeochemical impacts of these elevated inputs are limited, because scoured particulate nitrogen (N) and phosphorus (P) loads that do enter the Bay are highly refractory (turnover time » 1 year) and would contribute a relatively small fraction of loading in an extreme storm like Tropical Storm Lee (question 2). Also, these sediments are efficiently retained in the upper Bay due to high sinking rates or trapping in the ETM but can be transported downstream during events (question 3). Thus, while large precipitation and riverine flow events are significant and can generate a substantial short-term impact on receiving waters in Chesapeake Bay, the estuary is remarkably resilient to storms (question 4). This recovery potential is likely aided by long time lags between major events and an underlying improvement in watershed management that is evident during low flow periods.
The maturation of dams (i.e., infilling) over time shifts these constructed ecosystems from net nutrient and sediment sinks to net sources, reversing their effect on downstream waters. The Chesapeake Bay will be negatively influenced by continued infilling of reservoirs and the loss of an unintended watershed BMP, but the scale of the potential impact of elevated particulate nutrient inputs on the mainstem Chesapeake Bay is likely small compared to ongoing reductions in dissolved nitrogen and phosphorus in many regions of the watershed. | University of Maryland Center for Environmental Science researchers have completed a study on the impact of Conowingo Dam on water quality in Chesapeake Bay. Scientists synthesized field observations, model results, and long-term monitoring data to better understand the potential impacts of nutrient pollution associated with sediment transported from behind the Dam to the Bay. "This synthesis is important for bringing the best science to Bay management decisions by considering the entire Susquehanna-Conowingo-upper Bay system and integrating insights from several related studies," said Peter Goodwin, president of the University of Maryland Center for Environmental Science. "Since most rivers around the world are dammed, understanding potential impacts to adjacent estuaries is highly relevant to international scientific and management communities." Dams initially starve downstream ecosystems of both sediments and particulate nutrients by trapping them in upstream reservoirs. Eventually, however, these reservoirs fill, increasing the delivery of sediment and nutrients to downstream ecosystems, especially during storm events when stored sediments can be scoured. Since its construction in 1928, Conowingo Dam has trapped most of the Susquehanna River watershed sediment and associated particulate nitrogen and phosphorus before they enter Chesapeake Bay. However, its storage capacity has significantly decreased, raising questions about potential impacts to Bay ecosystems. Scientists found that most sediment and particulate nutrient impacts to the Bay occur during high-flow events, such as during major storms, which occur less than 10% of the time. Loads delivered to the upper Chesapeake Bay during low flows have decreased since the late 1970s, while loads during large storm events have increased. Most of these materials are retained within the upper Bay but some can be transported to the mid-Bay during major storm events, where their nutrients could become bioavailable. "While storm events can have major short-term impacts, the Bay is actually really resilient, which is remarkable," said the study's lead author Cindy Palinkas, associate professor at the University of Maryland Center for Environmental Science. "If we are doing all of the right things, it can handle the occasional big input of sediment." Sediment and particulate nutrient loads have decreased since the late 1970s for normal river flows and increased for storm flows. During non-event flows, most sediment delivered past Conowingo comes from the Susquehanna watershed. Sediment and attached nutrient loads have declined since 1978 (first complete year of monitoring data) for non-event river flows. This decrease reflects efforts to reduce watershed loads through BMP installation. During event flows, sediment and attached nutrient loads have increased over time, consistent with a decreasing scour threshold in the reservoir.
This is also consistent with decreased trapping of watershed sediment as it passes through the Reservoir. Both a lower scour threshold and decreased trapping probably drive the observed increase. The potential impacts of reservoir sediments on Bay water quality are limited due to the low reactivity of scoured material, which decreases the impact of total nutrient loading even in extreme storms. Most of this material would deposit in the low salinity waters of the upper Bay, where rates of nitrogen and phosphorus release from sediments into the water are low. However, event flows can transport fine reservoir sediment to the mid-Bay region, where waters are saltier and lower in oxygen during summer. These conditions could allow for higher rates of nutrient releases from sediments. Most sediments are deposited in the upper Bay, with transport to the mid-Bay possible only during storm events. Increased flows during major storm events can transport some material into the mid-Bay region, but these sediments are redistributed over longer time scales. While large events can have significant short-term impacts, the Bay is resilient over the long run due to ongoing restoration and time gaps between events. Major storm events can deliver enormous amounts of sediment to the Bay, but they occur infrequently (less than 10% of the days since 1978). Sediment delivery to the mid-Bay region, where waters are saltier and more conducive to nutrient releases from sediment, is relatively small in magnitude, minimizing potential impacts to Bay water quality. This synthesis project was supported by Maryland Sea Grant, the Grayce B. Kerr Fund, and Exelon through the Maryland Department of Natural Resources. | 10.1007/s12237-019-00634-x
Physics | Energy decay in graphene resonators | Johannes Güttinger et al. Energy-dependent path of dissipation in nanomechanical resonators, Nature Nanotechnology (2017). DOI: 10.1038/nnano.2017.86 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/nnano.2017.86 | https://phys.org/news/2017-05-energy-graphene-resonators.html | Abstract Energy decay plays a central role in a wide range of phenomena 1 , 2 , 3 , such as optical emission, nuclear fission, and dissipation in quantum systems. Energy decay is usually described as a system leaking energy irreversibly into an environmental bath. Here, we report on energy decay measurements in nanomechanical systems based on multilayer graphene that cannot be explained by the paradigm of a system directly coupled to a bath. As the energy of a vibrational mode freely decays, the rate of energy decay changes abruptly to a lower value. This finding can be explained by a model where the measured mode hybridizes with other modes of the resonator at high energy. Below a threshold energy, modes are decoupled, resulting in comparatively low decay rates and giant quality factors exceeding 1 million. Our work opens up new possibilities to manipulate vibrational states 4 , 5 , 6 , 7 , engineer hybrid states with mechanical modes at completely different frequencies, and study the collective motion of this highly tunable system. Main Energy decay is central in many fields of physics, including acoustics 1 , non-equilibrium thermodynamics 2 and the quantum mechanics of dissipative systems 3 . A dissipative system is typically assumed to be coupled to a given thermal environmental bath. The rate of the coupling is often considered to be independent of the system energy. Recently, nonlinear phenomena associated with dissipation have attracted considerable interest. These include measurements on radio-frequency superconducting resonators, where the energy-dependent decay rate is attributed to the saturation of two-level defect states 8 . In addition, measurements on nanomechanical resonators have been described by a phenomenological nonlinear decay process 9 , 10 , 11 , 12 , 13 . However, the physical mechanism responsible for the nonlinear dissipation remains elusive. Understanding the origin of dissipation in mechanical resonators is crucial 14 , 15 , 16 , 17 , 18 , because it is a key figure of merit for many applications, such as mass, force and spin sensing 19 , 20 , 21 , 22 . Here, we report energy decay measurements on mechanical resonators that are radically different from what is usually observed, and that allow us to reveal the nature of the nonlinear energy decay process. We first recall the way in which energy stored in a vibrating guitar string is dissipated. In essence, the sound coming from the guitar originates from the coupling of the vibrating string with the air in the room, which acts as the environmental bath. To increase the intensity of the sound, and to quickly tone the sound down, the string is coupled resonantly to some cavity modes of the guitar body. As the string and these cavity modes vibrate at similar frequencies, the vibrational energy is transferred efficiently. The coupling is said to be linear, in the sense that the coupling energy H c is proportional to the product of the vibration amplitude of the string, q 1 , and the vibration amplitude, q 2 , of the guitar body: H c ∝ q 1 q 2 . For strongly coupled modes, hybridization occurs, and the two modes decay in unison with an average decay rate ( γ 1 + γ 2 )/2.
However, when the coupling is weak, the two modes are decoupled and decay with rates γ 1 and γ 2 , respectively. In this Letter, we consider the case where the coupling between modes is nonlinear, and we show that this results in nonlinear energy decay processes in graphene resonators. Nonlinear couplings have the peculiarity of enabling energy transfer between vibrational modes even if resonant frequencies are far apart. For this to happen, the ratio of resonant frequencies ω 2 / ω 1 has to be close to an integer n (refs 23 , 24 ). Although the modes are in the classical regime, this nonlinear mode coupling can be easily understood by considering the energy ladders of harmonic oscillators. Figure 1a illustrates an energy conserving process (here n = 3) annihilating simultaneously three quanta in mode 1, while creating one quantum in mode 2. The lowest-order nonlinear coupling term in the Hamiltonian that can achieve this exchange is H c ∝ q 1 3 q 2 (the cube of the mode-1 displacement multiplied by the mode-2 displacement). Nonlinear coupling is distinctive in that the coupling strength is energy dependent. At high vibrational amplitude, the modes hybridize, and decay in unison. At low amplitude, the coupling strength is weak, so that the two modes are decoupled and decay with rates γ 1 and γ 2 , respectively ( Fig. 1b ). As a result, the flow of energy takes different paths during an energy decay measurement of mode 1. At high amplitude, the energy of the vibrations is transferred into the bath of mode 1 directly, and into the vibrations and the bath of mode 2. At low amplitude, the dissipation channel to the bath of mode 2 is closed. This abrupt variation of the mechanical decay has, to our knowledge, neither been predicted nor measured. The associated nonlinear energy decay process is generic and should be relevant to a large variety of systems, such as mechanical, optical and electrical resonators. Figure 1: Nonlinear energy decay process, mode hybridization and graphene resonator. a , Energy diagram showing an energy exchange process between two harmonic oscillators. In the top part, we depict the flow of energy when driving mode 1 strongly. b , Energy decay measured at the frequency of mode 1. Because of the nonlinearity, the effective coupling strength depends on energy. At high energy, the modes are hybridized and decay in unison. As energy decays, the modes decouple, resulting in a change of the decay rate for mode 1. c , Measurement set-up with schematic cross-section of a circular graphene drum vibrating as cos ( ω 1 t + ϕ ), where ϕ is the phase relative to the capacitive driving force. The motion is detected with the superconducting microwave cavity. After cryogenic amplification, the output signal of the cavity is recorded and digitally computed to obtain the energy and the frequency of the vibrations as a function of time. The application of the static gate voltage V g between the graphene drum and the cavity electrode allows the mechanical resonance frequencies to be tuned; the oscillating voltage V d is used to drive the resonator. d , Frequency of the fundamental mode as a function of gate voltage V g . Full size image Graphene-based resonators are well suited for observing such a nonlinear energy decay process ( Fig. 1c,d ). Indeed, their resonant frequencies can be widely tuned by electrostatic means 6 , 7 , 9 , 25 , 26 , 27 , 28 , allowing us to set the frequency ratio of two modes to an integer.
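To see why a coupling of this form exchanges quanta three-for-one, it helps to write the displacements in terms of raising and lowering operators; the algebra below is standard harmonic-oscillator bookkeeping added here as illustration, not reproduced from the paper:

$$ q_i \propto a_i + a_i^{\dagger} \quad\Rightarrow\quad q_1^{3}\, q_2 \;\supset\; a_1^{3} a_2^{\dagger} + \big(a_1^{\dagger}\big)^{3} a_2 . $$

When ω 2 ≈ 3 ω 1 , these two terms conserve energy (three quanta of mode 1 exchanged for one quantum of mode 2) and survive time-averaging, while all other terms generated by the product oscillate rapidly and average out (the rotating-wave approximation).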
Energy decay measurements are carried out by preparing the resonator in an out-of-equilibrium state using a capacitive driving force, then switching the drive off and measuring the vibrational amplitude as the mechanical energy freely decays ( Fig. 2a ). A circular graphene mechanical resonator is capacitively coupled to a superconducting microwave cavity to detect the mechanical vibrations with a short time resolution and a high displacement sensitivity, over a broad range of vibrational amplitudes 29 , 30 , 31 , 32 . The time resolution is limited by the inverse of the coupling rate of the cavity to the external readout circuit, which is of the order of κ ext /2π ≈ 1 MHz in our devices. High displacement sensitivity with minimal heating is demonstrated by resolving thermal motion at about 50 mK, corresponding to ≈25 quanta of vibrations. The displacement sensitivity can be further improved using a near quantum-limited Josephson parametric amplifier (JPA) for the readout of devices with comparatively low signal output 33 . We typically record vibrations with amplitudes ranging from 1 pm to 1 nm. Using a time-resolved acquisition scheme combined with real-time digital signal processing (dashed box in Fig. 1c ), we record the two quadratures of motion, which are digitally squared to compute the vibrational energy. Energy decay traces are obtained by averaging typically 1,000 measurements and subtracting a time-independent noise background that is related to the amplifier chain. Decay traces are displayed as vibration amplitude versus time to make contact with other experiments. All the data presented here are taken at 15 mK. See Supplementary Sections 1–5 for further details on the devices, the measurement set-up and the displacement calibration. Figure 2: Energy decay measurements of a graphene resonator with a Q -factor of 1 million in the low-vibrational-amplitude regime. a , Measurement principle. At time t = 0, the mechanical driving force is switched off and the vibrational amplitude starts to decay. b , Measured energy decay of the vibrational amplitude of device I as a function of time for different drive frequencies (see colours in inset). The graphene membrane is 5 to 6 layers thick, as characterized by optical contrast measurements ( Supplementary Section 4 ). The lower-amplitude traces are shifted in time so all decaying curves overlap. The dashed grey line indicates an exponential decay corresponding to a Q -factor of 1 million. The inset shows the quality factor as a function of drive frequency. We apply V g = 0.6 V and pump the cavity with n p ≈ 1,000 photons. Full size image In the low-amplitude regime, this measurement scheme is beneficial as it allows us to observe record-high quality factors ( Fig. 2b ). The measured vibration amplitude of the fundamental mode at frequency ω 1 decays exponentially in time as exp(−γ 1 t /2), with an energy decay rate γ 1 ≈ 1/(3.6 ms). This corresponds to a quality factor Q exceeding 1 million, surpassing previously reported Q -factors in graphene 9 , 29 , 31 . By collecting energy decay traces using different drive frequencies near ω 1 , we show that the Q -factor is independent of the drive frequency and the vibrational amplitude at the beginning of the ring-down ( Fig. 2b ). One reason for the observation of such high Q -factors is that our technique is immune to dephasing, in contrast to previous spectral measurements on graphene resonators.
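The acquisition scheme just described — recording the two quadratures, squaring them, averaging many ring-downs, and subtracting the amplifier noise background — can be summarized in a few lines. The array names, shapes, and the tail-based background estimate below are our assumptions for illustration:

```python
import numpy as np

def ringdown_energy(I, Q):
    """Average energy-decay trace from repeated ring-down records.

    I, Q: arrays of shape (n_ringdowns, n_samples) holding the two demodulated
    quadratures of motion (hypothetical data layout).
    """
    energy = I**2 + Q**2            # instantaneous energy ~ squared amplitude
    avg = energy.mean(axis=0)       # average ~1,000 ring-downs to beat down noise
    background = avg[-100:].mean()  # time-independent amplifier-noise floor,
                                    # estimated here from the trace tail (assumption)
    return avg - background

# An energy decay rate follows from fitting the trace to exp(-gamma_1 * t);
# plotting sqrt(energy) recovers the amplitude decay exp(-gamma_1 * t / 2).
```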
Indeed, comparing energy decay measurements with spectral thermal-motion measurements reveals that dephasing is significant, as the resonance linewidth is more than twice as large as the energy decay rate ( Supplementary Section 6 ). In the high-amplitude regime, our high-precision measurements reveal a crossover in the energy decay rate during the ring-down ( Fig. 3a,b ). This finding is robust, since it is observed in all the studied resonators. Changing the amplitude and frequency of the initial driving force does not affect the rates and amplitudes associated with the crossovers. However, what strongly affects the energy decay traces is the static voltage V g applied between the graphene resonator and the cavity ( Fig. 3a,b ). This is a first indication that the crossover of the decay is related to nonlinear mode coupling, since the variation of V g strongly modifies the different resonant frequencies. To ensure that all the vibrational energy during the decay is properly captured, we use signal filtering with large bandwidth. This is because the frequency changes during the decay ( Fig. 3c ). Comparing this smooth frequency change with the vibrational amplitude decay yields the expected quadratic amplitude dependence of the frequency, which arises from the nonlinear Duffing restoring force 34 ( Fig. 3d and Supplementary Section 7 ). Figure 3: Energy decay in the high-vibrational-amplitude regime. a , b , Energy decay measurements at different V g for three different devices (labeled I, II and III). In all cases, the decay rate changes abruptly. The bandwidth of the bandpass filter in a is 150 kHz for the violet and red traces and 200 kHz for the blue trace. The dashed lines in a indicate the slopes in the low and high amplitude regimes. The amplitudes of the violet and blue traces are multiplied by 4 and 1/4, respectively. In b , the bandwidth is 400 kHz and 200 kHz for devices II and III, respectively. c , Time dependence of the short-time Fourier transform of the vibrations corresponding to the red amplitude decay trace in a . d , Frequency shift as a function of vibrational amplitude. The quadratic dependence (red line) is in agreement with the frequency pulling expected from the nonlinear restoring force at low vibration amplitude. Full size image We characterize mode coupling by measuring the response of the fundamental mode to the driving force. For this, we tune V g so the frequency ω 1 of the fundamental mode is about one third the frequency ω 2 of a higher-order mode, that is, ω 1 /2π = 44.132 MHz and ω 2 /2π = 132.25 MHz. For intermediate drive amplitudes V d , the response of mode 1 is that of a Duffing resonator with a softening nonlinearity ( Fig. 4a ). However, as driving increases, we observe a saturation of the frequency at which the high-amplitude state switches to the low-amplitude state. The related bifurcation points are shown in the top part of Fig. 4a . This saturation of the frequency is due to the efficient energy transfer between mode 1 and mode 2 when the frequency ratio is an integer 23 . Driving the system even harder, the response exhibits a plateau behaviour as shown in Fig. 4b . Figure 4: Driven response and energy decay traces. a , Measured and simulated driven response of the fundamental mode for intermediate drive strengths (bottom), and the corresponding measured and calculated bifurcations where the high-amplitude states switch to the low-amplitude states (top). The drive is swept from high frequency to low frequency.
The observed saturation of the switching frequency at high drive occurs when ω 2 = 3 ω 1 , providing strong evidence for mode interaction. b , Measured and simulated driven response of the fundamental mode for strong driving. In simulations, the plateau is associated with chaotic dynamics of coupled nonlinear modes. The blue and pink arrows indicate the location of the experimentally detectable bifurcations in this regime with a downward frequency sweep. c , Reduced bifurcation diagram showing the experimentally accessible bifurcations (measured and calculated) for a downward frequency sweep. The data points are obtained from driven response measurements at different drive strengths; see a , b . A more complete diagram can be found in Supplementary Section 8.1 . The pink region corresponds to driven responses that exhibit a plateau behaviour. d , Measured and simulated energy decay traces. Using the same parameters as in the fits in a – c , the model satisfactorily reproduces the measured energy decay trace, including the crossover in the decay rate. The grey points correspond to measurements and the violet and red lines to simulations. e , Simulated individual energy decay traces revealing coherent oscillations. Different traces correspond to different initial states, which are all prepared with the same drive strength. The calculated trace in d is obtained from averaging over many ring-downs. f , Vibration amplitude of the plateau feature in b as a function of gate voltage V g . The plateau is a consequence of mode hybridization ( Supplementary Section 8.1 ). The amplitude is extracted from the measured response at the frequency just below the bifurcation indicated by the blue arrow in b . Full size image We now show that the change in decay rate is related to nonlinear mode coupling. To demonstrate this, we use the driven response measurements shown above to determine the parameters of a minimal model of coupled nonlinear resonators, which allows us to describe the measured energy decay with good accuracy, as discussed next. Both features, frequency saturation and plateau, are well reproduced by two coupled nonlinear modes with α 1 and α 2 the Duffing constants, and a the force constant. The interaction Hamiltonian takes the form H int = g q 1 3 q 2 (up to a numerical prefactor), with m ≈ 60 fg the effective mass of mode 1 and g the coupling constant ( Supplementary Section 4 ). The displacement q 2 of mode 2 is normalized to have the same effective mass. In these equations we only include terms necessary to capture the essential physics. We thus omit higher-order restoring force terms, off-resonant interaction terms and purely dispersive coupling terms. Although the omitted terms may give a better fit, we seek for clarity to minimize the number of free parameters. As can be seen from Fig. 4a–c , the model allows us to describe the measurements of the response and the reduced bifurcation diagram with good agreement ( Supplementary Section 8 ). Remarkably, the parameters used to fit these driven measurements reproduce quantitatively the measured energy decay trace ( Fig. 4d ). This shows that nonlinear mode coupling is at the origin of the observed crossover in the decay rate. Note that single energy decay traces are expected to feature oscillations because of mode coupling ( Fig. 4e ). But our simulations show that these oscillations disappear when averaging multiple traces, in agreement with our experiments.
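The minimal model just described can be integrated directly to reproduce the qualitative crossover. The sketch below uses our own illustrative dimensionless parameters — not the fitted device values (those are in Supplementary Section 8.2) — and the exact crossover point depends on them; averaging many such runs with randomized initial conditions washes out the coherent oscillations of Fig. 4e:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (dimensionless) parameters chosen to expose the mechanism.
w1, w2 = 1.0, 2.98      # mode frequencies near the 3:1 internal resonance
g1, g2 = 1e-3, 2e-2     # energy decay rates of modes 1 and 2 into their own baths
a1 = -4e-3              # softening Duffing constant of mode 1 (alpha2 omitted)
g = 5e-3                # strength of the q1^3*q2 coupling

def eom(t, y):
    q1, p1, q2, p2 = y
    # Forces -dH/dq from H_int ~ g*q1^3*q2, plus Duffing and linear damping
    return [p1,
            -w1**2*q1 - a1*q1**3 - 3*g*q1**2*q2 - g1*p1,
            p2,
            -w2**2*q2 - g*q1**3 - g2*p2]

# Free decay from a strongly driven initial state of mode 1, mode 2 near rest.
sol = solve_ivp(eom, (0.0, 5000.0), [5.0, 0.0, 0.0, 0.0], max_step=0.1)

# Envelope of the mode-1 energy: it decays at roughly (g1+g2)/2 while the modes
# are hybridized, crossing over to the slower intrinsic rate g1 once decoupled.
E1 = 0.5*sol.y[1]**2 + 0.5*w1**2*sol.y[0]**2
```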
In the experiments, for the strong drives used to excite the resonator, mechanical nonlinearities cause the initial state to differ from one decay measurement to the next, so that the oscillations in the averaged decay trace are washed out. We find that the averaged decay trace is approximately described by two exponential decays. The decay rate crosses over from about ( γ 1 + γ 2 )/2 to γ 1 . The change in the decay rate is related to the crossover from mode hybridization to weak mode coupling as the vibration amplitude freely decays. Indeed, the measured crossover amplitude (of the order of 100 pm) is consistent with the value obtained from a crude estimate based on the effective coupling rate g eff . Here, g eff , which grows with the amplitude A 1 of mode 1, is the rate of energy exchange between the two modes, and δ eff is the frequency offset between the two modes including Duffing shifts ( Supplementary Section 8.4 ). During the ringdown, the crossover between different energy decay rates is controlled primarily by the variation of g eff , and only weakly by the shift of δ eff . In the opposite case, the modes would be either decoupled or hybridized at the beginning of the ringdown depending on the initial value of δ eff . This is not observed in our experiments, since the decay rate is larger at the beginning of the ringdown irrespective of the initial δ eff . This indicates that the crossover is related to the variation of g eff . The nonlinear behaviour in our energy decay measurements cannot be related to the Velcro effect 35 , where the total energy of the resonator, which is typically 0.1–1 eV at the lowest-amplitude crossover, is used to break surface bonds between the graphene flake and the supporting electrodes. Indeed, the release of the total energy from the mechanical resonator would then result in a dramatic reduction of the vibrational amplitude, but this is not observed in our experiments. Furthermore, our experimental finding is not related to the coupling of the resonator to two-level defect states 8 , which become saturated at high vibration amplitudes. Indeed, such a coupling would lead to a reduction of the decay rate as the vibrational amplitude increases, which is the opposite of the measured behaviour. The coupling constant g obtained from our simulations is in reasonable agreement with the value expected from continuum elasticity. In the case of a circular membrane clamped at the circumference, the strength of the fourth-order coupling expected from continuum elasticity scales as N E 2d / R 2 (as dimensional analysis requires, since g has units of J m −4 ), with a prefactor that depends on the shape of the modes 36 . Using N = 35 as the layer number of the graphene flake in device II, E 2d = 340 N m −1 the two-dimensional Young modulus of graphene, R = 1.6 µm the radius of the resonator, and the value of g (in J m −4 ) obtained from our simulations, we get a reasonable prefactor of 14 (ref. 36 ). The nonlinear constant α 1 deduced from our simulations is consistent with the value expected for a graphene flake that is slightly bent by the static capacitive force 29 . All the parameters of the coupled system in Fig. 4 can be found in Supplementary Section 8.2 . We note that these parameters change when sweeping the gate voltage. The mode frequency ratio does not need to be strictly an integer to observe the crossover in the decay rate. The crossover is indeed measured when detuning V g so that ω 2 ≠ 3 ω 1 ( Supplementary Section 7 ). However, the hybridization occurs at larger amplitudes ( Fig. 4f ).
Energy decay traces with crossover can be measured in different Vg ranges, indicating that the frequency ratio of the mode coupling is close to an integer that can be different from three. Clear signatures of internal resonance in energy-decay and spectral measurements are observed with the gate voltage set at −3.88, −3.78 and −3.3 V. In summary, we reveal how nonlinear mode coupling can drastically change the dynamics of energy dissipation in nanomechanical resonators, that is, by enhancing dissipation at high vibration amplitude via mode hybridization. This effect is not a curiosity limited to a restricted parameter space, as the effect of hybridization is observed here for vibration amplitudes as low as ∼100 pm, while graphene and nanotube resonators are driven to the 1–10 nm amplitude range in many experiments. It will be intriguing to investigate hybridization driven by thermal noise. For this, nanotube resonators are appealing 22, 37, since the variance of thermal vibrations can be well above (1 nm)² at room temperature. It will be exciting to hybridize three or more modes, since this collective motion will feature rich physics with the interplay of hybridization, Duffing nonlinearity and noise. Hybridization of multiple modes could be achieved with membrane resonators of various shapes whose modes are tunable using a combination of split gates 6. In addition, hybridization opens up new possibilities to control dissipation by electrostatic means and to manipulate vibrational states coherently 4, 5, 6, 7. When preparing the manuscript, we learned about another work on the role of nonlinear mode coupling in the dissipation of mechanical resonators 38. Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. | Energy dissipation is a key ingredient in understanding many physical phenomena in thermodynamics, photonics, chemical reactions, nuclear fission, photon emissions, or even electronic circuits, among others. In a vibrating system, the energy dissipation is quantified by the quality factor. If the quality factor of the resonator is high, the mechanical energy will dissipate at a very low rate, and therefore the resonator will be extremely accurate at measuring or sensing objects, thus enabling these systems to become very sensitive mass and force sensors, as well as exciting quantum systems. Take, for example, a guitar string and make it vibrate. The vibration created in the string resonates in the body of the guitar. Because the vibrations of the body are strongly coupled to the surrounding air, the energy of the string vibration will dissipate more efficiently into the environment, increasing the volume of the sound. This decay is well known to be linear: its rate does not depend on the vibrational amplitude. Now, take the guitar string and shrink it down to nanometre dimensions to obtain a nanomechanical resonator. In these nanoscale systems, energy dissipation has been observed to depend on the amplitude of the vibration, a nonlinear phenomenon, and so far no proposed theory has been proven to correctly describe this dissipation process.
In a recent study published in Nature Nanotechnology, ICFO researchers Johannes Güttinger, Adrien Noury, Peter Weber, Camille Lagoin and Joel Moser, led by ICFO Prof. Adrian Bachtold, in collaboration with researchers from Chalmers University of Technology and ETH Zurich, have found an explanation of the nonlinear dissipation process using a nanomechanical resonator based on multilayer graphene. In their work, the team of researchers used a graphene-based nanomechanical resonator, well suited for observing nonlinear effects in energy decay processes, and measured it with a superconducting microwave cavity. Such a system is capable of detecting the mechanical vibrations in a very short period of time, and it is sensitive enough to detect tiny displacements over a very broad range of vibrational amplitudes. The team took the system, forced it out of equilibrium using a driving force, and subsequently switched the force off to measure the vibrational amplitude as the energy of the system decayed. They carried out over 1000 measurements for every energy decay trace and were able to observe that as the energy of a vibrational mode decays, the rate of decay reaches a point where it changes abruptly to a lower value. The faster energy decay at high-amplitude vibrations can be explained by a model in which the measured vibration mode "hybridizes" with another mode of the system and the two decay in unison. This is equivalent to the coupling of the guitar string to the body, although the coupling is nonlinear in the case of the graphene nanoresonator. As the vibrational amplitude decreases, the rate suddenly changes and the modes become decoupled, resulting in comparatively low decay rates and thus in very high quality factors exceeding 1 million. This abrupt change in the decay has never been predicted or measured until now. Therefore, the results achieved in this study have shown that nonlinear effects in graphene nanomechanical resonators reveal a hybridization effect at high energies that, if controlled, could open up new possibilities to manipulate vibrational states, engineer hybrid states with mechanical modes at completely different frequencies, and study the collective motion of highly tunable systems. | 10.1038/nnano.2017.86 |
Medicine | Researchers discover genetic causes of higher melanoma risk in men | Barbara Hernando et al. Sex-specific genetic effects associated with pigmentation, sensitivity to sunlight, and melanoma in a population of Spanish origin, Biology of Sex Differences (2016). DOI: 10.1186/s13293-016-0070-1 | http://dx.doi.org/10.1186/s13293-016-0070-1 | https://medicalxpress.com/news/2016-07-genetic-higher-melanoma-men.html | Abstract Background Human pigmentation is a polygenic quantitative trait with high heritability. In addition to genetic factors, it has been shown that pigmentation can be modulated by oestrogens and androgens via up- or down-regulation of melanin synthesis. Our aim was to identify possible sex differences in pigmentation phenotype as well as in melanoma association in a melanoma case-control population of Spanish origin. Methods Five hundred and ninety-nine females (316 melanoma cases and 283 controls) and 458 males (234 melanoma cases and 224 controls) were analysed. We genotyped 363 polymorphisms (single nucleotide polymorphisms (SNPs)) from 65 pigmentation gene regions. Results When samples were stratified by sex, we observed more SNPs associated with dark pigmentation and good sun tolerance in females than in males (107 versus 75; P = 2.32 × 10 −6 ), who were instead associated with light pigmentation and poor sun tolerance. Furthermore, six SNPs in TYR , SILV/CDK2 , GPR143 , and F2RL1 showed strong differences in melanoma risk by sex ( P < 0.01). Conclusions We demonstrate that these genetic variants are important for pigmentation as well as for melanoma risk, and also provide suggestive evidence for potential differences in genetic effects by sex. Background Human pigmentation traits are some of the most visible and differentiable human characteristics. Pigmentation in human tissue is attributable to the number, size and cellular distribution of melanosomes produced, and the type of melanin synthesised (the black-brown coloured eumelanin or the red-yellow coloured phaeomelanin), while the number of melanocytes is usually unchanged [ 1 ]. The type of melanin synthesised is influenced by sun exposure and is genetically controlled [ 2 ]. Ultraviolet (UV) exposure plays a key role in the evolutionary selective pressure on human pigmentation. Geographic distribution of human skin pigmentation reflects an adaptation to latitude-dependent levels of UV radiation [ 3 , 4 ]. The linear relationship between worldwide skin pigmentation variation, latitude, and UV radiation levels is thought to result from the physiological role of melanin type in UV-mediated vitamin D synthesis, UV-induced photolysis of folate, and in the protection from exposure to UV, which can cause sunburn and skin cancer [ 5 ]. However, the physiological role for eye and hair colour variations still remains unknown. Variation in genes implicated in human pigmentation has been associated with phenotypic characteristics such as skin colour, hair colour, eye colour, freckling, and sensitivity to sunlight [ 6 ], and also with the risk of various types of skin cancer [ 7 – 15 ]. The proteins encoded by these genes have effects at various stages of the pigmentation pathway, ranging from melanogenesis, the stabilisation and transport of enzymes in the melanin production pathway, the production and maintenance of melanosomes and the melanosomal environment, and the switch between the production of eumelanin and phaeomelanin. 
Other pigmentation-related proteins code for intrinsic factors that help in the regulation of pigmentation, such as factors produced by fibroblasts in the dermis that affect overlying melanocytes and keratinocytes, and endocrine factors from the blood supply, as well as neural factors and inflammation-related factors [ 6 , 16 , 17 ]. Melanin synthesis is also modulated, in part, by oestrogens and androgens [ 18 ]. Physiological hyperpigmentation in various forms (tanning, dark spots, chloasma, linea nigra, and/or melasma) is commonly seen in pregnant females due to an increase in the levels of pregnancy-related hormones [ 18 ]. The increase of pregnancy-related hormones—oestrogen, progesterone, and melanocyte-stimulating hormone (α-MSH)—during gestation induces the activation and expression of genes involved in melanin synthesis in melanocytes [ 19 ], while it has also been shown that androgens inhibit tyrosinase activity [ 20 ]. In addition to sex-endocrine factors, the use of oestrogen-containing oral contraceptives, certain cosmetics, and oestrogen-progesterone therapies has also been associated with hyperpigmentation [ 21 ]. Biological and behavioural gender differences likely contribute to the sexual disparity in skin aging, pigmentation, and melanoma incidence and outcome [ 22 , 23 ]. Recent studies point to Caucasian females having slightly darker eye colour [ 24 , 25 ] and skin colour [ 26 ] than Caucasian males. Regarding melanoma, females show lower melanoma predisposition and incidence, lower risk of metastases, and longer melanoma-specific survival rates than males [ 27 , 28 ]. Anatomic location of melanoma indeed tends to be different between sexes, being most commonly on the lower leg, hip, and thigh in females and on the back, abdomen, and chest in males [ 27 ]. In order to reveal possible sex-related differences in pigmentation phenotype as well as in melanoma association, we investigated the influence of 363 polymorphisms from 65 gene regions—previously associated with pigmentation traits, congenital pigmentation genetic syndromes, and/or skin cancer—in a melanoma case-control population of Spanish origin. Methods Study subjects and data collection In this study, a total of 599 females (316 melanoma cases and 283 cancer-free controls) and 458 males (234 melanoma cases and 224 cancer-free controls) were recruited at several Spanish hospitals. We carefully selected all cases and controls included in the current study to account for confounding variables. All individuals were Caucasians of Spanish origin where, according to a previous work by Laayouni and colleagues, there is no evidence of genetic heterogeneity within different Spanish geographical regions [ 29 ]. Controls were frequency-matched to the cases by age and place of birth. A standardised questionnaire was used to collect information on sex, age, pigmentation characteristics (eye colour, hair colour, skin colour, number of naevi, and presence of solar lentigines), history of childhood sunburns, Fitzpatrick's skin type classification, and personal and family history of cancer, to classify individuals as previously described [ 30 ]. Forty melanoma cases from our previous work were excluded from the current analysis due to the absence of sex information. All individuals gave written informed consent and the study was approved by the Ethics Committee of the Gregorio Marañon Hospital (Madrid, Spain) and the Biomedical Research Institute - INCLIVA (Valencia, Spain).
Gene, SNP selection, and genotyping Gene and single nucleotide polymorphism (SNP) selection was performed as previously described [ 30 ]. Sixty-five gene regions were included in this study. They covered a broad range of biological processes, mostly related to pigmentation. We genotyped a total of 384 tag-SNPs from the selected genes, ranging from the hypothetical promoter area (approximately 10 kb upstream) to approximately 5 kb downstream of the gene. SNP codes, locations, and frequencies were obtained from NCBI, HapMap, and Illumina databases. A minor allele frequency (MAF) threshold of 0.05 in the HapMap CEU population and an 'Illumina score' not lower than 0.6 (as recommended by the manufacturer) were established to ensure a high genotyping success rate for the SNPs selected. SNP genotyping was done using the Golden Gate Assay according to the manufacturer's protocol (Illumina, San Diego, CA, USA), as previously described [ 30 ]. Statistical analysis Quality control processes and allelic and genotypic association tests were performed using the SNPator software. Additional statistical analyses and plots were conducted using the R statistical framework. All genetic analyses were performed estimating the effect of the minor allele in the Spanish population. For all polymorphisms studied, Fisher's exact test was used both to test for deviations from Hardy-Weinberg equilibrium (HWE) between sexes and to compare allele counts between female and male individuals. Bonferroni correction was applied and P values higher than 1.37 × 10 −4 were considered in HWE. Associations between the genotyped SNPs and various pigmentation and sun sensitivity traits were assessed via logistic regression, coded additively for each copy of the minor allele. This was done for males and females separately, with eye colour (blue/green versus brown/black), hair colour (brown/black versus blond/red), skin colour (fair versus dark), number of naevi (≥50 versus <50), presence of lentigines (yes versus no), and childhood sunburn (yes versus no) as the outcome variables. Genotype-related Odds Ratios (ORs), their corresponding 95 % confidence intervals (CIs) and associated P values were estimated. Results of the association analysis were represented using volcano plots, mapping significance (−log10 P value) versus fold-change (log2 OR) for the comparison between individuals for eye colour, hair colour, skin colour, presence of lentigines, childhood sunburns and naevi number separately. P values were two sided and those lower than 0.01 were considered statistically significant (since six pigmentation traits were studied separately, the statistical significance threshold was P value < 0.05/6 ≈ 0.01). In order to have an overview of all the significant estimates obtained in the sex-specific logistic regression analyses, we evaluated the differences in the number of polymorphisms associated both with protective and risk phenotypes between sex groups ( P values < 0.05), using 2 × 2 contingency tables and performing a Fisher's exact test. Logistic regression was performed to re-assess associations between genotypes and melanoma risk assessed previously [ 30 ], but separating individuals by sex in order to estimate sex-specific ORs, 95 % CIs and P values. As mentioned above, the minor allele was also modelled under an additive model. Using the same criteria as in the analysis of pigmentation traits, two-sided P values lower than 0.01 were considered to constitute evidence of association.
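A minimal sketch of this per-SNP association test is shown below, assuming a logistic regression on minor-allele dosage (additive coding, 0/1/2) fitted separately in each sex. The data frame and column names are hypothetical stand-ins, not the study data.

```python
# Illustrative sketch of the per-SNP association test described above:
# logistic regression of a binary trait on minor-allele dosage (additive
# coding, 0/1/2), fitted separately in each sex, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dosage": rng.integers(0, 3, 500),       # copies of the minor allele
    "dark_eyes": rng.integers(0, 2, 500),    # outcome: brown/black vs blue/green
    "sex": rng.choice(["F", "M"], 500),
})

for sex, sub in df.groupby("sex"):
    X = sm.add_constant(sub["dosage"].astype(float))
    fit = sm.Logit(sub["dark_eyes"], X).fit(disp=0)
    beta, se = fit.params["dosage"], fit.bse["dosage"]
    or_, lo, hi = np.exp(beta), np.exp(beta - 1.96*se), np.exp(beta + 1.96*se)
    print(f"{sex}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), "
          f"P = {fit.pvalues['dosage']:.3g}")
```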
Finally, we performed a sex-differentiated regression estimate test for each SNP for all phenotypic traits. We tested for equality of sex-specific allelic effects with the aim of obtaining sex-differentiated P values [ 31 ], and a statistical significance threshold of sex-differentiated P value < 0.05 was used. Briefly, for each sex-specific association test, sex-specific beta coefficients (log ORs) and the corresponding standard errors were evaluated using a Chi-square test with one degree of freedom. Then, a Chi-square test with two degrees of freedom was performed for each SNP, in which the previously calculated female-specific and male-specific Chi-square statistics were added up. Finally, a test of heterogeneity of allelic effects between males and females using a Chi-square test with one degree of freedom was conducted. A significant sex-specific and sex-differentiated P value is required to verify a potential significance in allelic effect by sex, following the same criteria used by Kocarnik and colleagues [ 32 ]. Manhattan plots were used to display the strength of significant differences between male-only and female-only associated effects (−log10 sex-differentiated P value) for each trait tested. Results and discussion Our sample set included 599 females and 458 males of Spanish ancestry. Median age was 45 years (range 18–92) for females and 47 years (range 18–92) for males. Regarding control individuals, mean age was 42 years (range 18–91) for females and 45 years (range 18–90) for males. Melanoma cases presented a median age of 52 years (range 18–92) for females and 53 years (range 18–92) for males. Since age differences were not observed between sample subsets ( P value > 0.05), association analyses were not adjusted for age. From an initial list of 384 tag-SNPs selected, 21 SNPs (5.4 %) were discarded due to failed genotyping (no PCR amplification, insufficient intensity for cluster separation or poor cluster definition). All 363 remaining SNPs were in HWE after Bonferroni correction (Additional file 1 : Table S1). Minor allele frequencies estimated for each SNP were almost identical for females and males, with a remarkable linear correlation ( R 2 ) of 0.982 (Additional file 1 : Figure S1). Association with phenotypic characteristics by sex In a previous study published by our group, the association of some genes with phenotypic characteristics was reported [ 30 ]. However, analyses were performed without taking into account sex data. In the current study, samples were additionally stratified by sex to evaluate differences in pigmentation and sun response between males and females. Thirty-four SNPs showed association with at least one pigmentation trait, and 42 SNPs were associated with at least one sun response trait studied ( P < 0.01) (Additional file 1 : Tables S2 and S3). Each of these polymorphisms displayed a moderate effect on pigmentation in our Spanish population dataset. Our results showed apparent differences in the direction of the association with the pigmentation characteristics, with variants showing ORs below 1.0 correlated with dark pigmentation and/or good tolerance to sunlight, and variants with ORs above 1.0 associated with light pigmentation and/or poor tolerance to sunlight. Variants in these genes most likely play important roles in the differences in pigmentation and tanning response among individuals of the Spanish population, and subsequently also in skin cancer risk [ 33 ].
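The sex-differentiated procedure described in the Methods above can be sketched as follows for a single SNP; the beta coefficients and standard errors are made-up inputs standing in for the per-sex logistic regression output.

```python
# Sketch of the sex-differentiated test described above for a single SNP.
# Inputs are the sex-specific log-ORs (betas) and standard errors from the
# logistic regressions; the numbers here are made up for illustration.
from scipy.stats import chi2

beta_f, se_f = 0.35, 0.12     # female-specific estimate (hypothetical)
beta_m, se_m = -0.05, 0.13    # male-specific estimate (hypothetical)

chi_f = (beta_f / se_f)**2    # per-sex association tests, 1 d.f. each
chi_m = (beta_m / se_m)**2

# joint 2-d.f. test: the two sex-specific chi-square statistics added up
p_joint = chi2.sf(chi_f + chi_m, df=2)

# heterogeneity of allelic effects between sexes, 1 d.f.
chi_het = (beta_f - beta_m)**2 / (se_f**2 + se_m**2)
p_het = chi2.sf(chi_het, df=1)

print(f"joint P = {p_joint:.3g}, sex-heterogeneity P = {p_het:.3g}")
```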
Representations of −log10 P values versus log2 ORs comparing 599 female individuals to 458 males for eye colour, hair colour, skin colour, presence of lentigines, childhood sunburns and naevi number are shown in Fig. 1 . Detailed information on rs numbers, genes, chromosome locations, minor alleles, ORs, 95 % CIs, and P values for pigmentation and sun response characteristics is summarised in Additional file 1 : Tables S2 and S3. Fig. 1 Volcano plots showing significance (−log10 P value) versus fold change (log2 OR) for pigmentation and sun sensitivity traits separated by sex. Red dots indicate SNPs with a significant fold change ( P values < 0.01) Sex-specific analyses in this study showed significant differences in the pattern of association with pigmentation and tanning response traits between male and female individuals. Out of all SNPs with significant sex-specific associations, we found significantly more SNPs associated with dark pigmentation or sun protection in female than in male individuals (107 versus 75; P = 2.32 × 10 −6 ), the latter being more commonly associated with light pigmentation and poor sun tolerance – traits highly associated with melanoma predisposition [ 9 ] (Fig. 2 ). The percentage of SNPs associated with both dark eye and dark hair colour in females was higher than in males (72.72 versus 40.74 %, P = 0.025; 78.57 versus 48.28 %, P = 0.018, respectively). This association pattern was also observed for skin colour, but without significance (66.67 versus 41.94 %, P = 0.068). In addition, female individuals presented more SNPs associated with both ≤50 naevi and absence of childhood sunburns than males (65.38 versus 36.67 %, P = 0.032; 61.11 versus 36.11 %, P = 0.034, respectively). On the other hand, a similar percentage of SNPs associated with absence of lentigines was observed in both female and male individuals (56.00 versus 43.33 %, P = 0.35). A representation of the distribution/count of polymorphisms associated with phenotype groups for each trait studied, separated by sex, is displayed in Fig. 2 . Fig. 2 Distribution of the SNPs associated with pigmentation and sun sensitivity traits separated by sex. The percentage of each phenotype (protection or risk) is calculated taking into account the total number of significant SNPs associated in males and females ( P values < 0.05). Percentages are represented by bars of the corresponding colour. The number on the top of each bar represents the count of associated SNPs in each category It is important to note that these associations do not reflect differences in the allelic frequencies of these pigmentation genes between males and females. These results basically indicate that, for a given genotype, the allelic effects on the phenotypic traits are significantly different between the sexes. Additionally, sex-differentiated analysis was performed in order to test for equality between male-specific and female-specific regression estimates. Sex-differentiated P values are represented in Fig. 3 . A significant sex-specific and sex-differentiated SNP association is required to establish a potential difference in effect for each polymorphism by sex. Three SNPs showed a strong potential sex-difference in eye colour effect, 10 SNPs in skin colour effect, 3 SNPs in hair colour effect, 4 SNPs in sunburns effect, 5 SNPs in lentigines effect, and 5 SNPs in naevi effect ( P < 0.01).
Among these SNPs, PLDN SNP rs12909221, GPR143 SNP rs2521667, POMC SNP rs6734859, AP3M2 SNP rs7009632, BCL2 SNP rs1462129, and TYRP1 SNP rs10809828 were associated with light pigmentation and poor sun tolerance in males. Only one polymorphism, rs2521578 on the GPR143 gene, showed a high association with poor sun tolerance in females (Additional file 1 : Tables S2 and S3). Fig. 3 Manhattan plots displaying the strength of significant differences between male-only and female-only associated effects (−log10 sex-differentiated P value) for a ) pigmentation and b ) sun sensitivity traits. Darker dots of the corresponding colour indicate SNPs with a significant fold change (sex-differentiated P values < 0.01) Promising differences in allelic effect by sex were also observed for TYR SNP rs1042602. Females and males showed statistically significant effects in opposite directions for this SNP, and this difference in effect by sex is unlikely to be explained by chance. Indeed, a sex-differentiated P value of 1.30 × 10 −3 was estimated for rs1042602, as shown in Fig. 3 . Polymorphisms showing potential differences in allelic effect by sex are located on genes that have functions related to melanocyte development, melanosome formation, maturation, and transportation, as well as to skin cancer [ 6 , 9 , 10 , 15 , 16 , 30 , 34 – 37 ]. Interestingly, we also found associations between pigmentation phenotypes and several genes— CDKN2A , GNA11 , NRAS , and WNT3A —involved in the up-regulation of melanogenic genes, the activation, survival, and proliferation of the melanocyte, and/or the processes leading to carcinogenesis. Associations with melanoma risk by sex In a previous study published by our group, the association of 65 gene regions with melanoma risk was reported [ 30 ]. However, at that time, no sex stratification was applied to perform the association analysis. In this work, we have carried out an analysis of association between genotypes and melanoma risk for female and male individuals separately. Sixteen SNPs located in 10 genes showed consistent male- or female-specific association with melanoma risk. Eleven of those SNPs showed potential differences in effect by sex, since P values obtained in the sex-differentiated regression estimate test were lower than 0.05. Detailed information on rs numbers, genes, chromosome locations, minor alleles, ORs, 95 % CIs, and P values for melanoma risk are summarised in Table 1 . Table 1 SNPs highly associated with melanoma risk in sex-stratified analysis Among these 11 SNPs, we found six SNPs located in 4 genes showing a strong difference in melanoma risk effect when samples were stratified by sex – sex-specific and sex-differentiated P values lower than 0.01. F2RL1 SNP rs2242991, GPR143 SNPs rs2521667 and rs2732872, and TYR SNP rs5021654 increased melanoma predisposition in males as opposed to females. Additionally, a strong melanoma protective effect was displayed by rs2069398 on CDK2/SILV in females only. These SNPs were also associated with pigmentation and sun tolerance in opposite directions in males (ORs > 1, melanoma risk traits) versus females (ORs < 1, melanoma protective traits), supporting lower melanoma predisposition and incidence in females than in males. Conversely, rs1042602 on the TYR gene showed a melanoma protective effect in males compared to females.
Therefore, these results are in accordance with the association between rs1042602 and dark pigmentation and good sun tolerance in males but not in females (Additional file 1 : Tables S2 and S3). These genes with potential differences in melanoma risk effect by sex are graphically represented in Additional file 1 : Figure S2. The F2RL1/PAR2 gene, expressed in keratinocytes but not in melanocytes, encodes a G-protein-coupled receptor involved in melanosome transfer [ 38 ], and changes in its expression pattern are correlated with skin cancer progression [ 39 ]. The GPR143 gene, located on the X chromosome, encodes a G-protein-coupled receptor for tyrosine, L-DOPA, and dopamine localised on melanosomal membranes, and plays an important role in melanosome biogenesis, organisation and transport. Ocular albinism type 1 (OA1; MIM300500) is caused by mutations in GPR143 and is transmitted as an X-linked trait. The TYR gene codes for another melanosomal membrane-bound enzyme involved in the rate-limiting steps of melanogenesis. Mutations in the TYR gene are associated with light pigmentation, freckling and sun sensitivity—well-recognised melanoma risk factors—as well as with melanoma [ 30 , 40 ]. The CDK2 gene, which overlaps with the melanocyte-specific gene SILV , is also important for melanoma growth and proliferation [ 41 ]. SILV melanosomal matrix protein represents a melanoma-specific antigen recognised by tumour infiltrating cytotoxic T lymphocytes [ 42 ]. A recent study is worth mentioning in this respect. According to Kocarnik and colleagues (2014), SLC45A2 SNP rs16891982, the non-synonymous mutation F374L located in exon 5, influenced melanoma risk differently by sex, with higher melanoma risk for males than females, probably through alterations in melanogenesis and pigmentation [ 32 ]. In our study, two SNPs on the SLC45A2 gene (rs35414 and rs35415) displayed associations with melanoma in both female-only and male-only analysis, although they do not present significant sex-differentiated P values. It is important to note that the minor allele of these two SNPs showed a protective effect for melanoma and that these protective minor alleles, which cause darker pigmentation, are actually quite common in the Spanish population, as opposed to Northern European populations. Subsequently, this association was stronger in males (rs35414: OR = 0.66, 95 % CI 0.51–0.85, P = 0.0018; and rs35415: OR = 0.70, 95 % CI 0.54–0.91, P = 0.008) than in females (rs35414: OR = 0.77, 95 % CI 0.61–0.97, P = 0.026; and rs35415: OR = 0.78, 95 % CI 0.62–0.97, P = 0.029). In the male-only but not in the female-only analysis, rs35414 and rs35415 also tended to be associated with dark pigmentation and the absence of childhood sunburns ( P < 0.05). It is important to state here that in the work by Kocarnik and colleagues, it was the major allele—in the Caucasian population—of the SLC45A2 SNP that was in fact modelled as the proposed risk allele for melanoma, while in this study it is the minor allele of the two SLC45A2 SNPs that was actually used as reference to perform the analyses. Therefore, the genetic effect shown by the SLC45A2 gene in our study exhibits the opposite direction to that displayed in the work by Kocarnik and colleagues. We are aware of the limitations of the current work. Firstly, the sample size was relatively restricted after dividing the complete sample set by sex, probably resulting in limited statistical power to detect modest effects for additional SNPs.
Unfortunately, there are no previously published genome-wide studies presenting data stratified by sex, hindering chances of enlarging the sample size. Secondly, the subjective nature of the attributes considered may be another reason for misclassification. Thirdly, we presented two-sided unadjusted P values for the associations considered, and the level of statistical significance used was less stringent than the threshold required to declare unequivocally positive results. However, the results of this work—as well as other previous studies [ 24 , 25 , 32 ]—show a strong tendency towards sex-differentiated genetic effects in pigmentary traits. Therefore, we believe that the work presented here is nonetheless reporting very interesting findings. Given all these limitations, replication of our findings is essential before drawing firmer conclusions. The results of this study suggest that there are indeed sex-specific genetic effects in human pigmentation, with larger effects for darker pigmentation in females compared to males. A plausible cause might be the differentially expressed melanogenic genes in females due to higher oestrogen levels. These sex-specific genetic effects would help explain the presence of darker eye and skin pigmentation in females, as well as the well-known higher melanoma risk displayed by males. Conclusions Overall, the results of this work reveal the presence of sex-specific effects in human pigmentation that might be important not only in skin colour and sensitivity to sunlight but also in the higher incidence of melanoma described in males. These findings also show that, at times, sex-stratified analyses enrich genetic association studies with valuable information and knowledge. Abbreviations CEU: Utah residents with Northern and Western European ancestry Chr: chromosome CI: confidence interval HWE: Hardy-Weinberg equilibrium mA: minor allele MAF: minor allele frequency OR: odds ratio P : P value Sex-diff: sex-differentiated regression estimates test SNP: single nucleotide polymorphism UV: ultraviolet | A study led by researchers at Universitat Jaume I de Castellón has identified one of the genetic causes underlying the higher rate of melanoma in men. The results have been published in Biology of Sex Differences. The Genetics of Skin Cancer and Human Pigmentation (Melanogen) research group, led by lecturer Conrado Martínez-Cadenas at the Universitat Jaume I de Castellón (UJI), has studied the differences between men and women in terms of pigmentation (eye, hair and skin) and sun response, i.e. history of sunburn, and the presence of irregular moles and freckles caused by sun exposure. This study was carried out in collaboration with Dr. Gloria Ribas' research group at Incliva Biomedical Research Institute. It involved 1,057 people in total, some 52% of whom were melanoma patients from hospitals in Castellón, Valencia, Madrid and Bilbao. "The study included 384 genetic variants and six physical characteristics. The results show that, with the same genetic variability, men tend to have lighter skin pigmentation and a worse response to the effects of ultraviolet rays," Martínez-Cadenas says. Oestrogen enhances sun protection Skin cancer is determined both by environmental factors, such as sun exposure, and other genetic factors. People with light skin or eyes and blonde or red hair have a 20 to 30 times greater chance of getting skin cancer than darker-skinned people, who tan easily.
Meanwhile, several studies have shown that female hormones promote the production of melanin, the pigment that protects the skin from the sun. Indeed, "oestrogen could be the reason why women have a darker skin tone, even when the genotypes of both sexes are the same, meaning that their risk of skin cancer is lower. So much so that skin cancer is much more prevalent in men," explains Bárbara Hernando, fellow researcher at the Melanogen research group and coauthor of the study. This study on melanoma in Spain grew out of a previous study, where results "showed that men tend to have lighter eyes than women with the same genetic variability," Martínez-Cadenas adds. Additional forensic uses Research into the genetics of skin pigmentation is important for understanding human biology and evolution, as well as the biology of skin cancer. By identifying genetic factors that influence melanoma risk, we can study this kind of cancer in more detail. But beyond this, other studies carried out at the Melanogen group show that introducing the factor 'sex' in the eye colour prediction model developed for forensic purposes "significantly improves the success rate in identifying a suspect (or victim) from a biological sample found at a crime scene, for instance," Martínez-Cadenas explains. Prevention is key to the fight against melanoma The sheer number of factors involved in melanoma means that treatments to cure it have not made much progress in recent years. Prevention is therefore the most effective weapon: "the best way to prevent melanoma is to limit exposure to the sun when UV radiation is at its peak, and to use sunscreen, at least factor 30, when outdoors" (Bárbara Hernando). Self-examination and regular visits to the specialist, especially if we detect irregular moles or freckles with uneven colouration, or which are larger than six millimetres in diameter, "are essential to preventing this disease," she concludes. Three main lines of research are developed at the Melanogen research group at the Universitat Jaume I, led by Conrado Martínez-Cadenas. The first explores the genetic basis of human susceptibility to melanoma and other skin cancers. The second focuses on the molecular mechanisms and intracellular signalling pathways involved in the genesis and progression of skin cancer, both melanoma and non-melanoma (basal cell and squamous cell carcinomas). The third addresses the genetic, hormonal and environmental factors involved in the development of benign pigmented lesions: freckles, naevi, solar lentigines, melasma, etc. | 10.1186/s13293-016-0070-1
Medicine | Research community comes together to provide new 'gold standard' for genomic data analysis | Combining tumor genome simulation with crowdsourcing to benchmark somatic single-nucleotide-variant detection, DOI: 10.1038/nmeth.3407 The Challenge: https://www.synapse.org/#!Synapse:syn312572 Journal information: Nature Methods | http://dx.doi.org/10.1038/nmeth.3407 | https://medicalxpress.com/news/2015-05-gold-standard-genomic-analysis.html | Abstract The detection of somatic mutations from cancer genome sequences is key to understanding the genetic basis of disease progression, patient survival and response to therapy. Benchmarking is needed for tool assessment and improvement but is complicated by a lack of gold standards, by extensive resource requirements and by difficulties in sharing personal genomic information. To resolve these issues, we launched the ICGC-TCGA DREAM Somatic Mutation Calling Challenge, a crowdsourced benchmark of somatic mutation detection algorithms. Here we report the BAMSurgeon tool for simulating cancer genomes and the results of 248 analyses of three in silico tumors created with it. Different algorithms exhibit characteristic error profiles, and, intriguingly, false positives show a trinucleotide profile very similar to one found in human tumors. Although the three simulated tumors differ in sequence contamination (deviation from normal cell sequence) and in subclonality, an ensemble of pipelines outperforms the best individual pipeline in all cases. BAMSurgeon is available at . Main Declining costs of high-throughput sequencing are transforming our understanding of cancer 1 , 2 , 3 and facilitating delivery of targeted treatment regimens 4 , 5 , 6 . Although new methods for detecting cancer variants are rapidly emerging, their outputs are highly divergent. For example, four major genome centers predicted single-nucleotide variants (SNVs) for The Cancer Genome Atlas (TCGA) lung cancer samples, but only 31.0% (1,667/5,380) of SNVs were identified by all four 7 . Calling somatic variants is a harder problem than calling germline variants 8 because of variability in the number of somatic mutations, extent of tumor subclonality and effects of copy-number aberrations. Benchmarking somatic variant detection algorithms has been challenging for several reasons. First, benchmarking is resource intensive; it can take weeks to install and hundreds of CPU-hours to execute an algorithm. Second, evolving technologies and software make it difficult to keep a benchmark up to date. For example, the widely used Genome Analysis Toolkit was updated five times in 2013. Third, establishing gold standards is challenging. Validation data may be obtained on independent technology or from higher-depth sequencing, but routines used to estimate 'ground truth' may exhibit sources of error similar to those of the algorithms being assessed (for example, alignment artifacts). Privacy controls associated with personal health information hinder data sharing. Further, most research has focused on coding aberrations, restricting validation to <2% of the genome. Fourth, sequencing error profiles can vary between and within sequencing centers 9 . Finally, most variant-calling algorithms are highly parameterized. Benchmarkers may not have equal and high proficiency in optimizing each tool. 
To identify the most accurate methods for calling somatic mutations in cancer genomes, we launched the International Cancer Genome Consortium (ICGC)-TCGA Dialogue for Reverse Engineering Assessments and Methods (DREAM) Somatic Mutation Calling Challenge ("the SMC-DNA Challenge") 10 . The challenge structure allowed us to perform an unbiased evaluation of different approaches and distribute the process of running and tuning algorithms by crowdsourcing. To create tight feedback loops between prediction and evaluation, we generated three subchallenges, each based on a different simulated tumor-normal pair with a completely known mutation profile and termed IS1, IS2 and IS3 ( Supplementary Note 1 and Supplementary Fig. 1 ). To produce these large-scale benchmarks, we first developed BAMSurgeon, a tool for accurate tumor genome simulation 11 , 12 , 13 , 14 . Our analyses of error profiles revealed characteristics associated with accuracy that could be exploited in algorithm development. Strikingly, many algorithms, including top performers, exhibit a characteristic false positive pattern, possibly owing to introduction of deamination artifacts during library preparation. We also found that an ensemble of several methods outperforms any single tool, suggesting a strategy for future method development. Results Generating synthetic tumors with BAMSurgeon Defining a gold standard for somatic mutation detection is fraught with challenges: no tumor genome has been completely characterized (i.e., with all real somatic mutations known); thus, estimates of precision and recall are subject to the biases of site-by-site validation. False negatives are particularly difficult to study without a ground truth of known mutations. Typically, validation involves targeted capture followed by sequencing, sometimes on the same platform. To address the lack of fully characterized tumor genomes, simulation approaches are often used. Existing approaches to create synthetically mutated genomes simulate reads and their error profiles either de novo on the basis of a reference genome 15 or through admixture of polymorphic (for example, dbSNP) sites between existing BAM sequence alignment files 16 . In the first approach, simulated reads can only approximate sequencing error profiles, which vary between and within sequencing centers, and it is challenging to add mutations at multiple user-specified variant allele frequencies (VAFs) to simulate subclones. In the second, platform-specific error profiles are accurate, but the repertoire of spiked-in mutations is limited to examples detected previously, and thus already known to be discoverable. An overview of these approaches is in Supplementary Note 2 . BAMSurgeon represents a third approach: directly adding synthetic mutations to existing reads ( Fig. 1a ). BAMSurgeon can add mutations to any alignment stored in BAM format, including RNA-seq and exome data. It can generate mutations at any allelic fraction, allowing simulation of multiple subclones or sample contamination; can avoid making mutations incongruent with existing haplotypes; and supports copy-number variation–aware addition of mutations if copy-number information is available. In addition, BAMSurgeon supports an increasing number of alignment methods, allowing testing of aligner-caller combinations on the same mutations. Figure 1: BAMSurgeon simulates tumor genome sequences. ( a ) Overview of SNV spike-in. (1) A list of positions is selected in a BAM alignment.
(2) The desired base change is made at a user-specified variant allele fraction (VAF) in reads overlapping the chosen positions. (3) Altered reads are remapped to the reference genome. (4) Realigned reads replace corresponding unmodified reads in the original BAM. ( b ) Overview of workflow for creating synthetic tumor-normal pairs. Starting with a high-depth mate-pair BAM alignment, SNVs and structural variants (SVs) are spiked in to yield a 'burn-in' BAM. Paired reads from this BAM are randomly partitioned into a normal BAM and a pre-tumor BAM that receives spike-ins via BAMSurgeon to yield the synthetic tumor and a 'truth' VCF file containing spiked-in positions. Mutation predictions are evaluated against this ground truth. ( c , d ) To test the robustness of BAMSurgeon with respect to changes in aligner ( c ) and cell line ( d ), we compared the rank of RADIA, MuTect, SomaticSniper and Strelka on two new tumor-normal data sets: one with an alternative aligner, NovoAlign, and the other on an alternative cell line, HCC1954. RADIA and SomaticSniper retained the top two positions, whereas MuTect and Strelka remained third and fourth, independently of aligner and cell line. ( e ) Summary of the three in silico tumors described here. Briefly, the software works by selecting sites using coverage information from the target BAM file. Mutations are spiked in by modifying reads covering the selected sites, realigning a requisite number to achieve the desired alternate allele fraction, and merging the reads back into the original BAM by replacement. Realistic tumors are created ( Fig. 1b ) by partitioning a high-depth BAM, optionally with 'burn-in' mutations to differentiate it from the original BAM, into two disjoint subset BAMs. One receives the spike-in mutations to become the simulated tumor; the other is left intact and is the matched normal. The result is a synthetic tumor-normal pair and a VCF file of true positives (TPs). BAMSurgeon is open source and highly parameterized, thereby allowing fine-tuning of characteristics such as tumor purity, subclone number and coverage thresholds. To demonstrate BAMSurgeon's utility, we performed a series of quality-control studies. First, we took the sequence of the HCC1143 BL cell line and created two separate synthetic tumor-normal pairs, each using the same set of spiked-in mutations but with different random read splitting. We executed four widely used, publicly available mutation callers on each pair: MuTect 16 , RADIA (RNA and DNA integrated analysis) 17 , Strelka 18 and SomaticSniper 19 . We assessed performance on the basis of recall (fraction of spiked-in mutations detected), precision (fraction of predicted SNVs that are true) and F -score (harmonic mean of precision and recall). Ordering and error profiles were largely independent of read splits: RADIA and SomaticSniper retained first and second place, whereas MuTect and Strelka were third and fourth ( Supplementary Fig. 2 ). Second, we generated alignments of HCC1143 using the Burrows-Wheeler Aligner (BWA) and NovoAlign with and without insertion or deletion (indel) realignment. Caller ordering was largely independent of aligner used ( Fig. 1c ). Finally, we tested whether BAMSurgeon results are influenced by genomic background by taking the same set of mutations and spiking them into both HCC1143 and HCC1954 BWA-aligned BAMs. Caller ordering was largely independent of cell line ( Fig. 1d ).
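A conceptual sketch of the core spike-in step is shown below (Python/pysam). This is not BAMSurgeon's actual code: realignment of the altered reads and their merger back into the full BAM, which BAMSurgeon performs, are omitted, and the helper function and its arguments are illustrative.

```python
# Conceptual sketch of the core spike-in step; this is NOT BAMSurgeon's
# actual code. It flips the base at one position in a random subset of
# overlapping reads so the alternate allele approaches a target VAF.
# Realignment of the altered reads and merging them back into the full BAM
# are omitted. Requires an indexed BAM.
import random
import pysam

def mutate_site(in_bam, out_bam, chrom, pos, alt, vaf=0.5, seed=42):
    random.seed(seed)
    bam = pysam.AlignmentFile(in_bam, "rb")
    out = pysam.AlignmentFile(out_bam, "wb", template=bam)
    for read in bam.fetch(chrom, pos, pos + 1):   # only overlapping reads
        # map the 0-based reference position to an offset within this read
        offsets = {rpos: qpos for qpos, rpos
                   in read.get_aligned_pairs(matches_only=True)}
        if pos in offsets and random.random() < vaf:
            seq = list(read.query_sequence)
            seq[offsets[pos]] = alt
            quals = read.query_qualities           # save: edit resets them
            read.query_sequence = "".join(seq)
            read.query_qualities = quals
        out.write(read)
    bam.close()
    out.close()
```

Because each overlapping read is mutated independently with probability vaf, the realized alternate-allele fraction is binomially distributed around the target, which is one reason a real tool counts and realigns a requisite number of reads instead.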
The ICGC-TCGA DREAM Somatic Mutation Calling Challenge To maximize participation, we began with three synthetic genomes, each generated by applying BAMSurgeon to an already-sequenced tumor cell line, thereby avoiding data access issues associated with patient-derived genomes. The tumors varied in complexity, with IS1 being the simplest and IS3 being the most complex. IS1 had a moderate mutation rate (3,537 somatic SNVs), 100% tumor cellularity and no subclonality. In contrast, IS3 had a higher mutation rate (7,903 somatic SNVs) and three subpopulations present at different VAFs. Tumor and normal samples had ∼ 450 million 2-by-101-bp paired-end reads produced by an Illumina HiSeq 2000 sequencer, resulting in ∼ 30× average coverage ( Fig. 1e and Supplementary Table 1 ). Sequences were distributed via the GeneTorrent client from Annai Systems. As a supplement to local computing resources, participants were provided cost-free access to the Google Cloud Platform, where Google Cloud Storage hosted the data and the Google Compute Engine enabled scalable computing. Contestants registered for the SMC-DNA Challenge and submitted predicted mutations in VCF format through the Synapse platform 20 . Multiple entries were allowed per team, and all scores were displayed on public, real-time leaderboards ( Supplementary Table 1 ). To assess overfitting, we excluded a fraction of each genome from leaderboard scores during the challenge. Over 157 d, we received 248 submissions from 21 teams, as well as 21 submissions by the SMC-DNA Challenge administration team to prepopulate leaderboards. A list of all submissions, along with a description of the pipeline used in each, is in Supplementary Table 2 and Supplementary Data 1 . The set of all submissions shows clear precision-recall trade-offs ( Fig. 2a and Supplementary Fig. 3 ) and distinctions amongst top-performing teams. Performance metrics varied substantially across submissions: for the simplest tumor, IS1, recall ranged from 0.559 to 0.994, precision from 0.101 to 0.997 and F -score from 0.046 to 0.975. Figure 2: Overview of the SMC-DNA Challenge data set. ( a ) Precision-recall plot for all IS1 entries. Colors represent individual teams, and the best submission (top F- score) from each team is circled. The inset highlights top-ranking submissions. ( b ) Performance of an ensemble somatic SNV predictor. The ensemble was generated by taking the majority vote of calls made by a subset of the top-performing IS1 submissions. At each rank k , the gray dot indicates performance of the ensemble algorithms ranking 1 to k , and the colored dot indicates the performance of the algorithm at that rank. We then used the "wisdom of the crowds" 12 , 13 by aggregating predictions into an ensemble classifier. We calculated consensus SNV predictions by majority vote (TP or false positive, FP) at each position across the top k submissions. For IS1, consensus predictions were comparable to those of the best-performing teams ( F -score = 0.955–0.984; Fig. 2b ). The consensus achieved high precision (range: 0.968–0.999; Supplementary Fig. 4a ) while maintaining recall (range: 0.939–0.971; Supplementary Fig. 4b ). To assess robustness we evaluated the majority vote predictions of randomly selected sets of submissions. The consensus classifier improved and stabilized as submissions were added ( Supplementary Fig. 5 ). Consensus classifiers for IS2 and IS3 outperformed the best method and showed stable performance ( Supplementary Figs. 3 and 4 ).
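A minimal sketch of this majority-vote consensus and of the precision/recall/F-score computation is shown below; the call sets and truth set are toy stand-ins for real submissions and the spike-in truth VCF.

```python
# Minimal sketch of the majority-vote consensus and scoring described above.
# Call sets and the truth set are toy stand-ins for real submissions and
# the spike-in truth VCF.
from collections import Counter

def majority_vote(call_sets):
    votes = Counter(c for calls in call_sets for c in set(calls))
    return {c for c, n in votes.items() if n > len(call_sets) / 2}

def score(calls, truth):
    tp = len(calls & truth)
    precision = tp / len(calls) if calls else 0.0
    recall = tp / len(truth)
    f = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f

truth = {("chr1", 100, "T"), ("chr1", 200, "G"), ("chr2", 50, "A"), ("chr3", 7, "C")}
subs = [
    truth | {("chr9", 1, "A")},                        # high recall, one FP
    {("chr1", 100, "T"), ("chr2", 50, "A")},           # conservative caller
    (truth - {("chr3", 7, "C")}) | {("chr9", 1, "A")}, # misses one, shares the FP
]
p, r, f = score(majority_vote(subs), truth)
print(f"consensus: precision = {p:.2f}, recall = {r:.2f}, F = {f:.2f}")
```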
Effects of parameterization The within-team variability caused by version and parameter changes was often comparable to that across different teams: 25.6% of variance in IS1 occurred within teams. Critically, this does not reflect overfitting: a team's best submission yielded nearly identical performance on the leaderboard and held-out data (for IS1 the median difference was −1.87 × 10 −3 , ranging from −0.091 to 0.032; Supplementary Fig. 6 ). F -scores were tightly correlated between training and testing data sets (Spearman's rank correlation coefficient ( ρ ) = 0.98 for all three tumors; Fig. 3a ), as were precision and recall ( Supplementary Fig. 7 ). The large variability in accuracy of submissions within a single team highlights the very strong impact of tuning parameters during the challenge. Initial submissions by a team ( Fig. 3b ) tended to achieve a favorable recall with an unsatisfactory precision. The median team improved its F -score from 0.64 to 0.91 (range of improvement: 0.18–0.98) by exploiting leaderboard feedback. Similar results were observed for IS2 and IS3 ( Supplementary Figs. 6 and 7 ). Figure 3: Effects of algorithm tuning. ( a ) The performance of groups on the training data set and on the held-out portion of the genome ( ∼ 10%) are tightly correlated (Spearman's ρ = 0.98) and fall near the plotted y = x line for all three tumors. ( b ) F- score, precision and recall of all submissions made by each team on IS1 are plotted in the order they were submitted. Teams were ranked by the F- score of their best submissions. Color coding as in a . The horizontal red lines give the F- score, precision and recall of the best-scoring algorithm submitted by the Challenge administrators, SomaticSniper. A clear improvement in recall, precision and F- score can be seen as participants adjusted parameters over the course of the challenge. Bar width corresponds to the number of submissions made by each team. ( c ) For each tumor, each team's initial ("naive") and final ("optimized") submissions are shown, with dot size and color indicating overall ranking within these two groups. An "X" indicates that a team did not submit to a specific tumor (or changed the team name). Algorithm rankings were moderately changed by parameterization. ( d ) For each tumor, we assessed how much each team was able to improve its performance. The color scale represents bins of F- score improvement. We considered the ranking of each team within each tumor based on initial ("naive") and best ("optimized") submissions. In general, rankings were moderately changed by parameterization: when a team's naive submission ranked in the top 3, its optimized submission remained among the top 3 in 66% of cases ( Fig. 3c ). Nevertheless, teams routinely improved their overall performance, with 39% able to improve their F -score by at least 0.05 through parameter tuning and 25% improving it by more than 0.20 ( Fig. 3d ). These improvements did not lead to overfitting ( Fig. 3a,b ), a result emphasizing the importance of verification data for algorithm tuning. Effects of genomic localization In subsequent analyses, we focused on the single highest F -score submission from each team, supplemented by submissions generated by executing widely used algorithms with default parameters (for example, MuTect, Strelka, SomaticSniper and VarScan). We first examined the effect of genomic location on prediction accuracy.
For IS1, F -scores differed significantly between intergenic, intronic, untranslated and coding regions ( P = 6.61 × 10 −7 ; Friedman rank-sum test; Fig. 4a ). Predictions were more accurate for coding SNVs (median F -score = 0.95 ± 0.13; ±s.d. unless otherwise noted) than for those in UTRs (median = 0.93 ± 0.14; P = 3.3 × 10 −3 ; paired Wilcoxon rank-sum test), introns (median = 0.91 ± 0.17; P = 2.3 × 10 −5 ) or intergenic regions (median = 0.90 ± 0.19; P = 7.6 × 10 −6 ). This may reflect algorithm tuning on exome sequences or differences in either sequence characteristics or completeness of databases used for germline filtering across these different genomic regions. These trends were replicated in IS2 and IS3 ( Supplementary Fig. 8a,b ). Figure 4: Effects of genomic localization. ( a ) Box plots show the median (line), interquartile range (IQR; box) and ±1.5× IQR (whiskers). For IS1, F- scores were highest in coding and untranslated regions and lowest in introns and intergenic ( P = 6.61 × 10 −7 ; Friedman rank-sum test). ( b ) Rows show individual submissions to IS1; columns show genes with nonsynonymous SNV calls. Green shading means a call was made. The upper bar plot indicates the fraction of submissions agreeing on these calls, and the color indicates whether these are FPs or TPs. The bar plot on the right gives the F- score of the submission over the whole genome. The right-hand side covariate shows the submitting team. All TPs are shown, along with a subset of FPs. Next, we evaluated error rates on nonsynonymous mutations, which are the most likely to be functionally relevant ( Fig. 4b and Supplementary Fig. 8c,d ). Teamwise ranks were generally preserved across different genomic regions ( Supplementary Fig. 9 ), and performance metrics were well correlated ( Supplementary Fig. 10 ) across genomic regions. Nevertheless, few teams achieved 100% accuracy on nonsynonymous mutations. On IS1, 4/18 teams (ranked 1st, 2nd, 5th and 15th on the entire genome) achieved 100% accuracy on nonsynonymous mutations. The remaining submissions contained false negatives (FNs; 3/13), FPs (4/13) or both (6/13). Most nonsynonymous SNVs in IS1 were correctly detected by all submissions (22/39), but 7/39 were missed (i.e., FNs) by at least two teams. These results hold when all individual submissions were considered ( Supplementary Fig. 11 ). In more complex tumors, more errors were seen. No team achieved 100% accuracy on nonsynonymous mutations in IS2: the top two teams made one and four errors, respectively. For IS3, two teams (ranked second and third overall) had 100% SNV accuracy, and error profiles differed notably between subclonal populations ( Supplementary Fig. 12 ). Thus, even in the most accurately analyzed regions of the genome, there are significant inter-algorithm differences in prediction accuracy. Next we asked whether error rates differed across chromosomes as well as between functional regions. For IS1, we observed a surprisingly large F- score range across chromosomes, from 0.76 (chromosome 21, chr21) to 0.93 (chr11), compared with resampled null chromosomes of equal size (chr21, 0.90 ± 0.074; chr11, 0.90 ± 0.076). The poor prediction accuracy for chr21 was an outlier: the next worst-performing chromosome was chr1 ( F -score = 0.87). Chr21 showed lower F- scores than expected by chance (false discovery rate (FDR) = 3.6 × 10 −25 ; two-way ANOVA), whereas chr11 showed higher F- scores (FDR = 2.8 × 10 −3 , two-way ANOVA; Supplementary Table 3 ).
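The resampled-null comparison can be sketched as follows; the per-site TP/FP/FN labels and the per-chromosome counts below are hypothetical placeholders rather than the challenge data.

```python
# Sketch of the resampled-null comparison used above: the observed
# per-chromosome F-score is compared with F-scores of random site sets of
# equal size drawn genome-wide. All labels and counts are toy placeholders.
import numpy as np

def f_score(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def resampled_null(labels, n_sites, n_draws=1000, seed=1):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_draws):
        d = rng.choice(labels, size=n_sites, replace=False)
        scores.append(f_score((d == "TP").sum(), (d == "FP").sum(), (d == "FN").sum()))
    return np.array(scores)

labels = np.array(["TP"] * 900 + ["FP"] * 60 + ["FN"] * 40)  # genome-wide (toy)
chr21 = {"TP": 40, "FP": 12, "FN": 8}                        # chr21 (toy)
null = resampled_null(labels, sum(chr21.values()))
obs = f_score(chr21["TP"], chr21["FP"], chr21["FN"])
print(f"observed F = {obs:.3f} versus null {null.mean():.3f} ± {null.std():.3f}")
```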
The reduced prediction accuracy on chr21 was observed in both FPs ( Supplementary Fig. 13a ) and FNs ( Supplementary Fig. 13b ). We compared a series of 12 variables thought to influence prediction accuracy ( Supplementary Table 4 ). FPs on chr21 showed higher reference-allele counts (mean of 33 versus 23 for the rest of the genome; P < 0.01, one-way ANOVA) and base qualities (sum of 1,268 versus 786; P < 0.01, one-way ANOVA) than FPs on other chromosomes ( Supplementary Table 5 ). These chromosome-specific trends influenced all algorithms in similar ways: permutation analysis showed no chromosome or submission with more variability than that expected by chance ( Supplementary Fig. 14a ). Interestingly, there was no evidence of chromosome-specific error on IS2 and IS3, making its origins and generality unclear ( Supplementary Figs. 14b,c, 15 and 16 ). We premasked chromosomes to exclude regions containing structural variations, and there was no evidence of kataegis (small genomic regions with a localized concentration of mutations) in any tumor 21 ( Supplementary Fig. 17 ). These results highlight the variability of mutational error profiles across tumors. Characteristics of prediction errors We next exploited the large number of independent analyses to identify characteristics associated with FPs and FNs. We selected the best submission from each team and focused on 12 variables ( Supplementary Table 4 ). In IS1, 9/12 variables were weakly associated with the proportion of submissions that made an error at each position (0 ≤ ρ ≤ 0.1; Supplementary Figs. 18–29 ). To evaluate whether these factors contribute simultaneously to somatic SNV prediction error, we created a Random Forest 22 for each submission to assess variable importance ( Supplementary Table 6 ). Key variables associated with FP rates ( Fig. 5a ) included allele counts and base and mapping qualities. Intriguingly, each of these was associated with increased error for some algorithms and reduced error for others. Key determinants of FN rates included mapping quality and normal coverage ( Fig. 5b ). The characteristics of FNs and FPs differed for most algorithms for IS1 (median ρ = 0.40; range: −0.19 to 0.71; Supplementary Fig. 30 ), IS2 ( Fig. 5c,d ) and IS3 (data not shown). Figure 5: Characteristics of prediction errors. ( a – j ) Random Forests assess the importance of 12 genomic variables on SNV prediction accuracy (Online Methods ). Random Forest analysis of FPs ( a , c , e , g , i ) and FNs ( b , d , f , h , j ) for IS1 ( a , b ) and IS2 ( c , d ) as well as for all three tumors using default settings with widely used algorithms MuTect ( e , f ), SomaticSniper ( g , h ) and Strelka ( i , j ). Dot size reflects mean change in accuracy caused by removing this variable from the model. Color reflects the directional effect of each variable (red for increasing metric values associated with increased error; blue for decreasing values associated with increased error; black for factors). Background shading indicates the accuracy of the model fit (see bar at bottom for scale). Each row represents a single set of predictions for a given in silico tumor, and each column shows a genomic variable. SNP, single-nucleotide polymorphism. Source data Full size image To further compare error profiles across tumors, we executed three widely used somatic SNV prediction algorithms with default settings: MuTect ( Fig. 5e,f ), SomaticSniper ( Fig. 5g,h ) and Strelka ( Fig. 5i,j ). 
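As a rough stand-in for the conditional-inference forest (cforest/varimp from the R package party) used in the analysis above, the sketch below estimates variable importance and directional effects with scikit-learn's permutation importance on synthetic per-position features; the feature names and the error model are illustrative assumptions.

```python
# Illustrative Random Forest importance analysis: which genomic variables
# best explain whether a submission made an error at a position, and in
# which direction. Features loosely follow Supplementary Table 4.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.poisson(30, n),        # tumour coverage
    rng.poisson(30, n),        # normal coverage
    rng.uniform(0, 60, n),     # median mapping quality
    rng.poisson(25, n),        # reference-allele count
    rng.poisson(8, n),         # non-reference-allele count
    rng.uniform(0, 1, n),      # G+C content of the 201-bp window
])
names = ["tumour_cov", "normal_cov", "map_qual", "ref_count", "alt_count", "gc"]
# Synthetic response: an error becomes more likely at high alt counts / low MAPQ.
p_error = 1 / (1 + np.exp(-(0.2 * X[:, 4] - 0.05 * X[:, 2] - 1)))
y = rng.random(n) < p_error    # True = submission made an error at this position

forest = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)
imp = permutation_importance(forest, X, y, n_repeats=10, random_state=1)
for name, mean_drop in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    i = names.index(name)
    # Directional effect, as in the paper: feature values in errors vs non-errors.
    direction = np.median(X[y, i]) - np.median(X[~y, i])
    print(f"{name:12s} importance={mean_drop:.4f} direction={'+' if direction > 0 else '-'}")
```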
These default-run error profiles showed universal, algorithm-specific and tumor-specific components. For example, elevated nonreference allele counts were associated with FPs in all tumors for all three methods. FNs were much more sensitive to coverage in the normal sample for Strelka than for other algorithms (Fig. 5j). The largest notable tumor-specific difference was a strong association of normal-sample coverage with FPs in IS1 and IS2, but not IS3, for all algorithms (Fig. 5e,g,i). Given the importance of context-specific errors in sequencing 23 , 24 , 25 , we evaluated trinucleotide bias. BAMSurgeon spike-ins (TPs) had no trinucleotide bias relative to the genome (Supplementary Fig. 31), but FPs showed two significant biases in all three tumors (P < 2.2 × 10 −16 , χ2 test; Fig. 6). First, NCG-to-NTG errors accounted for the four most enriched trinucleotides. This profile, along with elevated NCN-to-NAN and NTN-to-NCN mutations, closely matches the age signature (Signature 1A) detected in human cancers 26 . Second, mutations of a C to create a homopolymeric trinucleotide (i.e., ACA-to-AAA, GCG-to-GGG, TCT-to-TTT) accounted for the 6th–8th most enriched profiles. Because both these signatures were detected in positions with no spike-ins, they are entirely artifactual. The Signature 1A profile was detected in the FPs of some, but not all, submissions (Supplementary Fig. 32) and was not associated with specific sequencing characteristics (Supplementary Fig. 33 and Supplementary Table 7). Figure 6: Trinucleotide error profiles. Proportions of FP SNVs are normalized to the number observed in the entire genome (top), binned by trinucleotide context (bottom), for IS1–IS3. Discussion The crowdsourced nature of the SMC-DNA Challenge created a large data set for learning the general error profiles of somatic mutation detection algorithms and provides specific guidance. We see diverse types of bias across the three tumors, along with a trinucleotide profile of FPs closely resembling the mutational Signature 1A found in primary tumors, likely reflecting spontaneous deamination of 5-methylcytosine at NCG trinucleotides 26 . Algorithms may be detecting genuine variants present at low levels in all cells, artifacts may arise in sequencing (for example, library preparation artifacts) or current algorithms may have higher error rates at NCG trinucleotides. Rigorous mutation verification therefore appears critical before mutational signatures are generated. As seen with previous challenges 12 , 13 , ensembles were comparable to the best individual submission, even when many poorly performing submissions were included. This suggests that mutation calls should be made by aggregating multiple algorithms, although this strategy would need tuning to account for its significant computational demands. The real-time leaderboard highlighted the critical role of parameterization: teams were able to improve rapidly, particularly in precision, once they had an initial performance estimate. Robust ensemble learners may eventually eliminate the problem of parameter optimization; meanwhile, many studies may benefit from a multistep procedure in which an initial analysis is followed by a round of experimental validation and then a final parameter optimization. The lack of overfitting suggests that a modest amount of validation data may suffice, although studies on larger numbers of tumors are needed to optimize this strategy.
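The aggregation of multiple algorithms suggested above can be as simple as a majority vote over per-algorithm call sets; a minimal sketch, with illustrative call sets standing in for parsed submission VCFs:

```python
# Minimal ensemble: keep a position if at least k of the algorithms called it.
calls = {
    "mutect":        {("chr1", 1000), ("chr1", 2000), ("chr2", 3000)},
    "strelka":       {("chr1", 1000), ("chr2", 3000), ("chr2", 4000)},
    "somaticsniper": {("chr1", 1000), ("chr1", 2000), ("chr3", 5000)},
}

def ensemble(call_sets, k):
    """Return positions called by at least k of the algorithms."""
    votes = {}
    for positions in call_sets.values():
        for pos in positions:
            votes[pos] = votes.get(pos, 0) + 1
    return {pos for pos, n in votes.items() if n >= k}

print(sorted(ensemble(calls, k=2)))
# [('chr1', 1000), ('chr1', 2000), ('chr2', 3000)]
```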
Participants were often able to improve performance over time, which suggests that, as with previous crowdsourced challenges, real-time feedback can yield improved methods without overfitting 12 , 13 . Perhaps the most notable impact of this Challenge has been the creation of 'living benchmarks'. Since the ground truth was revealed, 204 new submissions have been made by 22 teams who are using the Challenge data for pipeline evaluation and development. We will keep leaderboards open indefinitely to allow rapid comparison of methods, and we hope journals will expect benchmarking on these data sets in reports of new somatic SNV detection algorithms. Methods Synthetic tumor generation. An overview of the process for generating synthetic tumor-normal pairs using BAMSurgeon is shown in Figure 1. BAMSurgeon supports SNV, indel and SV spike-ins, each accomplished by a separate script (addsnv.py, addindel.py and addsv.py). As the results presented in this paper cover only single-nucleotide mutations, only the SNV portion of the software is discussed here. Sites for single-nucleotide mutations are represented by a single base in the reference genome; three examples are shown in Figure 1a, indicated by the blue, orange and green arrows. Let S be one of these sites, and let the reference base be R ∈ {A,T,C,G}. A column of n bases b 1 , …, b n ∈ {A,T,C,G} from n reads is aligned over reference position S. The variant allele fraction (VAF) at S refers to the fraction of the bases b i at S for which b i ≠ R. In BAMSurgeon, the VAF is specified for each site independently and implemented so that, for each site S, n × VAF reads are selected and the bases b in those reads aligned to position S are changed to some base m ∈ {A,T,C,G}, where m ≠ b ≠ R (Fig. 1a, step 2). Optionally, a minimum alternate allele fraction (let this be a) can be specified such that the specified mutation at S will not be made if any other position sharing a read with position S has VAF ≥ a. For the synthetic tumors analyzed in this paper, this value was set to a = 0.1. This effectively prevents mutation spike-in 'on top' of existing alternate alleles and avoids making mutations that would be inconsistent with existing haplotypes. For each site, modified reads are output to a temporary BAM file, and reads are realigned using one of the supported methods, which currently include bwa backtrack 27 , bwa mem 28 , Bowtie2 (ref. 29 ), GSNAP 30 and NovoAlign ( ) (Fig. 1a, step 3). For each site, a number of parameters govern whether a mutation will be made successfully. These include the minimum read depth, i.e., | b |, which defaults to 5; the minimum number of reads carrying the mutation, i.e., | m |, which defaults to 2; and a minimum differential coverage | b after |/| b before |, which must be ≥0.9 by default. For these three parameters, the synthetic tumor analyzed in this paper was generated using the default values. If any of these criteria are not met, the mutation at the failing site is skipped and will not appear in the 'truth' output. All remapped mutations are merged together and then merged with the original BAM at the end of the process (Fig. 1a, step 4). This scheme also allows for parallelization, which is implemented in each of the BAMSurgeon tools. The procedure for generating synthetic tumor-normal pairs using BAMSurgeon is outlined in Figure 1b. This process requires a high-coverage BAM file; for IS1, HCC1143 BL was used, obtained from .
To differentiate this from the original BAM file (step 1 of Fig. 1b), we selected 10,000 single-nucleotide sites at random using the script included as etc/randomsites.py in the BAMSurgeon distribution, requiring that the selected bases be present in the GRCh37 reference (i.e., positions not represented by the 'N' gap character) and covered by at least ten reads in the original high-coverage BAM file. Of these, 9,658 were added to the original BAM using addsnv.py (Supplementary Data 2), as well as structural mutations not discussed here. This 'burned-in' BAM was then sorted by read name using SAMtools sort -n, and the read pairs were distributed randomly into two BAMs, with each read pair having a 50% chance of ending up in one or the other of the output BAMs (step 2, Fig. 1b). A script to accomplish this is included in the BAMSurgeon distribution as etc/bamsplit.py. Because the original BAM contained 60× genome coverage worth of reads, each of the split BAMs contained ∼30× worth of reads. One of the two BAMs was arbitrarily designated 'synthetic normal' and the other 'pre-tumor'. We again selected 4,000 single-nucleotide sites at random and used addsnv.py to add these to the 'pre-tumor' BAM (step 3, Fig. 1b). Of these, 3,537 were added to the BAM file (Supplementary Data 2). The relevant settings for addsnv.py were as follows: -s 0.1 -m 0.5 -d 0.9 --mindepth 5 --minmutreads 2. Following the addition of structural mutations, the resulting 'synthetic tumor' was post-processed to ensure adherence to the SAM format specification using the script etc/postprocess.py, included in the BAMSurgeon distribution. The resulting tumor-normal pair was validated via ValidateSamFile.jar (part of the Picard tool set: ) and distributed to participants. Given the mutations spiked into the synthetic tumor, a 'truth' VCF was generated and used as the ground truth against which participant mutation calls returned in VCF format were judged, using the evaluation script available at .
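A toy version of the addsnv.py spike-in step described above; real BAMSurgeon edits reads inside a BAM file and realigns them, whereas here a pileup column is reduced to a list of (read, base) pairs, with depth thresholds mirroring the defaults quoted in the text:

```python
# Toy spike-in: at site S, change the aligned base in a VAF-sized fraction of
# reads to an alternate allele m != b != R, skipping sites that fail the
# minimum-depth checks (so they would not appear in the 'truth' output).
import random

random.seed(0)
BASES = "ACGT"

def spike_in(column, ref_base, vaf, min_depth=5, min_mut_reads=2):
    """Return a mutated pileup column, or None if the site fails the checks."""
    if len(column) < min_depth:
        return None  # minimum read depth |b|, default 5
    n_mut = int(round(len(column) * vaf))
    if n_mut < min_mut_reads:
        return None  # minimum number of mutated reads |m|, default 2
    targets = set(random.sample(range(len(column)), n_mut))
    mutated = []
    for i, (read_id, base) in enumerate(column):
        if i in targets:
            alt = random.choice([x for x in BASES if x != base and x != ref_base])
            mutated.append((read_id, alt))
        else:
            mutated.append((read_id, base))
    return mutated

column = [(f"read{i}", "A") for i in range(30)]   # 30 reads, all reference 'A'
print(spike_in(column, ref_base="A", vaf=0.5)[:5])
```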
BAMSurgeon robustness. To test the robustness of BAMSurgeon, we compared the output of four commonly used algorithms—MuTect 16 , RADIA 17 , SomaticSniper 19 and Strelka 18 —on the original data set against the output when an alternate aligner (NovoAlign), cell line (HCC1954) or read split was used. The same spike-in set of mutations was used in each control case. The following algorithm procedures were used for each control case. First, MuTect (v.1.14) was run with default parameters and the per-chromosome VCF output was concatenated using Picard MergeVcfs (v.1.107). Only calls flagged with “PASS” were retained. Second, RADIA (github-July-11-2014) was run with default parameters, and the output VCF files were filtered using the radia filter script with default parameters. After the filtered VCF files were indexed using igvtools (v2.3.12) 31 , the VCFs were merged together using VCFtools (v0.1.11) 32 . Finally, high-confidence somatic SNVs were extracted to generate the final VCF file. Third, somatic SNV candidates were detected using bam-somaticsniper (v.1.0.2) with the default parameters except for the -q option (mapping quality threshold), which was set to 1 instead of 0, as recommended by the developer. To filter the candidate SNVs, we generated a pileup indel file for both the normal and tumor BAM files using SAMtools (v0.1.6). The SomaticSniper package provides a series of Perl scripts to filter out possible FPs ( ). First, standard and LOH filtering were performed using the pileup indel files, and then the bam-readcount filter (bam-readcount downloaded on 10 January 2014) was applied with a mapping quality filter -q 1 (otherwise default settings). In addition, we ran the FP filter. Finally, the high-confidence filter was used with the default parameters, and the final VCF file containing high-confidence somatic SNVs was used. Last, the configuration script was used to set up the Strelka (v1.0.7) analysis pipeline. The default configuration file for BWA was used with the default parameters, with the exception of SkipDepthFilters: the depth filter was turned off. Following the configuration step, somatic SNVs were called using eight cores. This step automatically generates a VCF file containing confident somatic SNVs, and this VCF file was used. The resulting predictions were compared using recall (equation (1)), precision (equation (2)) and F-score (equation (3)). Univariate analysis. A subset of all submissions was used for downstream analysis; this subset consisted of the best submission from each team along with four default submissions submitted by SMC-DNA Challenge admins: MuTect, SomaticSniper, Strelka and VarScan 33 using default parameters. A list of all positions called by at least one of these submissions was generated (including all true SNV positions). For each of these positions, 12 genomic factors were calculated: depth of coverage in the tumor and normal data sets, median mapping quality, read position, number of reference alleles, number of nonreference alleles, sum of base qualities, homopolymer rate, G+C content, region type, distance to the nearest germline single-nucleotide polymorphism (SNP) and the trinucleotide sequence spanning the position. Coverage was calculated using BEDTools 34 coverage (v2.18.2), which calculated the number of reads at each position in both the tumor and normal BAM files. Mapping quality was extracted from the tumor BAM file by converting the BAM file to a BED file using BEDTools bamtobed (v2.18.2) and calculating the median quality score at each position using BEDTools groupby (v2.18.2). The median read position of each genomic position was extracted using Bio-SamTools Pileup (v1.39). The number of reference alleles, number of alternate alleles and sum of base qualities were determined using SAMtools 35 mpileup (v0.1.18). Both homopolymer rate and G+C content were measured over a 201-bp window (±100 bp from the position) determined using BEDTools getfasta (v2.18.2): the homopolymer rate was computed from the homopolymer runs within the window (with n the number of bases in each homopolymer and N the number of homopolymers), and G+C content was computed as the fraction of G and C bases in the window. Annovar region-based annotation (v.2012-10-23) was used to annotate the genomic elements at each position—classifying them as intergenic, intronic, untranslated or exonic. SNPs were called using the Genome Analysis ToolKit (GATK) 36 UnifiedGenotyper, VariantRecalibrator and ApplyRecalibration (v2.4.9). The distance to the closest SNP was calculated using BEDTools closest (v2.18.2). Finally, a recent study showed that cancer types show unique somatic SNV signatures defined by the SNV base change and the trinucleotide context surrounding the variation 26 . To explore the effect of both on SNV prediction, we added base changes (as defined by the submitted VCF files) and trinucleotide contexts (extracted using BEDTools getfasta) to our model.
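The two window metrics can be sketched as below. The G+C fraction follows directly from its definition; the exact homopolymer-rate formula is not given here, so the code assumes the mean homopolymer run length (the sum of the n values divided by N) as a stand-in.

```python
# Per-position window metrics over a 201-bp window centred on the position.
from itertools import groupby

def window_metrics(sequence):
    """Return (homopolymer rate, G+C content) for one window."""
    runs = [len(list(group)) for _, group in groupby(sequence.upper())]
    homopolymer_rate = sum(runs) / len(runs)  # assumed form: mean run length
    gc = sum(base in "GC" for base in sequence.upper()) / len(sequence)
    return homopolymer_rate, gc

seq = "ACGT" * 50 + "A"          # illustrative 201-bp window
print(window_metrics(seq))       # (1.0, ~0.4975)
```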
To determine the relationship between each variable and prediction success, we plotted each genomic variable against the proportion of submissions that made an error at each position. The Spearman correlation coefficient and corresponding P value were calculated for continuous variables, and a one-way ANOVA was run on categorical variables (base change, trinucleotide context and coding region). Multivariate analysis. A Random Forest was used to model the effect of all 12 genomic variables on SNV prediction. Prior to modeling, the correlation between variables was tested. Variables were loosely correlated, with the exception of tumor and normal coverage and of reference and alternate allele counts. Because of this correlation, the cforest implementation of Random Forest from the R package Party (v1.0-13) was used to reduce correlation bias 22 , 37 , 38 , 39 . The average decrease in accuracy, as output by the function varimp from the same package, was used to quantify the importance of each variable: the larger the decrease in accuracy, the more important the variable in explaining prediction accuracy. Each tree predicts whether a submission called an SNV at that position. Ten thousand trees were created, and at each branch three variables were randomly selected for node estimation. This model was run on each submission, analyzing true and false SNV positions separately (the numbers of observations can be found in Supplementary Table 7). One submission, 2319000, failed to converge when the model was run with 10,000 trees, so the model was run with 1,000 trees on this submission (only). The directional effect of each variable was determined by calculating the median difference between samples from each response category using the Wilcoxon rank-sum test. Variable importance was compared across submissions and visualized with a dot map—generated using lattice (v0.20-29) and latticeExtra (v0.6-26)—where dot size and color reflect the mean decrease in accuracy and the directional effect of the variable for that submission, respectively, and background shading shows the accuracy of the model fit (for example, Fig. 5a). Finally, submissions were clustered by variable importance using the Diana algorithm. Trinucleotide analysis. The trinucleotide context (±1 bp) at each called SNV was found using BEDTools getfasta (v.2.18.2). Trinucleotide counts were calculated, ensuring that forward and reverse strands were binned together (for example, ATG was binned with CAT). These bins were further stratified by the base change of the central base, as documented in the submitted VCF files. For three FP positions, out of approximately 200,000, the base change specified did not align with the reference, i.e., the base change specified was from T to C, whereas the trinucleotide at that position was AGT. These positions were considered to be alignment errors and were removed from the analysis. The distribution of trinucleotides in each base change was plotted and normalized using the trinucleotide distribution of the genome. Genomic trinucleotide counts were found by pattern matching each trinucleotide in the FASTA reference file. Again, trinucleotides found on either the forward or reverse strand were binned together. TP and FP positions were plotted separately to compare distributions. Both trinucleotide distributions were tested against the genomic distribution using a χ2 test for given probabilities in the R statistical environment (v.3.0.3).
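A compact sketch of the trinucleotide tally: contexts are collapsed with their reverse complements and the FP distribution is tested against genomic trinucleotide frequencies. The counts below are toy values, and scipy's chisquare stands in for the R χ2 test for given probabilities.

```python
# Trinucleotide binning with reverse-complement collapsing, then a
# chi-squared test of the FP contexts against the genomic distribution.
from collections import Counter
from scipy.stats import chisquare

COMP = str.maketrans("ACGT", "TGCA")

def canonical(tri):
    """Bin a trinucleotide with its reverse complement (e.g. ATG with CAT)."""
    rc = tri.translate(COMP)[::-1]
    return min(tri, rc)

# Toy inputs: contexts around FP calls, and uniform genomic counts per bin.
fp_contexts = ["ACG", "CGT", "TCG", "ACA", "TGT", "ACG", "CCG"]
all_canonical = sorted({canonical(a + b + c)
                        for a in "ACGT" for b in "ACGT" for c in "ACGT"})
genome_counts = {tri: 10 for tri in all_canonical}        # 32 canonical bins

fp_counts = Counter(canonical(t) for t in fp_contexts)
observed = [fp_counts.get(tri, 0) for tri in all_canonical]
total = sum(genome_counts.values())
expected = [len(fp_contexts) * genome_counts[tri] / total for tri in all_canonical]
chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, P = {p:.3g}")
```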
Coding versus noncoding. To determine whether position functionality affected SNV prediction, we annotated all positions using Annovar region-based annotation (v.2012-10-23) to determine the genomic element of each SNV. Positions called by at least one submission (including all true SNVs) were binned into intergenic (n = 24,226), intronic (n = 10,893), untranslated (n = 252) and coding (n = 211) regions. The F-score of positions in these regions was calculated and visualized in a strip plot generated using lattice (v0.20-29) and latticeExtra (v0.6-26). The difference in F-score over the four regions was tested using the Friedman rank-sum test to account for the effect of each submission. The difference in F-score between each pair of regions was compared using the paired Wilcoxon rank-sum test. Accuracy in exonic regions. The F-score was calculated in a subset of SNVs located in exonic regions corresponding to known genes (as determined by Annovar gene-based annotation (v.2012-10-23)). It was hypothesized that algorithms would have increased prediction success in these regions owing to the negative clinical impact that prediction errors would have. Out of the 126 called positions in functional genes, a further subset of 42 positions was extracted and classified on the basis of mutation functionality; only nonsynonymous SNVs were present in this subset (as determined by Annovar). Selection criteria ensured that these positions were called by four or more of the submissions. Lattice (v0.20-29) and latticeExtra (v0.6-26) were used to compare the difference in prediction success of submissions in this subset. Chromosomal bias of predicted SNVs. The F-score of each submission on each chromosome was calculated individually. A box plot, generated using lattice (v0.20-29) and latticeExtra (v0.6-26), suggested differences in F-scores over chromosomes. To quantify the chromosome variation seen, we implemented a two-way ANOVA incorporating chromosomes and submissions. Resulting P values were adjusted for multiple-hypothesis testing using the FDR 40 . To account for the variation seen in chromosome 21, we compared the distributions of ten genomic variables (Supplementary Table 5) in both FNs and FPs on chromosome 21 against the remaining genome using the Wilcoxon rank-sum test. P values were adjusted for multiple testing using the false discovery rate method. To further analyze chromosomal bias, we compared the rank of each submission on individual chromosomes to the overall rank of the submission. The significance of the observed variation was tested by generating a null distribution similar to that previously described. The F-score of null 'chromosomes' (randomly sampled positions over 10,000 iterations) was calculated and used to rank submissions. The deviation of each submission on each chromosome from its overall rank was weighted by the difference in overall F-score accuracy between the chromosome rank and the overall rank. We then determined the number of times, over the 10,000 iterations, that the deviation seen in the null ranks was greater than the deviation in the chromosomal ranks. This count was divided by 10,000 to produce the probability of observing the chromosomal variation by chance alone (i.e., the P value) for each submission on each chromosome. The variation and corresponding P value were visualized using a dot map generated using lattice (v0.20-29) and latticeExtra (v0.6-26). Accession codes. NCBI Sequence Read Archive: SRP042948.
| Cancer research leaders at the Ontario Institute for Cancer Research, Oregon Health & Science University, Sage Bionetworks, the distributed DREAM (Dialog for Reverse Engineering Assessment and Methods) community and The University of California Santa Cruz published the first findings of the ICGC-TCGA-DREAM Somatic Mutation Calling (SMC) Challenge today in the journal Nature Methods. These results provide an important new benchmark for researchers, helping to define the most accurate methods for identifying somatic mutations in cancer genomes. The results could be the first step in creating a new global standard to determine how well cancer mutations are detected. The Challenge, which was initiated in November 2013, was an open call to the research community to address the need for accurate methods to identify cancer-associated mutations from whole-genome sequencing data. Although genomic sequencing of tumour genomes is exploding, the mutations identified in a given genome can differ by up to 50 per cent just based on how the data is analyzed. Research teams were asked to analyze three in silico (computer simulated) tumour samples and publicly share their methods. The 248 separate analyses were contributed by teams around the world and then analyzed and compared by Challenge organizers. When combined, the analyses provide a new ensemble algorithm that outperforms any single algorithm used in genomic data analysis to date. The authors of the paper also report a computational method, BAMSurgeon (developed by co-lead author Adam Ewing, a postdoctoral fellow in the lab of Dr. David Haussler at UC Santa Cruz), capable of producing an accurate simulation of a tumour genome. In contrast to tumour genomes from real tissue samples, the Challenge organizers had complete knowledge of all mutations within the simulated tumour genomes, allowing comprehensive assessment of the mistakes made by all submitted methods, as well as their accuracy in identifying the known mutations. The submitted methods displayed dramatic differences in accuracy, with many achieving less than 80 per cent accuracy and some methods achieving above 90 per cent. Perhaps more surprisingly, 25 per cent of teams were able to improve their performance by at least 20 per cent just by optimizing the parameters on their existing algorithms. This suggests that differences in how existing approaches are applied are critically important - perhaps more so than the choice of the method itself. The group also demonstrated that false positives (mutations that were predicted but didn't actually exist) were not randomly distributed in the genome but instead they were in very specific locations, and, importantly, the errors actually closely resemble mutation patterns previously believed to represent real biological signals. "Overall these findings demonstrated that the best way to analyze a human genome is to use a pool of multiple algorithms," said co-lead author Kathleen Houlahan, a Junior Bioinformatician at the Ontario Institute for Cancer Research working with the Challenge lead, Dr. Paul Boutros. "There is a lot of value to be gained in working together. People around the world are already using the tools we've created. These are just the first findings from the Challenge, so there are many more discoveries to share with the research community as we work through the data and analyze the results." "Science is now a team sport. As a research community we're all on the same team against a common opponent," said Dr. 
Adam Margolin, Director of Computational Biology at Oregon Health & Science University and co-organizer of the challenge. "The only way we'll win is to tackle the biggest, most challenging problems as a global community, and rapidly identify and build on the best innovations that arise from anywhere. All of the top innovators participated in this Challenge, and by working together for a year, I believe we've advanced our state of knowledge far beyond the sum of our isolated efforts." "Paul and the whole team have done something truly exceptional with this Challenge. By leveraging the SMC Challenge to establish a living community benchmark, the Challenge organizers have made it run more like an "infinite game" where the goal is no longer one of winning the Challenge but instead of constantly addressing an ever-changing horizon," said Dr. Stephen Friend, President of Sage Bionetworks. "And given the complex heterogeneity of cancer genomes and the rapid rate with which next generation sequencing technologies keep changing and evolving, this seems like an ideal approach to accelerate progress for the entire field." "We owe it to cancer patients to interpret tumour DNA information as accurately as we can. This study represents yet another great example of harnessing the power of the open, blinded competition to take a huge step forward in fulfilling that vision," said Josh Stuart, professor of biomolecular engineering at UC Santa Cruz and a main representative of The Cancer Genome Atlas project among the authors. "We still have important work ahead of us, but accurate mutation calls will give a solid foundation to build from." | 10.1038/nmeth.3407 |
Physics | How our biological clocks are locked in sync | Colas Droin et al. Low-dimensional dynamics of two coupled biological oscillators, Nature Physics (2019). DOI: 10.1038/s41567-019-0598-1 Journal information: Nature Physics , Nature | http://dx.doi.org/10.1038/s41567-019-0598-1 | https://phys.org/news/2019-08-biological-clocks-sync.html | Abstract The circadian clock and the cell cycle are two biological oscillatory processes that coexist within individual cells. These two oscillators were found to interact, which can lead to their synchronization. Here, we develop a method to identify a low-dimensional stochastic model of the coupled system directly from time-lapse imaging in single cells. In particular, we infer the coupling and nonlinear dynamics of the two oscillators from thousands of mouse and human single-cell fluorescence microscopy traces. This coupling predicts multiple phase-locked states showing different degrees of robustness against molecular fluctuations inherent to cellular-scale biological oscillators. For the 1:1 state, the predicted phase-shifts following period perturbations were validated experimentally. Moreover, the phase-locked states are temperature-independent and evolutionarily conserved from mouse to human, hinting at a common underlying dynamical mechanism. Finally, we detect a signature of the coupled dynamics in a physiological context, explaining why tissues with different proliferation states exhibited shifted circadian clock phases. Main The circadian clock and the cell cycle are two periodic processes that cohabit in many types of living cell. In single mammalian cells, circadian clocks consist of autonomous feedback loop oscillators ticking with an average period of about 24 h (ref. 1 ), and controlling many downstream cellular processes 2 . In conditions of high proliferation such as those found in cultured cells or certain tissues, the cell cycle progresses essentially continuously and can thus be abstracted as an oscillator with an average period matching the cell doubling time. Both processes fluctuate due to intra-cell molecular noise, as well as external fluctuations. While the precision of the circadian period is typically about 15% in fibroblast cells 1 , the cell cycle can be more variable depending on the conditions and cell lines 3 , 4 . Interestingly, previous work showed that the two cycles can mutually interact 1 , which may then lead, as theory predicts, to synchronized dynamics 5 , 6 and important physiological consequences such as cell-cycle synchrony during liver regeneration 7 . In tissue-culture cells, which are amenable to systematic microscopy analysis, it was found that the phase dynamics of two oscillators shows phase-locking 5 , 6 , defined by a rational rotation number p : q such that p cycles of one oscillator are completed while the other completes q . The influence of the circadian clock on cell-cycle progression and division timing has been analysed in several systems 7 , 8 , 9 , 10 , 11 , 12 , 13 . On the other hand, we showed in mouse fibroblasts that the cell cycle strongly influences the circadian oscillator 5 , which was also investigated theoretically and linked with DNA replication in bacteria 14 . In addition, human cells can switch between a state of high cell proliferation with a damped circadian oscillator, to a state of low proliferation but robust circadian rhythms, depending on molecular interactions and activities of cell-cycle and clock regulators 15 . 
Here, we exploit the fact that the two coupled cycles evolve on a low-dimensional and compact manifold (the flat torus) to fully characterize their dynamics. In particular, starting from a generic stochastic model for the interacting phases combined with fluorescence microscopy recordings from thousands of individual cells, we obtained a data-driven reconstruction of the coupling function describing how the cell cycle influences the circadian oscillator. This coupling phase-locks the two oscillators in a temperature-independent manner, and only a few of the deterministically predicted phase-locked states were stable against inherent fluctuations. Moreover, we established that the coupling between the two oscillators is conserved from mouse to human, and can override systemic synchronization signals such as temperature cycles. Finally, we showed in a physiological context how such coupling explains why mammalian tissues with different cell proliferation rates have shifted circadian phases. Modelling the dynamics of two coupled oscillators To study the phase dynamics of the circadian and cell-cycle oscillators, we reconstructed a stochastic dynamic model of the two coupled oscillators from single-cell time-lapse microscopy traces of a fluorescent Rev-erbα–YFP (yellow fluorescent protein) circadian reporter 1 , 5 . Our approach consists of explicitly modelling the measured fluorescent signals, using a set of stochastic ordinary differential equations (SODEs) whose parameters are estimated by maximizing the probability of observing the data over the entire set of cell traces ( Methods ). Here, we present the key components of the model (detailed in Supplementary Information ). Phase model First, we represent the phase dynamics of the circadian oscillator ( θ = 0 corresponds to peaks of fluorescence) and the cell cycle ( ϕ = 0 represents cytokinesis) on a [0,2π) × [0,2π) torus. Since we showed previously that the influence of the clock on the cell cycle was negligible in NIH3T3 cells 5 , here we model only how the cell-cycle progression influences the instantaneous circadian phase velocity ω θ using a general coupling function F ( θ , ϕ ) (Fig. 1a ). To account for circadian phase fluctuations and variability in circadian period length known to be present in single cells 1 , 16 , we added a phase diffusion term σ θ d W t . For the cell-cycle phase, we assumed a piecewise linear and deterministic phase progression in between two successive divisions. The SODEs for the phases read: $$\begin{aligned} \mathrm{d}\theta &= \frac{2\pi}{T_\theta}\,\mathrm{d}t + F(\theta,\phi)\,\mathrm{d}t + \sigma_\theta\,\mathrm{d}W_t \\ \mathrm{d}\phi &= \frac{2\pi}{T_\phi^i}\,\mathrm{d}t \end{aligned}$$ (1) Here, T θ represents the intrinsic circadian period, while the term \(T_\phi^i\) represents the i th cell-cycle interval between two successive divisions. Fig. 1: Reconstructing the phase dynamics and coupling of two biological oscillators. a , In mouse fibroblasts, the cell cycle (left) can influence the circadian oscillator (right) according to a coupling function F ( θ , ϕ ), where ϕ denotes the cell cycle and θ the circadian oscillator phases. b , The stochastic model for the signal S t using diffusion-drift SODEs for the circadian phase θ t , amplitude A t and background B t fluctuations, as well as a function w ( θ ) linking the phase θ t to the measured observations, and F ( θ , ϕ ).
c , Fluorescence microscopy traces (Rev-erbα–YFP circadian reporter) are recorded for non-dividing and dividing cells (top left and top right boxes). Coupling-independent parameters (*) are estimated from non-dividing cells while it is necessary to use dividing cells to infer F ( θ , ϕ ) (**). The optimization problem is solved by converting the model to a HMM in which θ t , A t and B t are latent variables. The HMM is used on traces to compute posterior probabilities \({P}(\theta_t|{D})\) of circadian phases (bottom right box), while the cell-cycle phase is retrieved using linear interpolation between successive divisions (top right box, vertical lines). D represents the data (all dividing cells). An iterative expectation–maximization (E–M) algorithm then yields the converged F ( θ , ϕ ) (bottom left box). Full size image Model of the signal We linked the circadian phase with the measured time traces through a 2π-periodic function w ( θ ). In addition, as suggested by typical data traces (Supplementary Fig. 1a ), we considered amplitude ( A t ) and baseline ( B t ) fluctuations, which for simplicity we modelled as independent from θ t , an assumption that was supported a posteriori ( Supplementary Information ). The full model for the observed signal S t thus reads: $$S_t = {\rm e}^{A_t}w\left( {\theta _t} \right) + B_t + \xi$$ (2) where ξ is a normally distributed random variable (measurement noise) and A t and B t are Ornstein–Uhlenbeck processes varying more slowly than the phase distortion caused by F ( θ , ϕ ) (that is, on timescales on the order of the circadian period; Supplementary Information ). Inference of phases and coupling function From this stochastic model (equations ( 1 ) and ( 2 ), Fig. 1b ), we built a hidden Markov model (HMM) to calculate posterior probabilities of the oscillator phases at each measured time point, using the forward–backward algorithm 17 . To estimate F ( θ , ϕ ), we used a maximum-likelihood approach that combines goodness of fit with sparseness and smoothness constraints, which we implemented with an expectation–maximization algorithm ( Methods and Supplementary Information ). The successive steps of our approach are illustrated in Fig. 1c . The traces of dividing cells indicated that, typically, the circadian phase progression shows variations in phase velocity (Supplementary Fig. 1a ). To validate that these variations can be used to identify F ( θ , ϕ ), we generated noisy traces in silico with predefined F ( θ , ϕ ) and reconstructed the coupling function, showing excellent qualitative agreement (Supplementary Fig. 1b,c ). Influence of the cell cycle on the circadian phase In mouse embryonic fibroblasts (NIH3T3), we showed that due to the coupling, circadian periods decrease with temperature in dividing cells, but not in quiescent cells 5 . To further understand how temperature influences the interaction between the two oscillators, we reanalysed NIH3T3 traces (24–72 h long) obtained at 34 °C, 37 °C, and 40 °C (ref. 5 ). From those, we found that both the inferred coupling functions and phase densities at the three temperatures were very similar, with almost identical 1:1 phase-locked orbits (Supplementary Fig. 2a–c ). We therefore modelled the coupling as temperature-independent and reconstructed a definitive F ( θ , ϕ ) from traces at all temperatures (Fig. 2a and Supplementary Fig. 2d ). 
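A minimal forward simulation of equations (1) and (2) illustrates the generative model; since the inferred F(θ, ϕ) and waveform are not reproduced here, a single-harmonic coupling K sin(ϕ − θ) and w(θ) = cos θ are used as stand-in assumptions.

```python
# Euler-Maruyama simulation of the circadian phase, a deterministic linear
# cell-cycle phase, Ornstein-Uhlenbeck amplitude/background and Gaussian
# measurement noise, as in equations (1) and (2).
import numpy as np

rng = np.random.default_rng(2)
dt = 0.1                                   # time step (hours)
t = np.arange(0, 72, dt)
T_theta, T_phi = 24.0, 22.0                # intrinsic periods (hours)
sigma_theta, K = 0.05, 0.3                 # phase noise, assumed coupling strength
tau, sigma_ou = 24.0, 0.1                  # OU timescale (~circadian) and OU noise

phi = (2 * np.pi / T_phi) * t % (2 * np.pi)   # cell-cycle phase, resets at division
theta = np.zeros(len(t))
A = np.zeros(len(t))                          # log-amplitude fluctuations
B = np.zeros(len(t))                          # background fluctuations
for i in range(1, len(t)):
    F = K * np.sin(phi[i - 1] - theta[i - 1])  # stand-in for the inferred coupling
    theta[i] = (theta[i - 1] + (2 * np.pi / T_theta + F) * dt
                + sigma_theta * np.sqrt(dt) * rng.normal()) % (2 * np.pi)
    A[i] = A[i - 1] - A[i - 1] / tau * dt + sigma_ou * np.sqrt(dt) * rng.normal()
    B[i] = B[i - 1] - B[i - 1] / tau * dt + sigma_ou * np.sqrt(dt) * rng.normal()

signal = np.exp(A) * np.cos(theta) + B + 0.05 * rng.normal(size=len(t))
print(signal[:5])
```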
The reconstructed coupling function shows a diffuse structure mainly composed of two juxtaposed diagonal stripes: one for phase acceleration (red) and one, less structured, for deceleration (blue). The slopes of these stripes are about 1, which indicates that an approximate minimal model of the coupling would be a function F ( θ , ϕ ) = f ( θ − ϕ ). However, the phase velocity varies along the stripes and attractor (see below), which justifies using a two-dimensional parameterization of the coupling function. The phase density for cells with a fixed cell-cycle period of 22 h (corresponding to the mean cell-cycle period in the full dataset; Fig. 2b and Supplementary Video 1 ) clearly suggests 1:1 phase-locking. In fact, analysing the predicted deterministic dynamics (equation ( 1 ), with the reconstructed F ( θ , ϕ ), and without the noise) shows a 1:1 attractor (Fig. 2c ). Thus, in this 1:1 state, the endogenous circadian period of 24 h is shortened by 2 h, which results from an acceleration beginning after cytokinesis ( ϕ = 0, when the circadian phase is near θ ≈ 0.8 × 2π) and lasting for the entire G1 phase, until about θ ≈ 0.4 × 2π, when cells typically enter S phase ( ϕ ≈ 0.4 × 2π at the G1/S transition; Supplementary Information and Fig. 2d ). Fig. 2: Influence of the cell cycle on the circadian phase enables 1:1 phase-locking. a , Coupling F ( θ , ϕ ) optimized on dividing single-cell traces. Due to the similar results (Supplementary Fig. 2 ), traces from the three temperatures ( n = 154, 271 and 302 traces at 34 °C, 37 °C and 40 °C, respectively) are pooled. b , The density of inferred phase traces from all of the dividing traces with 22 ± 1 h cell-cycle intervals indicates a 1:1 phase-locked state. c , Numerical integration of the phase velocity field (arrows, deterministic model) yields a 1:1 attractor (green line) and repeller (red line). Here, the cell-cycle period was set to 22 h. d , The circadian phase velocity is not constant along the attractor, here for cells with 22 ± 1 h cell-cycle intervals. Data (blue line, standard deviation in light-blue shading) and deterministic simulation (orange line). Inset: integrated time along the attractor. The grey line shows the constant bare phase velocity \(\omega _\theta = \frac{{2{\mathrm{\pi}} }}{{24\,{\rm h}}}\) . Phase dynamics in perturbation experiments The reconstructed model allows us to simulate the circadian phase dynamics as a function of the cell-cycle period, which is relevant as the cells display a significant range of cell-cycle lengths (Supplementary Fig. 3a ). In the deterministic system, we find 1:1 phase-locking over a range of cell-cycle times varying from 19 h to 27 h, showing that the cell cycle can both globally accelerate and slow down circadian phase progression (Fig. 3a ). The attractor shifts progressively to the right in the phase space, yielding a circadian phase at division ranging from θ ≈ 0.7 × 2π when T ϕ = 19 h to θ ≈ 0.9 × 2π when T ϕ = 27 h. Since the attractor for different cell-cycle periods shifts, the circadian phase velocity profile also changes (Supplementary Fig. 3b ). To validate the predicted shifts, we experimentally subjected cells to perturbations inducing a large variety of cell-cycle periods and compared the observed circadian phase to the model prediction at three different cell-cycle phases, revealing an excellent agreement, with no additional free parameters (Fig. 3b ).
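The deterministic analysis can be sketched by integrating equation (1) without noise and keeping the late-time orbit, which has converged to the attractor whenever the oscillators are phase-locked. The sinusoidal coupling is again an assumed stand-in, so the locked phase relation depends on K and the period mismatch and will not match the experimentally inferred attractor quantitatively.

```python
# Forward integration of the noiseless phase equations; the last cycles of
# the trajectory trace out the attractor in the (phi, theta) torus.
import numpy as np

def attractor(T_phi, T_theta=24.0, K=0.3, dt=0.01, n_cycles=60):
    theta, orbit = 0.0, []
    for step in range(int(n_cycles * T_phi / dt)):
        t = step * dt
        phi = (2 * np.pi / T_phi * t) % (2 * np.pi)
        theta = (theta + (2 * np.pi / T_theta + K * np.sin(phi - theta)) * dt) % (2 * np.pi)
        if t > (n_cycles - 2) * T_phi:   # keep only the last two (converged) cycles
            orbit.append((phi, theta))
    return np.array(orbit)

orbit = attractor(T_phi=22.0)
# Circadian phase (as a fraction of 2*pi) at division (phi ~ 0) on the 1:1 orbit:
at_division = orbit[orbit[:, 0] < 0.05, 1]
print(at_division.mean() / (2 * np.pi))
```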
Fig. 3: The coupling between the cell cycle and the circadian oscillator predicts phase-shifts and phase-locking attractors in perturbation experiments. a , Simulated (deterministic) attractors for cell-cycle periods ranging from 19 h to 27 h show that the dephasing of the cell cycle and the circadian oscillator changes within the 1:1 state. Periods just outside this range yield quasiperiodic orbits. The horizontal dashed lines indicate three different cell-cycle phases $$\phi = 0,\phi = \frac{1}{3} \times 2{\mathrm{\pi}} ,\phi = \frac{2}{3} \times 2{\mathrm{\pi}}$$ used in b . b , Predictions from a (dashed grey lines) against independent experimental data collected from 12 perturbation experiments (coloured symbols, see legend, notation explained in Methods ). shCRY, shRNA-mediated knockdown of Cry2 . c , Multiple phase-locked states are predicted, recognizable by rational relationships between the frequencies of the entraining cell cycle and the entrained circadian oscillator, interspersed by quasiperiodic intervals. d , Arnold tongues showing multiple phase-locked states as a function of cell-cycle periods and coupling strength ( K = 1 corresponds to the experimentally found coupling). Stable zones (white tongues) reveal attractors interspersed by quasiperiodicity. Although there are only two wider phase-locked states (1:1 and 1:2), several other p : q states are found. e , f , Representative single-cell traces (data in yellow) evolving near predicted attractors (green lines). The traces are for a cell with T ϕ = 24 h ( e ) and one with T ϕ = 48 h ( f ) near the 1:1 and 1:2 orbits, respectively. The simulations also clearly revealed multiple phase-locked states (1:2, 1:1, 2:1, 3:1 and so on, with p : q indicating the number of cell cycles p and the number of clock cycles q ), represented as Arnold tongues (Fig. 3c , and Supplementary Video 2 for an animated phase-space representation). We identified cell data trajectories following the deterministic attractors almost perfectly, both in the 1:1 and 1:2 phase-locking states (Fig. 3e,f , respectively); however, cells showing other p:q states were rarely observed. Fluctuations extend 1:1 phase-locking asymmetrically To understand the differences between the simulated deterministic system and observed cell traces, we simulated the stochastic dynamics (equation ( 1 )). We then compared measured data trajectories stratified by cell-cycle periods (Fig. 4a ) with deterministic (Fig. 4b ) and stochastic simulations (Fig. 4c ). This revealed that the data agree better with the stochastic than with the deterministic simulations, indicating that the phase fluctuations qualitatively change the phase portrait. One striking observation is the increased range of 1:1 phase-locking in the noisy system; however, the extension is asymmetric, as it occurs for shorter, but not for longer, cell-cycle periods. Indeed, while 1:2 phase-locking is observed in the data and the noisy simulations, the deterministically predicted 2:1 state is replaced in the data and the stochastic system by 1:1-like orbits. Consistently, spectral analysis revealed significant differences between deterministic and stochastic simulations (Supplementary Fig. 4a,b and Supplementary Video 3 ); in addition, the coupling, specifically in the 1:1 state, was able to efficiently filter the noise (Supplementary Fig. 4b , right).
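The Arnold-tongue picture can be probed numerically through the rotation number, the number of clock cycles completed per cell cycle: a value pinned at a rational p:q over a parameter interval marks a phase-locked tongue. The sketch below uses the assumed single-harmonic coupling, which only produces the 1:1 tongue; the additional p:q states require the structure of the inferred two-dimensional F(θ, ϕ).

```python
# Rotation number as a function of the cell-cycle period for the toy coupling
# F = K*sin(phi - theta); locking occurs when the period mismatch is below K.
import math

def rotation_number(T_phi, K, T_theta=24.0, dt=0.01, n_cycles=100):
    theta = 0.0  # unwrapped circadian phase (radians)
    for step in range(int(n_cycles * T_phi / dt)):
        phi = (2 * math.pi / T_phi) * step * dt
        theta += (2 * math.pi / T_theta + K * math.sin(phi - theta)) * dt
    return theta / (2 * math.pi) / n_cycles   # clock cycles per cell cycle

for T_phi in (12, 19, 22, 27, 36, 48):
    print(T_phi, round(rotation_number(T_phi, K=0.1), 3))
# With K = 0.1 the 19-36 h periods lock at ~1.000, while 12 h and 48 h drift
# quasiperiodically (values pulled from their natural ratios 0.5 and 2 toward 1).
```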
Fig. 4: Single-cell data and stochastic simulations reveal robust 1:1 and 1:2 phase-locked states. a , Phase-space densities from the experimental traces stratified by cell-cycle periods (±1 h for each reference period); n = 16, 223, 303, 54 and 4 cell traces in the T = 12, 16, 24, 36 and 48 h panels, respectively. b , Vector fields and simulated (deterministic) trajectories for the different cell-cycle periods. Attractors are shown in green (forward time integration) and repellers (backward integration) in red (see also Supplementary Video 2 ). c , Phase-space densities obtained from stochastic simulations of the model match the data better than those in b . Evolutionarily conserved phase-locking Most studies investigating the interaction between the cell cycle and the circadian oscillators in mammals are based on rodent models 1 , 5 , 6 , 7 , 8 , 18 . To test whether the above phase-locked dynamics are conserved in human U2OS cells, an established circadian oscillator model 19 , 20 , we engineered a U2OS cell line termed U2OS-Dual expressing a dual circadian fluorescent (Rev-erbα–YFP) and luminescent (Bmal1–Luc) reporter system. U2OS-Dual cells possess a functional circadian clock behaving similarly to NIH3T3 cells also expressing a Bmal1–Luc luminescent reporter 21 (Fig. 5a ). We scrutinized the relationship between the two cell lines by comparing their behaviour in different conditions: at 34 °C and 37 °C for cells with synchronized and non-synchronized circadian cycles 5 (Fig. 5b–g ). Fig. 5: Conserved influence of the cell cycle on the circadian clock in human U2OS osteosarcoma cells. a , Mean luminescence intensities (± s.d., n = 3) from non-dividing NIH3T3 and U2OS cells grown at 37 °C expressing a Bmal1–Luc reporter. The values in the legend correspond to the mean periods ± s.d. b , Top: semi-automated tracking of U2OS cell lines expressing the Rev-erbα–YFP circadian fluorescent reporter. Bottom: circadian traces obtained from quantification of tracked nuclei. The red vertical lines represent cell divisions (cytokinesis) and the blue vertical lines show Rev-erbα–YFP signal peaks. c , Top: stack of divisions (red) and Rev-erbα–YFP peaks (blue) for U2OS single-cell traces centred on divisions. Bottom: distribution of the time of division relative to the next Rev-erbα–YFP peaks (in red: mean ± s.d., n = 1,298). d , Divisions and Rev-erbα–YFP peaks from single non-synchronized (top) and dexamethasone-synchronized (bottom) U2OS traces ordered on the first division. e , The synchronization index from non-synchronized (black) and dexamethasone-synchronized (red) traces for the circadian phase (top) and cell-cycle phase (bottom) estimated as in ref. 5 . The circadian synchronization index from non-synchronized cells is relatively high due to plating. The dashed grey lines show the 95th percentiles of the synchronization index for randomly shuffled traces. f , Cell-cycle and circadian periods for U2OS cells grown at 34 °C and 37 °C ( n > 90 for all distributions). g , Mean luminescence intensities (± s.d., n = 3) for non-dividing U2OS cells grown at 34 °C expressing a Bmal1–Luc reporter. The values in the legend correspond to the mean periods ± s.d. h , Mean and standard deviation of the circadian period for non-dividing U2OS cells grown at 34 °C and 37 °C ( n = 8 at 34 °C and n = 9 at 37 °C, two-sided Wilcoxon’s test). i , The coupling function F ( θ , ϕ ) optimized on n = 551 dividing U2OS cells grown at 37 °C, superimposed with the attractor ( T ϕ = 22 h) obtained from deterministic simulations (green line).
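The synchronization index of Fig. 5e can be illustrated with the standard Kuramoto order parameter R = |⟨e^{iθ}⟩|, computed across cells at each time point; whether ref. 5 used exactly this estimator is an assumption, and the phase traces below are synthetic.

```python
# Kuramoto order parameter per time point: synthetic traces mimic a
# dexamethasone-like reset (tightly clustered initial phases) followed by
# phase diffusion, so R decays toward an unsynchronized baseline.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_times = 200, 100                      # cells x hourly time points
theta0 = rng.normal(0.0, 0.3, n_cells)           # reset: narrow initial spread
drift = (2 * np.pi / 24) * np.arange(n_times)[:, None]
diffusion = np.cumsum(rng.normal(0, 0.05, (n_times, n_cells)), axis=0)
theta = theta0[None, :] + drift + diffusion      # (time, cell) phase matrix

R = np.abs(np.exp(1j * theta).mean(axis=1))      # one index per time point
print(f"R(start) = {R[0]:.2f}, R(end) = {R[-1]:.2f}")
```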
Similarly to NIH3T3 cells, the division events of non-synchronized U2OS-Dual cells grown at 37 °C occurred 4.96 ± 2.6 h before a peak in the circadian fluorescent signal (Fig. 5c ), indicating that the cell cycle and the circadian clock interact. To investigate the directionality of this interaction, we tested, as in NIH3T3 cells 5 , whether the circadian clock phase could influence cell-cycle progression by resetting the circadian oscillator using dexamethasone, a circadian resetting cue 22 that does not perturb the cell cycle 1 . The expected resetting effect of dexamethasone on the circadian phase is indicated by the density of peaks in reporter levels during the first 10 h of recording, but it had no noticeable effect on the timing of the first division (Fig. 5d ). However, the circadian peak following the first division occurred systematically around 5 h after the division in both conditions, suggesting that cell division in U2OS can reset circadian phases and overwrite dexamethasone synchronization. Synchronization of the circadian clocks was, as expected, higher for dexamethasone-treated than for non-treated cells and gradually decreased to reach the level of the untreated cells (Fig. 5e ), contrasting with the generally lower synchronization of cell divisions in both conditions. To then test whether the cell cycle could influence the circadian clock, we lengthened the cell-cycle period by growing cells at 34 °C and compared their behaviour with those grown at 37 °C. Interestingly, cells at 34 °C showed a longer circadian period compared to those at 37 °C (Fig. 5f ), unlike the temperature-compensated circadian periods (~25 h) in non-dividing cells (Fig. 5a,g,h ). Thus, similarly to mouse NIH3T3 cells, the coupling directionality is predominantly from the cell cycle to the circadian clock. In fact, the reconstructed coupling function for U2OS-Dual cells grown at 37 °C (Fig. 5i ) is structurally similar to that obtained in mouse fibroblasts (Fig. 2a ), with the ensuing dynamics also showing a 1:1 attractor. Dividing cells lose circadian temperature entrainment In mammals, circadian clocks in tissues are synchronized by multiple systemic signals 23 . In fact, temperature oscillations mimicking those physiologically observed can phase-lock circadian oscillators in non-dividing (contact-inhibited) NIH3T3 cells in vitro 24 . To study how the observed interactions influence temperature entrainment, we applied temperature cycles (24 h period ranging from 35.5 °C to 38.5 °C) to U2OS cells growing at different rates (plated at different densities) and monitored population-wide Bmal1–Luc signals (Fig. 6a ). We found that, independently of initial densities, as the populations reach confluency, the phases and amplitudes become stationary, showing 1:1 entrainment (Fig. 6b,c and Supplementary Fig. 5a ). During the initial transients, emerging circadian oscillations in non-confluent cells showed phases that were already stationary, at least once cell numbers were sufficiently high to obtain reliable signals. Fig. 6: Temperature cycles do not entrain circadian oscillators in dividing cells and proliferation genes are associated with tissue-specific circadian phases. a , Corrected and averaged Bmal1–Luc intensities and 95% confidence intervals ( n = 6) from U2OS-Dual cells plated at different initial densities and subjected to temperature entrainment (top).
b , Acrophases (times of the local peaks in luminescence) of the Bmal1–Luc signal as a function of the reporter intensity for the cells in a . Loess fit (black) and 95% confidence intervals (grey). c , Amplitude (log of peak to mean ratio) of the Bmal1–Luc oscillations as a function of the reporter intensity for the cells in a . Loess fit (black) and 95% confidence intervals (grey). d , e , Circadian phases ( d ) and amplitudes ( e ) of different mouse tissues obtained in ref. 27 , relative to liver. f , Expression levels of genes positively associated with phases from d and linked to cell proliferation. g , h , Correlations between Mki67 ( g ) and Myc ( h ) mRNA expression and circadian phases across mouse tissues (Pearson’s correlation, two-sided P values from a t -distribution with n − 2 degrees of freedom). i , Expression levels of genes negatively associated with amplitudes and linked to nervous system development. Full size image As the cell confluence increases, the proportion of cells that stop cycling (exit to G0) increases 25 . We therefore hypothesized that the observed phase and amplitude profiles in Bmal1–Luc signals (Fig. 6b,c ) originate from a mixture of two populations: an increasing population of non-dividing cells (G0) showing ‘normal’ entrainment properties, and dividing cells. We considered three scenarios for the dividing cells: the circadian oscillators in dividing cells adopt the same circadian profile as non-dividing entrained cells; are not entrained; or are entrained, but with a different phase compared to non-dividing cells (Supplementary Fig. 5b ). These scenarios can be distinguished by the predicted phase and amplitude profiles (Supplementary Fig. 5c ). Clearly, the measured profiles for U2OS-Dual cells favoured the second scenario, suggesting that circadian oscillators in dividing cells do not entrain to the applied temperature cycles. Proliferation is associated with tissue-specific phases The above findings suggest that phases or amplitudes of circadian clocks in organs in vivo might be influenced by the proliferation state of cells in the tissue. To test this, we investigated circadian clock parameters in different mouse tissues using a study of mRNA levels in 12 adult (6-week-old males) mouse tissues, which revealed that clock phases span 1.5 h between the earliest and latest tissues (Fig. 6d ) 26 , 27 , an effect that is considered large in chronobiology as even period phenotypes of core clock genes are often smaller 2 , 28 , 29 . We noticed that the mean mRNA levels across tissues of many genes correlated with the phase offsets (Supplementary Table 1 ). However, gene functions related to cell proliferation stood out as the most strongly enriched (Fig. 6f and Supplementary Table 1 ). Among those genes, the levels of known markers of cell proliferation such as Mki67 or Myc were strongly correlated with the phase offsets (Fig. 6g,h and Supplementary Table 1 ). Amplitudes, on the other hand, were not correlated with proliferation genes, but rather with neuronal specific genes, as expected owing to the damped rhythms present in those tissues (Fig. 6i and Supplementary Table 1 ) 26 . Thus, this analysis suggests that the differences in basal proliferation levels observed in normal tissues might underlie the dephasing of the circadian clock, suggesting a physiological role for the interaction of the cell cycle and circadian clocks. 
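The tissue-level association in Fig. 6g,h reduces to a Pearson correlation with a two-sided P value from a t-distribution with n − 2 degrees of freedom; a sketch on illustrative values (not the published data):

```python
# Pearson correlation between a proliferation marker's mean expression and
# the circadian phase offset across tissues; scipy.stats.pearsonr returns
# the two-sided t-based P value used in Fig. 6g,h.
import numpy as np
from scipy import stats

phase_offset = np.array([0.0, 0.2, 0.3, 0.5, 0.6, 0.7, 0.9, 1.0, 1.1, 1.2, 1.4, 1.5])
mki67 = 2.0 + 1.5 * phase_offset + np.random.default_rng(4).normal(0, 0.3, 12)

r, p = stats.pearsonr(phase_offset, mki67)
print(f"r = {r:.2f}, P = {p:.2g}")
```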
Discussion A goal in quantitative single-cell biology is to obtain data-driven and dynamical models of biological phenomena in low dimensions. In practice, the heterogeneity and complex physics underlying the emergence of biological function in non-equilibrium living systems, as well as the sparseness of available measurements, pose challenges. Here, we studied a system of two coupled biological oscillators, sufficiently simple to allow data-driven model identification, yet complex enough to exhibit qualitatively distinct dynamics (that is, p:q states and quasiperiodicity). In the coupled cell-cycle and circadian oscillator system, phase-locked states different from 1:1 have been observed 10 . While multiple attractors, notably 1:1 and 3:2, were found in mouse NIH3T3 cells under transient dexamethasone stimulation 6 , here we report 1:2 states for long cell-cycle times under steady, unstimulated conditions. Unlike other deterministically predicted states, the 1:2 state was sufficiently robust to be observed in some cells. In fact, we found that noise extended the range of the 1:1 tongue, but asymmetrically (that is, towards decreased cell-cycle periods). This may be reminiscent of generalized Poincaré oscillators showing that the entrainment range is broader for limit cycles with low relaxation rates 30 . Indeed, noise could decrease relaxation rates and thereby broaden Arnold tongues. In addition, for certain cell-cycle periods, we observed the superposition of multiple states, both in the data and in the stochastic simulations, but not in the deterministic analysis (Fig. 4 , see notably T = 12 h and T = 36 h). This is reminiscent of mode hopping as described in the context of an oscillatory gene circuit underlying inflammatory responses 31 ; however, here the corresponding Arnold tongues do not overlap in the range of the biologically relevant coupling strength ( K = 1, Fig. 3d ). While we focused on the emergent dynamics in the coupled oscillator system, considerations of possible biological mechanisms are relevant for follow-up biochemical analyses. How chromosome condensation or nuclear envelope breakdown may influence the circadian clock phase progression via either transcriptional shutdown or displacement of chromatin-bound circadian repressors, respectively, was discussed previously 5 . For example, Rev-erbα transcription being so tightly locked to cell divisions (the peak accumulation of the reporter occurs 5 h after mitosis) could reflect the sudden derepression of its promoter, due to a displaced CRYPTOCHROME1 (CRY1)-containing repressor complex following nuclear envelope breakdown 32 . In turn, REV-ERBα accumulation influences the clock phase by binding to promoters of multiple core clock components, including Cry1 33 , 34 . More specific transcriptional activities could also play a key role in coupling the cell and circadian cycles. In fact, the circadian oscillator is exquisitely sensitive to numerous signalling pathways, impinging on the clock by transcriptional induction of Period genes, which thus provides an efficient synchronization method 22 . Similarly, entrainment via temperature cycles also converges onto Period gene transcription 24 . However, we are not aware of cell-cycle-dependent transcriptional regulators, such as E2F factors, targeting clock components such as the Period genes.
Finally, since the regulation of protein stability is important for clock function 35 , it is possible that phosphorylation-controlled proteolytic activities driving the cell cycle could target circadian phase regulators 36 , thereby mediating the observed coupling. In mammals, the circadian oscillator in the suprachiasmatic nucleus is the pacemaker for the entire organism 37 , driving 24 h rhythms in activity, feeding, body temperature and hormone levels. In particular, the suprachiasmatic nucleus can synchronize peripheral cell-autonomous circadian clocks located within organs across the body 38 . Consistent with theory 39 , in a physiological context of entrainment, the coupling of the cell cycle with the circadian clock can induce proliferation-dependent phase-shifts, which we observed. Such phase-shifts could reflect a homogeneous behaviour of all cells, or it could reflect heterogeneity of cell proliferation states, possibly leading to wave propagation. The phase-shifts we observed in tissues were associated with low proliferation (that is, non-pathological states of tissue homeostasis and cell renewal). For example, the liver and the adrenal gland showed a phase advance compared to fully quiescent tissues such as the brain. When cell proliferation is abnormally high such as in cancer, circadian clocks are often severely damped 40 . While this absence of a robust circadian rhythm in malignant tissue states may reflect non-functional circadian oscillators due to mutations in clock genes 41 , the damped rhythms may also reflect circadian desynchrony of otherwise functional circadian oscillators. Such desynchrony would readily follow from the coupling between the cell-cycle and circadian oscillators we highlight here, in the presence of non-coherent cell-cycle progression. Methodologically, the new approach to reconstruct a dynamical model for the coupled oscillator system has significant advantages over previous methods; notably, strong assumptions such as the sparse and localized coupling are dispensable 5 . Compared with generic model identification techniques 42 , our approach models the raw data and its noise structure explicitly. In the future, such data-driven identification of dynamical models might reveal dynamical instabilities underlying ordered states in spatially extended systems, as occurring, for instance, during somitogenesis 43 . Methods Cell lines All cell lines (U2OS-Dual, NIH3T3-Bmal1–Luc and U2OS-PGK–Luc) were maintained in a humidified incubator at 37 °C with 5% CO 2 using DMEM cell culture medium supplemented with 10% fetal bovine serum (FBS) and 1% penicillin–streptomycin–glutamine (PSG). One day before luminescence or fluorescence acquisitions, we replaced DMEM with FluoroBrite DMEM media supplemented with 10% FBS and 1% PSG. NIH3T3 perturbation experiments were generated in Bieler et al. 5 . Briefly, they correspond to temperature changes (34, 37 and 40 °C), treatment with CDK1 (RO-3306, Sigma-Aldrich) and CDK1/2 (NU-6102, Calbiochem) inhibitors at 1, 5, 7 and 10 µM (CDK1in-[1,5,7,10] and CDK2in-[1,5,7,10]), and short hairpin RNA-mediated knockdown of Cry2 . Fluorescent time-lapse microscopy Time-lapse fluorescent microscopy for U2OS-Dual cells was performed at the Biomolecular Screening Facility (Swiss Federal Institute of Technology (EPFL)) using an InCell Analyzer 2200 (GE Healthcare). Experiments were performed at different temperatures (34 °C, 37 °C or 40 °C) with a humidity and CO 2 (5%) control system. 
We used 100 ms excitation at 513/17 nm and emission at 548/22 nm to record the YFP channel. Cells were recorded by acquiring one field of view per well in a 96-well black plate (GE Healthcare). We used our previously developed semi-automated pipeline for segmentation and tracking of individual cells 5 . In total, traces from n = 551 U2OS cells were obtained (typically 50 cells are obtained per video). NIH3T3 single-cell traces were reanalysed from previous work 5 ; here, we used n = 2,504 of those time traces. In all cases (NIH3T3 and U2OS), we followed several quality control metrics from ref. 5 . Briefly, we discarded all traces that left the field of view at some point during the acquisition. We also visually inspected all traces, using a custom-made Matlab tool, to remove traces with problematic segmentation and tracking. In addition, we kept only traces with significant circadian amplitude (peak height >0.25, rescaled signals, Supplementary Information ). To minimize boundary artefacts, typically, only traces with at least two full cell cycles were kept. The numbers of cells used for specific analyses, including sub-selections of traces based on the cell-cycle intervals, are indicated in the figure captions. Inferring the phase dynamics of two biological oscillators Denote by D the entire set of single-cell traces, comprising temporal intensity measurements (Δ t = 30 min) from all fluorescent traces, and by Λ the set of model parameters, comprising the gridded coupling function F ij . Note that all parameters are shared by all cells in D . To reconstruct the phase dynamics of our model, we seek to maximize the likelihood of the data \(\mathcal{L}(\mathbf{\Lambda} \mid D)\) ; that is, we solve: $$\mathbf{\Lambda}^{*} = \underset{\mathbf{\Lambda}}{\operatorname{argmax}}\,\mathcal{L}(\mathbf{\Lambda} \mid D)$$ In practice, we used an expectation–maximization algorithm, by iteratively optimizing the function Q ( Λ , Λ ′) over its first argument, where Q can be written as follows: $$Q(\mathbf{\Lambda}, \mathbf{\Lambda}^{\prime}) = E\left[\log p(D, \mathbf{X} \mid \mathbf{\Lambda}) \mid D, \mathbf{\Lambda}^{\prime}\right]$$ That is, Q ( Λ , Λ ′) corresponds to the expected value of the log-likelihood of the data with respect to the posterior probabilities of the hidden phases X (latent variables), computed using the current parameter Λ ′. This process guarantees a monotonic convergence of the log-likelihood, although a global maximum is not necessarily reached 44 . To control for the many parameters F ij , we added regularization constraints for both smoothness and sparsity: $$Q_p(\mathbf{\Lambda}, \mathbf{\Lambda}^{\prime}) = Q(\mathbf{\Lambda}, \mathbf{\Lambda}^{\prime}) - \lambda_1 \sum_{ij} \lVert \nabla F_{ij} \rVert^2 - \lambda_2 \sum_{ij} F_{ij}^2$$ This expression is also guaranteed to converge 45 . Details about the optimization method, choice of the regularization parameters and computation of the phase posteriors using an HMM are provided in the Supplementary Information . A schematic numerical sketch of this regularized update appears below. Long-term temperature entrainment and luminescence recording We performed long-term temperature entrainment experiments using a Tecan plate reader Infinite F200 pro with CO 2 and temperature modules. One day before starting the experiment, serial dilutions ranging from 40,000 to 2,500 cells were seeded in 96-well white flat-bottom plates (Costar 3917).
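As a numerical illustration of the regularized M-step in the phase-inference procedure described above, the following minimal Python sketch performs one gradient-ascent update of a gridded coupling function F under the penalized objective Q_p. The gradient of Q with respect to F is a random placeholder standing in for the quantity that would be computed from the HMM phase posteriors, so this is a schematic of the update rule only, not the study’s implementation.

# One regularized update step: Q_p = Q - lambda1 * sum ||grad F||^2 - lambda2 * sum F^2.
# dQ_dF is a hypothetical placeholder for the expectation-step output (HMM posteriors).
import numpy as np

n_grid = 24
F = np.zeros((n_grid, n_grid))                         # gridded coupling function F_ij
dQ_dF = np.random.default_rng(1).normal(size=F.shape)  # placeholder gradient of Q

lam1, lam2, step = 1.0, 0.1, 0.05
# The gradient of -lambda1 * sum ||grad F||^2 is 2 * lambda1 * (discrete Laplacian of F);
# periodic boundaries are used because both oscillator phases are circular variables.
laplacian = (np.roll(F, 1, 0) + np.roll(F, -1, 0) +
             np.roll(F, 1, 1) + np.roll(F, -1, 1) - 4.0 * F)
grad_Qp = dQ_dF + 2.0 * lam1 * laplacian - 2.0 * lam2 * F
F += step * grad_Qp                                    # one gradient-ascent M-step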
To prevent medium evaporation, all wells were filled with 300 µl of medium composed of FluoroBrite, 10% FBS, 1% PSG and 100 nM d -luciferin (NanoLight Technology) and covered with a sealing tape (Costar 6524). We set up temperature entrainment using a stepwise increase (or decrease) of 0.5 °C every 2 h to produce temperature-oscillating profiles going from 35.5 °C to 38.5 °C and back to 35.5 °C again over a period of 24 h. Intensities from all wells were recorded every 10 min with an integration time of 5,000 ms. Since temperature impacts the enzymatic activity of the luciferase 46 , we corrected the signal for this systematic effect ( Supplementary Information ). Association between gene expression and phase in tissues We used the average gene expression obtained from a selected set of 12 adult (6-week-old males) mouse tissues from the Zhang et al. dataset (GEO accession GSE54650 ) 26 . For this analysis, we estimated the Pearson correlation between the averaged gene expression and the circadian tissue phases or amplitudes reported in ref. 27 . We selected the top 200 genes positively or negatively associated with either the phases or the amplitudes for gene ontology analysis 47 (Supplementary Table 1 ). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data supporting the figures and other findings of this study are available from the corresponding author on request. Code availability The code is available at . | Scientists from EPFL's Institute of Bioengineering have discovered that the circadian clock and the cell-cycle are, in fact, synchronized. Nothing in biology is static; everything is fluid, dynamic and ever-moving. Often, this movement occurs in repeating patterns—regular, measurable cycles that tick just like "clocks." Two of the most important such cycles are the circadian clock, which regulates the sleep/awake rhythm, and the cell cycle, which regulates the growth, life and death of virtually every cell in the body. Considering things like sleep abnormalities, cancer, aging and other related problems, it's not hard to see why both of these cycles have gained enormous interest from researchers. One of the big questions in the field has been that of synchronization, a phenomenon first observed by Dutch physicist—and clock-maker—Christian Huygens. In synchronization, the rhythms (phases) of two oscillators match up in lockstep. Naturally, the circadian clock takes up a daily rhythm, but it turns out that also the cell cycle in many systems involves a similar time scale. In addition, there is some evidence that suggests that both clocks might actually influence each other. Now, scientists from the lab of Felix Naef have found that the circadian and cell-cycle clocks are actually synchronized. The breakthrough study is published in Nature Physics, and is also featured on the journal's News and Views section. To carry out the study, the scientists developed a "small data" methodology to build and identify a mathematical model of the coupled clocks from time-lapse movies of thousands of single cells from mice and humans. The model allowed them to predict and measure phase shifts when the two clocks were synchronized in a 1:1 and 1:2 pattern, and then look at how system noise influences this synchronization. Finally, the researchers investigated as well how it might be modeled in a randomized way ("stochastically"), which would better capture what happens in real cells. 
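For reference, the stepwise entrainment program described in the Methods above can be generated in a few lines; the Python sketch below builds one 24 h cycle of 0.5 °C steps held for 2 h each, rising from 35.5 °C to 38.5 °C and back. Only the step size, dwell time and range are taken from the protocol; the starting point of the cycle is an assumption made for illustration.

# One 24 h entrainment cycle: 0.5 degC steps every 2 h, 35.5 -> 38.5 -> 35.5 degC.
import numpy as np

up = np.arange(35.5, 38.5, 0.5)       # 35.5, 36.0, ..., 38.0 (rising plateaus)
down = np.arange(38.5, 35.5, -0.5)    # 38.5, 38.0, ..., 36.0 (falling plateaus)
levels = np.concatenate([up, down])   # 12 plateaus x 2 h = one 24 h period
for start_h, temp in zip(np.arange(0, 24, 2), levels):
    print(f"{start_h:4.1f} h -> {temp:.1f} degC")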
The synchronization was also found to be remarkably robust against temperature changes, which are known to affect the cell-cycle clock by changing the rhythm of cell divisions. The team found that this circadian–cell-cycle synchronization is common across different species including humans, suggesting a fundamental biological mechanism behind it. "This interaction might play a physiological role," says Felix Naef. "It can explain why different body tissues have their clocks set at slightly different times, a bit like world time zone wall clocks in an airport." The implications of the study are significant, and Nature's News & Views describes it as "a new chapter in the story of how non-linear coupling mechanisms can be of fundamental importance to our understanding of living systems." | 10.1038/s41567-019-0598-1
Medicine | Antibodies improve in quality for months after COVID-19 vaccination | Wooseob Kim et al, Germinal centre-driven maturation of B cell response to mRNA vaccination, Nature (2022). DOI: 10.1038/s41586-022-04527-1 Journal information: Nature | http://dx.doi.org/10.1038/s41586-022-04527-1 | https://medicalxpress.com/news/2022-02-antibodies-quality-months-covid-vaccination.html | Abstract Germinal centres (GC) are lymphoid structures in which B cells acquire affinity-enhancing somatic hypermutations (SHM), with surviving clones differentiating into memory B cells (MBCs) and long-lived bone marrow plasma cells 1 , 2 , 3 , 4 , 5 (BMPCs). SARS-CoV-2 mRNA vaccination induces a persistent GC response that lasts for at least six months in humans 6 , 7 , 8 . The fate of responding GC B cells as well as the functional consequences of such persistence remain unknown. Here, we detected SARS-CoV-2 spike protein-specific MBCs in 42 individuals who had received two doses of the SARS-CoV-2 mRNA vaccine BNT162b2 six months earlier. Spike-specific IgG-secreting BMPCs were detected in 9 out of 11 participants. Using a combined approach of sequencing the B cell receptors of responding blood plasmablasts and MBCs, lymph node GC B cells and plasma cells and BMPCs from eight individuals, and expression of the corresponding monoclonal antibodies, we tracked the evolution of 1,540 spike-specific B cell clones. On average, early blood spike-specific plasmablasts exhibited the lowest SHM frequencies. By contrast, SHM frequencies of spike-specific GC B cells increased by 3.5-fold within six months after vaccination. Spike-specific MBCs and BMPCs accumulated high levels of SHM, which corresponded with enhanced anti-spike antibody avidity in blood and enhanced affinity as well as neutralization capacity of BMPC-derived monoclonal antibodies. We report how the notable persistence of the GC reaction induced by SARS-CoV-2 mRNA vaccination in humans culminates in affinity-matured long-term antibody responses that potently neutralize the virus. Main Vaccination of humans with the Pfizer-BioNTech SARS-CoV-2 mRNA vaccine BNT162b2 induces a robust but transient circulating plasmablast response and a persistent germinal centre (GC) reaction in the draining lymph nodes 6 . Whether these persistent GC responses lead to the generation of affinity-matured memory B cells (MBCs) and long-lived bone marrow-resident plasma cells (BMPCs) remains unclear. To address this question, we analysed long-term B cell responses in the participants enrolled in our previously described observational study of 43 healthy participants (13 with a history of SARS-CoV-2 infection) who received two doses of BNT162b2 6 , 7 (Extended Data Table 1 ). Long-term blood samples ( n = 42) and fine needle aspirates (FNAs) of the draining axillary lymph nodes ( n = 15) were collected 29 weeks after vaccination (Fig. 1a ). Bone marrow aspirates were collected 29 ( n = 11) and 40 weeks ( n = 2) after vaccination, with the 40-week time point used only for B cell receptor (BCR) repertoire profiling (Fig. 1a ). None of the participants who contributed FNA or bone marrow specimens had a history of SARS-CoV-2 infection. Fig. 1: Persistence of humoral immune responses to SARS-CoV-2 mRNA vaccination. a , Forty-three participants (13 with a history of SARS-CoV-2 infection) were enrolled, followed by vaccination. Blood ( n = 42) was collected before and at indicated time points after vaccination.
For 15 participants without infection history, aspirates of draining axillary lymph nodes were collected at indicated time points after vaccination. For 11 participants without infection history, aspirates of bone marrow were collected at 29 and 40 weeks after vaccination. b , Representative flow cytometry plots of GC B cells (CD19 + CD3 − IgD low BCL6 + CD38 int ) and S-binding GC B cells in lymph nodes 29 weeks after vaccination. The percentage of cells in the bound region is indicated. c , Kinetics of total (left) and S-binding GC B cells (right) as gated in b . d , Representative ELISpot wells coated with the indicated antigens, bovine serum albumin or anti-immunoglobulin and developed in blue (IgG) and red (IgA) after plating the indicated numbers of BMPCs. e , Frequencies of IgG-secreting BMPCs specific for the indicated antigens 29 weeks after vaccination. Symbols at each time point represent one sample in c ( n = 15) and e ( n = 11). f , Plasma anti-S IgG titres measured by ELISA in participants without (red, n = 29) and with (black, n = 9) infection history. Horizontal lines and numbers indicate geometric means. Results are from one experiment performed in duplicate. g , Representative flow cytometry plot of S-binding MBCs (CD20 + CD38 − IgD low CD19 + CD3 − ) in blood 29 weeks after vaccination. h , Frequencies of S-specific MBCs in participants without (red, n = 29) and with (black, n = 13) infection history as gated in g . Horizontal lines indicate medians in e and h . LoD, limit of detection. Full size image GC B cells were detected in FNAs from all 15 participants (Fig. 1b, c , left, Extended Data Fig. 1a , Extended Data Table 2 ). All 14 participants with FNAs collected prior to week 29 generated spike (S)-binding GC B cell responses of varying magnitudes (Fig. 1b, c , right, Extended Data Table 2 ). Notably, S-binding GC B cells were detected in FNAs from 10 out of 15 participants at week 29 (Fig. 1b, c , right, Extended Data Table 2 ), demonstrating that two-thirds of the sampled participants maintained an antigen-specific GC B cell response for at least six months after vaccination. S-binding lymph node plasma cells (LNPCs) were also detected in FNAs from all 15 participants and exhibited similar dynamics to S-binding GC B cells, albeit at lower frequencies within the total B cell population (Extended Data Fig. 1a, b , Extended Data Table 2 ). None of the FNAs demonstrated significant contamination with peripheral blood, based on the nearly complete absence of myeloid cells (Extended Data Table 2 ). Frequencies of BMPCs secreting IgG or IgA antibodies against either the 2019–2020 inactivated influenza virus vaccine, the tetanus–diphtheria vaccine or S protein were assessed in bone marrow aspirates collected 29 weeks after vaccination by enzyme-linked immunosorbent spot assay (ELISpot) (Fig. 1d, e , Extended Data Fig. 1c ). Influenza and tetanus–diphtheria vaccine-specific IgG-secreting BMPCs were detectable (median frequencies of 1.4% and 0.15%, respectively) in all 11 participants (Fig. 1e ). S-binding IgG-secreting BMPCs were detected in 9 out of 11 participants (median frequency of 0.06%). IgA-secreting BMPCs specific to influenza vaccine were detected in 10 out of 11 participants, but IgA-secreting BMPCs directed against the tetanus–diphtheria vaccine and the S protein were largely below the limit of detection (Extended Data Fig. 1c ). All participants had detectable plasma anti-S IgG antibodies and circulating S-binding MBCs at the 29-week time point (Fig. 1f–h ).
Anti-S IgG titres at 29 weeks were higher than titres observed in a cohort of unvaccinated people who had recovered from SARS-CoV-2 measured 29 weeks after infection 9 , 10 , 11 (Extended Data Fig. 1d ). Vaccinated participants with a history of SARS-CoV-2 infection had significantly higher titres of anti-S IgG antibodies at 5 and 29 weeks compared with their naive counterparts 9 , 11 , 12 (Fig. 1f ). Similar trends were observed for plasma anti-S IgM and IgA antibodies (Extended Data Fig. 1e ). S-binding MBCs were detected in all participants, with a median frequency of 0.23% of total circulating B cells (Fig. 1g, h , Extended Data Fig. 1f ). To track S-specific B cell evolution and clonal distribution within blood, lymph node and bone marrow, we performed single-cell RNA sequencing (scRNA-seq) and concurrent BCR sequencing of immune cells from eight participants who contributed specimens from the three compartments. We first sorted plasmablasts from samples collected at their peak frequencies, one week after the second immunization 6 (Fig. 2a , top, Extended Data Fig. 2a ). We then investigated the dynamics of the immune response in draining axillary lymph nodes. Single-cell transcriptional analysis of lymph nodes revealed distinct immune cell populations, as previously described 13 , 14 , 15 , 16 (Fig. 2a , bottom left, Extended Data Fig. 2b, c , Extended Data Table 3 ). To further distinguish distinct B cell subsets in the lymph node, we performed unbiased secondary clustering of the B cell populations from the total cellular analysis (Fig. 2a , bottom right, Extended Data Fig. 2d, e , Extended Data Table 3 ). Around 40% and 7.9% of the B cells in the lymph node had GC B cell and LNPC transcriptomic profiles, respectively. Fig. 2: Identification of SARS-CoV-2 S-binding B cell clones in draining axillary lymph nodes. a , Uniform manifold approximation and projection (UMAP) showing scRNA-seq transcriptional clusters of total cells (left) and of B cells (right) from plasmablasts (PBs) sorted from PBMC (top) and from FNA of lymph nodes (bottom). Each dot represents a cell, coloured by phenotype as defined by transcriptomic profile. Total numbers of cells are at the top right corner. FDC, follicular dendritic cell; GC, GC B cell; Mo, monocyte; NK, natural killer cell; PB, plasmablast; pDC, plasmacytoid dendritic cell. b , Positive binding of recombinant monoclonal antibodies derived from GC B cells (blue) or LNPCs (green) to SARS-CoV-2 S measured by ELISA. Results are from one experiment performed in duplicate. Full size image We next generated recombinant monoclonal antibodies from expanded clones detected in FNA samples 7 and 15 weeks after vaccination, representing early and late time points. For two of the eight participants from whom the late point was unavailable due to insufficient specimens, we analysed two separate early time points: weeks five and seven for participant 02a, and weeks four and seven for participant 04. A total of 2,099 recombinant monoclonal antibodies were generated, of which 1,503 (71.6%) bound SARS-CoV-2 S in an enzyme-linked immunosorbent assay (ELISA) (Fig. 2b , Extended Data Table 4 ). In subsequent analyses, we included 37 previously identified S-binding monoclonal antibodies generated from GC B cells at week 4 from participants 07, 20, and 22 6 . Clonal relationships were computationally inferred using heavy chains from scRNA-seq BCR libraries (Extended Data Table 5 ), bulk-seq BCR libraries for GC B cells, LNPCs (Extended Data Fig. 
2g ) and BMPCs (Extended Data Table 5 ), as well as previously published bulk-seq BCR libraries of sorted plasmablasts and GC B cells 6 , and magnetically enriched IgD low activated B cells or MBCs from PBMC 17 . B cell clones with experimentally validated S-binding B cells were designated S-binding clones (Extended Data Fig. 2f ) and accounted for 43.1% and 64.4%, respectively of the single-cell profiled GC B cells and LNPCs (Extended Data Fig. 2h , Extended Data Table 3 ). B cells that were clonally related to S-binding B cells were also found in the plasmablast compartment in blood (6.7%) and the MBC compartment in lymph nodes (0.3%) (Extended Data Fig. 2h , Extended Data Table 3 ). B cell maturation in the germinal centre We analysed the proportion of S-binding GC B cells clonally related to week 4 circulating plasmablasts. The frequencies of plasmablast-related, S-binding GC B cells varied broadly among participants, ranging from 12.7% to 82.5% (Fig. 3a ). Consistent with our flow cytometry results (Fig. 1c ), GC B cells from long-lasting S-binding clones were observed for at least 29 weeks after vaccination (Extended Data Fig. 3a ). In addition, we detected the presence of clonally related MBCs in blood at 29 weeks after vaccination (Extended Data Fig. 3b ). S-binding GC B cells accumulated significantly higher levels of SHM compared to clonally related plasmablasts, and this difference increased over time (Fig. 3b ). We observed a 3.5-fold increase in SHM frequency among all S-binding GC B cells between weeks 4 and 29 (Fig. 3c , Extended Data Fig. 3c ). S-binding MBCs detected at 29 weeks after vaccination, however, had slightly lower SHM frequencies than their clonally related GC B cell counterparts (Extended Data Fig. 3d ). The relative proportion of S-binding GC B cells expressing BCR of IgA isotype increased in the lymph node over time (Extended Data Fig. 3e ). Clonal analysis revealed a high degree of overlap between S-binding GC and LNPC compartments (Fig. 3d ). Furthermore, SHM frequencies of both S-binding LNPCs and GC B cells increased over time at a very similar rate with small differences (Fig. 3e ) in contrast to those between S-binding plasmablast and GC B cells (Fig. 3b ). Fig. 3: Maturation of SARS-CoV-2 S-binding B cells in the lymph node. a , Circos diagrams showing clonal overlap between S-binding plasmablasts and GC B cells at indicated time points. Purple and grey chords correspond to clones spanning both compartments and clones spanning only the GC compartment, respectively. Percentages are of GC B cell clones related to plasmablasts at each time point. b , Nucleotide mutation frequency in the immunoglobulin heavy chain variable gene ( IGHV ) region for clonally related week-4 plasmablasts and GC B cells at weeks 4 ( n = 81), 5 ( n = 52), 7 ( n = 289), 15 ( n = 162) and 29 ( n = 47). c , IGHV nucleotide mutation frequency of S-binding GC B cells at weeks 4 ( n = 1,701), 5 ( n = 21,543), 7 ( n = 62,927), 15 ( n = 49,837) and 29 ( n = 3,314). Horizontal lines and numbers represent medians. P values were determined by Kruskal–Wallis test followed by Dunn’s multiple comparison test. d , Circos diagrams showing clonal overlap (purple) between S-binding GC B cells and LNPCs over combined time points. Percentages are of GC B cell clones overlapping with LNPCs or vice versa. Arc length corresponds to the number of BCR sequences and chord width corresponds to clone size in a and d . 
e , IGHV nucleotide mutation frequency of clonally related GC B cells and LNPCs at weeks 4 ( n = 48), 5 ( n = 224), 7 ( n = 877), 15 ( n = 449) and 29 ( n = 76). Each dot represents the median SHM frequency of a clone within the indicated compartment, and medians are presented on the top of each dataset in b and e . P values were determined by paired two-sided Mann–Whitney test and corrected for multiple testing using the Benjamini–Hochberg method in b and e . **** P < 0.0001. Full size image Affinity maturation of antibody response To determine whether the increase in SHM frequencies of S-specific GC B cells and LNPCs over time is reflected in increased circulating anti-S antibody binding affinity, we measured the avidity of plasma anti-S IgG. In participants without a history of SARS-CoV-2 infection, anti-S IgG avidity increased at 29 weeks compared with the 5-week time point. Of note, participants with a history of SARS-CoV-2 infection had similar plasma anti-S IgG avidity at 5 and 29 weeks after vaccination (Fig. 4a ). Consistently, SHM frequencies of S-binding LNPCs increased over time (Fig. 4b ). S-binding BMPCs from 29 and 40 weeks after vaccination exhibited a degree of SHM that was similar to that of LNPCs from 15 and 29 weeks after vaccination (Fig. 4b ) and significantly higher than any other S-binding B cell population except for MBCs (Extended Data Fig. 4a ). To understand the evolutionary trajectory of vaccine-induced B cell lineages, we analysed S-specific clones using a phylogenetic model tailored for BCR repertoires 18 . Consistent with their SHM frequencies (Fig. 4b ), plasmablasts tended to locate closer to the germline on the phylogenetic trees, whereas LNPCs and BMPCs tended to be evolutionarily more distant (Fig. 4c , Extended Data Fig. 4b ). In contrast to plasmablasts, which clustered to a separate branch of their own, BMPCs and LNPCs co-located on shared branches, suggesting a closer evolutionary relationship between BMPCs and LNPCs (Fig. 4c ). Together, these results support a model in which S-specific BMPCs are the products of affinity-matured, GC-derived LNPCs. Fig. 4: Evolution of B cell clones induced by SARS-CoV-2 vaccination. a , Avidity indices of plasma anti-S IgG between the indicated time points in participants without (red, n = 29) and with (black, n = 9) a history of SARS-CoV-2 infection. b , IGHV nucleotide mutation frequency of S-binding plasmablasts ( n = 2,735), LNPCs at weeks 4 ( n = 552), 5 ( n = 11,253), 7 ( n = 45,436), 15 ( n = 24,538) and 29 ( n = 571), and BMPCs ( n = 47). Horizontal lines and numbers represent median values. P values were determined by Kruskal–Wallis test followed by Dunn’s multiple comparison test. c , Representative phylogenetic trees showing inferred evolutionary relationships between plasmablasts, LNPCs and BMPCs. Horizontal branch length represents the expected number of substitutions per codon in V-region genes, corresponding to the scale bar. Clone IDs are displayed near the root of the trees. Asterisks denote neutralizing monoclonal antibodies. d , Neutralizing activity of clonally related plasmablast- and BMPC-derived monoclonal antibodies ( n = 8) against SARS-CoV-2 D614G strain. Dotted line indicates detection limit. Results are from one experiment with duplicates in a and d . e , Equilibrium dissociation constant ( K D ) of neutralizing clone-derived Fabs ( n = 8) interacting with immobilized S protein measured by BLI. 
Symbols indicate K D values of clonally related, plasmablast (red)- and BMPC (black)-derived Fabs. Results are from at least two replicates in e . P values were determined by two-tailed Wilcoxon matched-pairs signed rank test in a , d and e . NS, P > 0.9999, **** P < 0.0001. Full size image We next expressed monoclonal antibodies derived from clonally related plasmablasts and BMPCs and their corresponding monomeric antigen-binding fragments (Fabs) (Extended Data Table 6 ). We then examined binding affinity and in vitro neutralization capacity using biolayer interferometry (BLI) and high-throughput GFP-reduction neutralization test 19 , respectively. BMPC-derived Fabs exhibited significantly higher binding affinity against S protein compared with plasmablast-derived Fabs (Extended Data Fig. 4c, d ). Of the 21 S-specific clones we detected among BMPCs, 7 potently neutralized the SARS-CoV-2 D614G strain (Extended Data Fig. 4e ). These BMPC-derived monoclonal antibodies showed higher neutralizing potency than their clonally related, plasmablast-derived counterparts (Fig. 4d ), consistent with the significantly increased binding affinity of the BMPC-derived Fabs to S protein (Fig. 4e ). Overall, the increased frequency of SHM observed over time and the correlated functional improvements in neutralization suggest that the GC reactions induced by SARS-CoV-2 mRNA vaccination facilitate the development of affinity-matured BMPCs. Discussion This study evaluated whether the persistent GC response induced by SARS-CoV-2 mRNA-based vaccines in humans 6 results in the generation of affinity-matured MBCs and BMPCs 1 , 3 , 13 , 20 , 21 . The two-dose series of BNT162b2 induced a robust S-binding GC B cell response that lasted for at least 29 weeks after vaccination. The results of such persistent GC reactions were evident in the form of circulating S-binding MBCs in all participants and S-specific BMPCs 29 weeks after vaccination in all but two of the sampled participants. It is likely that S-specific BMPCs in those two participants are present but below the assay detection limit. Longitudinal tracking of more than 1,500 vaccine-induced B cell clones revealed the gradual accumulation of SHM and isotype switching to IgA within the GC B cell compartment. We also show that GC B cells differentiate into affinity-matured LNPCs within the lymph node, with some of these cells potentially migrating to the bone marrow where they establish long-term residence. The enhanced maturity of the secreted antibodies was reflected in the significantly increased avidity of circulating anti-S IgG antibodies over time. It is also evident from increased affinity of BMPC-derived monoclonal antibodies detected six months after vaccination in comparison to that of their corresponding plasmablast-derived monoclonal antibodies. Our data corroborate multiple reports demonstrating the maturation of circulating MBC responses after SARS-CoV-2 mRNA vaccination in humans 9 , 10 , 12 , 22 , 23 , 24 . This study shows that a persistent vaccine-induced GC response in humans culminates in the induction of affinity-matured, antigen-specific BMPCs. Notably, none of the 11 bone marrow specimens came from a participant with a history of SARS-CoV-2 infection. An intriguing finding in our study is that the S-specific BMPCs detected more than six months after vaccination exhibited high SHM frequencies relative to other B cell compartments. These data corroborate similar observations made in the mouse model 25 , 26 . 
The mouse data led to the proposal that there is a division of labour between MBCs and long-lived BMPCs 27 , 28 . In that model, BMPCs secrete highly specific, high-affinity antibodies that provide the first layer of protection against the invading pathogen upon re-exposure, whereas MBCs would only be engaged in the event that the pathogen is not fully neutralized by BMPC-derived antibodies. Consistent with this notion, multiple reports have documented the evolution of circulating MBCs induced by SARS-CoV-2 mRNA vaccination in humans 9 , 10 , 12 , 23 . These reports have shown that not only did the frequency of circulating S-binding MBCs increase over time, but their ability to recognize S proteins from emerging SARS-CoV-2 variants seems to have expanded as well 22 , 23 . These data indicate an important role for affinity maturation of responding B cell clones beyond increasing binding affinity to the immunizing antigen. Our study raises a number of important questions that will need to be addressed in future studies concerning the effects of an additional homologous or heterologous immunization on the dynamics and products of ongoing GCs, particularly with respect to the breadth of induced B cell responses. It also remains to be addressed whether the IgA + GC B cell compartment induced by this systemic immunization can give rise to long-term IgA + MBCs and BMPCs. Overall, our data demonstrate the remarkable capacity of mRNA-based vaccines to induce robust and persistent GC reactions that culminate in affinity-matured MBC and BMPC populations. Methods Sample collection, preparation and storage All studies were approved by the Institutional Review Board of Washington University in St Louis. Written consent was obtained from all participants. Forty-three healthy volunteers were enrolled, of whom 13 had a history of confirmed SARS-CoV-2 infection (Extended Data Table 1 ). Fifteen out of 43 healthy participants provided FNAs of draining axillary lymph nodes. In 6 out of the 15 participants, a second draining lymph node was identified and sampled following secondary immunization. One participant (15) received the boost vaccination in the contralateral arm; draining lymph nodes were identified and sampled on both sides. Eleven out of 43 healthy participants provided bone marrow aspirates. Forty-eight participants who had recovered from mild SARS-CoV-2 infection but had not been vaccinated within 7 months of illness have been described previously 21 . Peripheral blood samples were collected in EDTA tubes, and PBMCs were enriched by density gradient centrifugation over Ficoll-Paque PLUS (Cytiva) or Lymphopure (BioLegend). The residual red blood cells were lysed with ammonium chloride lysis buffer, and cells were immediately used or cryopreserved in 10% dimethyl sulfoxide in fetal bovine serum (FBS). Ultrasound-guided FNA of draining axillary lymph nodes was performed by a radiologist or a qualified physician’s assistant under the supervision of a radiologist. Scans were performed with a commercially available ultrasound unit (Logiq E10, General Electric) using an L2–9 linear array transducer with transmit frequencies of 7, 8, and 9 MHz or an L6–15 linear array transducer with transmit frequencies of 10, 12, and 15 MHz. Lymph node dimensions and cortical thickness were measured, and the presence and degree of cortical vascularity and location of the lymph node relative to the axillary vein were determined before each FNA.
For each FNA sample, six passes were made under continuous real-time ultrasound guidance using 25-gauge needles, each of which was flushed with 3 ml of RPMI 1640 supplemented with 10% FBS and 100 U ml −1 penicillin-streptomycin, followed by three 1-ml rinses. Red blood cells were lysed with ammonium chloride buffer (Lonza), washed with washing buffer (phosphate-buffered saline supplemented with 2% FBS and 2 mM EDTA), and immediately used or cryopreserved in 10% dimethyl sulfoxide in FBS. Participants reported no adverse effects from phlebotomies or serial FNAs. Bone marrow aspirates of approximately 30 ml were collected in EDTA tubes from the iliac crest. Bone marrow mononuclear cells (BMMCs) were enriched by density gradient centrifugation over Ficoll-Paque PLUS, and then the remaining red blood cells were lysed with ammonium chloride buffer (Lonza) and washed with washing buffer. BMPCs were enriched from BMMCs using EasySep Human CD138 Positive Selection Kit II (StemCell Technologies) and immediately used for ELISpot or cryopreserved in 10% dimethyl sulfoxide in FBS. Antigens Recombinant soluble S protein derived from SARS-CoV-2 was expressed as previously described 29 . In brief, a mammalian-cell codon-optimized nucleotide sequence coding for the soluble version of S (GenBank: MN908947.3, amino acids 1–1213), including a C-terminal thrombin cleavage site, T4 foldon trimerization domain and hexahistidine tag, was cloned into the mammalian expression vector pCAGGS. The S protein sequence was modified to remove the polybasic cleavage site (RRAR to A) and two stabilizing mutations were introduced (K986P and V987P, wild-type numbering). Recombinant proteins were produced in Expi293F cells (Thermo Fisher Scientific) by transfection with purified plasmid using the ExpiFectamine 293 Transfection Kit (Thermo Fisher Scientific). Supernatants from transfected cells were collected 3 days after transfection, and recombinant proteins were purified using Ni-NTA agarose (Thermo Fisher Scientific), then buffer-exchanged into PBS and concentrated using Amicon Ultra centrifugal filters (MilliporeSigma). For flow cytometry staining, recombinant S was labeled with Alexa Fluor 647-NHS ester or biotinylated using the EZ-Link Micro NHS-PEG4-Biotinylation Kit (Thermo Fisher Scientific); excess Alexa Fluor 647 and biotin were removed using 7-kDa Zeba desalting columns (Thermo Fisher Scientific). For expression of biotinylated SARS-CoV-2 S Avitag, the CDS of the pCAGGS vector containing recombinant soluble SARS-CoV-2 S protein was modified to encode a 3′ Avitag insert after the 6×His tag (5′-His tag-GGCTCCGGGCTGAACGACATCTTCGAAGCCCAGAAGATTGAGTGGCATGAG-Stop-3′; HHHHHHGSGLNDIFEAQKIEWHE-) using inverse PCR mutagenesis, as described previously 30 . Protein expression and purification of SARS-CoV-2 S-Avitag were performed using the same methods as above. Immediately after purification, site-specific biotinylation was performed following Avidity’s recommendations. Specifically, SARS-CoV-2 S-Avitag substrate at 40 μM was incubated with 15 μg ml −1 BirA enzyme in a 0.05 M bicine buffer at pH 8.3 containing 10 mM ATP, 10 mM MgOAc and 50 μM biotin, and the reaction was performed at 30 °C for 1 h. The protein was then concentrated and buffer exchanged with PBS using a 100-kDa Amicon Ultra centrifugal filter (MilliporeSigma). Flow cytometry and cell sorting Staining for flow cytometry analysis and sorting was performed using freshly isolated or cryo-preserved PBMCs or FNAs.
For FNA staining, cells were incubated for 30 min on ice with biotinylated and Alexa Fluor 647-conjugated recombinant soluble S and PD-1-BB515 (EH12.1, BD Horizon, 1:100) in 2% FBS and 2 mM EDTA in PBS (P2), washed twice, then stained for 30 min on ice with IgG-BV480 (goat polyclonal, Jackson ImmunoResearch, 1:100), IgA-FITC (M24A, Millipore, 1:500), CD45-A532 (HI30, Thermo, 1:50), CD38-BB700 (HIT2, BD Horizon, 1:500), CD20-Pacific Blue (2H7, 1:400), CD27-BV510 (O323, 1:50), CD8-BV570 (RPA-T8, 1:200), IgM-BV605 (MHM-88, 1:100), HLA-DR-BV650 (L243, 1:100), CD19-BV750 (HIB19, 1:100), CXCR5-PE-Dazzle 594 (J252D4, 1:50), IgD-PE-Cy5 (IA6-2, 1:200), CD14-PerCP (HCD14, 1:50), CD71-PE-Cy7 (CY1G4, 1:400), CD4-Spark685 (SK3, 1:200), streptavidin-APC-Fire750, CD3-APC-Fire810 (SK7, 1:50) and Zombie NIR (all BioLegend) diluted in Brilliant Staining buffer (BD Horizon). Cells were washed twice with P2, fixed for 1 h at 25 °C using the True Nuclear fixation kit (BioLegend), washed twice with True Nuclear Permeabilization/Wash buffer, stained with FOXP3-BV421 (206D, BioLegend, 1:15), Ki-67-BV711 (Ki-67, BioLegend, 1:200), T-bet-BV785 (4B10, BioLegend, 1:400), BCL6-PE (K112-91, BD Pharmingen, 1:25), and BLIMP1-A700 (646702, R&D, 1:50) for 1 h at 25 °C, washed twice with True Nuclear Permeabilization/Wash buffer and resuspended in P2 for acquisition. For memory B cell staining, PBMC were incubated for 30 min on ice with biotinylated and Alexa Fluor 647-conjugated recombinant soluble S in P2, washed twice, then stained for 30 min on ice with IgG-BV480 (goat polyclonal, Jackson ImmunoResearch, 1:100), IgD-Super Bright 702 (IA6-2, Thermo, 1:50), IgA-FITC (M24A, Millipore, 1:500), CD45-A532 (HI30, Thermo, 1:50), CD38-BB700 (HIT2, BD Horizon, 1:500), CD24-BV421 (ML5, 1:100), CD20-Pacific Blue (2H7, 1:400), CD27-BV510 (O323, 1:50), CD8-BV570 (RPA-T8, 1:200), IgM-BV605 (MHM-88, 1:100), CD19-BV750 (HIB19, 1:100), FcRL5-PE (509f6, 1:100), CXCR5-PE-Dazzle 594 (J252D4, 1:50), CD14-PerCP (HCD14, 1:50), CD71-PE-Cy7 (CY1G4, 1:400), CD4-Spark685 (SK3, 1:200), streptavidin-APC-Fire750, CD3-APC-Fire810 (SK7, 1:50) and Zombie NIR (all BioLegend) diluted in Brilliant Staining buffer (BD Horizon). Cells were washed twice with P2 and resuspended in P2 for acquisition. All samples were acquired on an Aurora using SpectroFlo v.2.2 (Cytek). Flow cytometry data were analysed using FlowJo v.10 (BD Biosciences). For sorting plasmablasts from peripheral blood, B cells were enriched from PBMC by first using EasySep Human Pan-B cell Enrichment Kit (StemCell Technologies), and then stained with CD20-PB (2H7, 1:400), CD3-FITC (HIT3a, 1:200), IgD-PerCP-Cy5.5 (IA6-2, 1:200), CD71-PE (CY1G4, 1:400), CD38-PE-Cy7 (HIT2, 1:200), CD19-APC (HIB19, 1:200) and Zombie Aqua (all BioLegend). For sorting GC B cells and LNPCs from the lymph node, single-cell suspensions were stained for 30min on ice with PD-1-BB515 (EH12.1, BD Horizon, 1:100), CD20-Pacific Blue (2H7, 1:100), IgD-PerCP-Cy5.5 (IA6-2, 1:200), CD19-PE (HIB19, 1:200), CXCR5-PE-Dazzle 594 (J252D4, 1:50), CD38-PE-Cy7 (HIT2, 1:200), CD4-Alexa-Fluor-700 (SK3, 1:400), CD71-APC (CY1G4, 1:100), and Zombie Aqua (all BioLegend). Cells were washed twice, and single plasmablasts (live singlet CD19 + CD3 − IgD low CD38 + CD20 − CD71 + ), GC B cells (live singlet CD19 + CD4 − IgD low CD71 + CD38 int CD20 + CXCR5 + ), LNPCs (live singlet CD19 + CD4 − IgD low CD38 + CD20 − CD71 + ) were sorted using a FACSAria II. 
ELISA Assays were performed in MaxiSorp 96-well plates (Thermo Fisher) coated with 100 μl of recombinant SARS-CoV-2 S, Donkey anti-human IgG (H+L) antibody (Jackson ImmunoResearch, 709-005-149) or BSA diluted to 1 μg ml −1 in PBS, and plates were incubated at 4 °C overnight. Plates then were blocked with 10% FBS and 0.05% Tween 20 in PBS. Plasma or purified monoclonal antibodies were serially diluted in blocking buffer and added to the plates. Monoclonal antibodies and plasma samples were tested at 10 μg ml −1 and 1:30 starting dilution, respectively, followed by 7 additional threefold serial dilutions. Plates were incubated for 90 min at room temperature and then washed 3 times with 0.05% Tween 20 in PBS. Secondary antibodies were diluted in blocking buffer before adding to wells and incubating for 60 min at room temperature. Horseradish peroxidase (HRP)-conjugated goat anti-human IgG (H+L) antibody (Jackson ImmunoResearch, 109-035-088, 1:2,500) was used to detect monoclonal antibodies. HRP-conjugated goat anti-Human IgG Fcγ fragment (Jackson ImmunoResearch, 109-035-190, 1:1,500), HRP-conjugated goat anti-human serum IgA α chain (Jackson ImmunoResearch, 109-035-011, 1:2,500), and HRP-conjugated goat anti-human IgM (Caltag, H15007, 1:4,000) were used to detect plasma antibodies. Plates were washed three times with PBST and three times with PBS before the addition of O -phenylenediamine dihydrochloride peroxidase substrate (MilliporeSigma). Reactions were stopped by the addition of 1 M hydrochloric acid. Optical density measurements were taken at 490 nm. The threshold of positivity for recombinant monoclonal antibodies was set as two times the optical density of background binding to BSA at the highest concentration of each monoclonal antibody. The area under the curve for each monoclonal antibody and half-maximal binding dilution for each plasma sample were calculated using GraphPad Prism v.9. Plasma antibody avidity was measured as previously described 31 . Areas under the curve were calculated by setting the mean + 3 × s.d. of background binding to BSA as a baseline. In brief, plasma dilutions that would give an optical density reading of 2.5 were calculated from the serial dilution ELISA. S-coated plates were incubated with this plasma dilution as above and then washed one time for 5 min with either PBS or 8 M urea in PBS, followed by 3 washes with PBST and developed as above. The avidity index was calculated for each sample as the optical density ratio of the urea-washed to PBS-washed wells. ELISpot ELISpot plates were coated overnight at 4 °C with Flucelvax Quadrivalent 2019/2020 seasonal influenza virus vaccine (Seqirus, 1:100), tetanus/diphtheria vaccine (Grifols, 1:20), SARS-CoV-2 S (10 μg ml −1 ), anti-human Ig (Cellular Technology) and BSA. A direct ex vivo ELISpot assay was performed to determine the number of total, vaccine-binding or recombinant S-binding IgG- and IgA-secreting cells present in PBMCs or enriched BMPCs using Human IgA/IgG double-colour ELISpot kits (Cellular Technology) according to the manufacturer’s protocol. ELISpot plates were analysed using an ELISpot analyser (Cellular Technology). Single-cell RNA-seq library preparation and sequencing Sorted plasmablasts and whole FNA from each time point were processed using the following 10x Genomics kits: Chromium Next GEM Single Cell 5′ Kit v2 (PN-1000263); Chromium Next GEM Chip K Single Cell Kit (PN-1000286); BCR Amplification Kit (PN-1000253); Dual Index Kit TT Set A (PN-1000215). 
Chromium Single Cell 5′ Gene Expression Dual Index libraries and Chromium Single Cell V(D)J Dual Index libraries were prepared according to the manufacturer’s instructions without modifications. Both gene expression and V(D)J libraries were sequenced on a NovaSeq S4 (Illumina), targeting a median sequencing depth of 50,000 and 5,000 read pairs per cell, respectively. Bulk BCR sequencing Sorted GC B cells and LNPCs from FNA, enriched BMPCs from bone marrow or enriched MBCs from PBMCs from blood were used for library preparation for bulk BCR sequencing. Circulating MBCs were magnetically isolated by first staining with IgD-PE and MojoSort anti-PE Nanobeads (BioLegend), and then processing with the EasySep Human B Cell Isolation Kit (StemCell Technologies) to negatively enrich IgDlo B cells. RNA was prepared from each sample using the RNeasy Plus Micro kit (Qiagen). Libraries were prepared using the NEBNext Immune Sequencing Kit for Human (New England Biolabs) according to the manufacturer’s instructions without modifications. High-throughput 2 × 300-bp paired-end sequencing was performed on the Illumina MiSeq platform with a 30% PhiX spike-in according to the manufacturer’s recommendations, except for performing 325 cycles for read 1 and 275 cycles for read 2. Preprocessing of bulk sequencing BCR reads Preprocessing of demultiplexed paired-end reads was performed using pRESTO v.0.6.2 32 as previously described 6 , with the exception that sequencing errors were corrected using the unique molecular identifiers (UMIs) as they were, without additional clustering (Extended Data Table 5 ). Previously preprocessed unique consensus sequences from reported samples 6 were included as they were. Previously preprocessed unique consensus sequences from reported samples 17 corresponding to participants 01, 02a, 04, 07, 10, 13, 20, and 22 were subset to those with at least two contributing reads and included. Preprocessing of 10x Genomics single-cell BCR reads Demultiplexed paired-end FASTQ reads were preprocessed using the ‘cellranger vdj’ command from 10x Genomics’ Cell Ranger v.6.0.1 for alignment against the GRCh38 human reference v.5.0.0 (‘refdata-cellranger-vdj-GRCh38-alts-ensembl-5.0.0’). The resultant ‘filtered_contig.fasta’ files were used as preprocessed single-cell BCR reads (Extended Data Table 5 ). V(D)J gene annotation and genotyping Initial germline V(D)J gene annotation was performed on the preprocessed BCRs using IgBLAST v.1.17.1 33 with the deduplicated version of IMGT/V-QUEST reference directory release 202113-2 34 . IgBLAST output was parsed using MakeDb.py from Change-O v.1.0.2 35 . For the single-cell BCRs, isotype annotation was pulled from the ‘c_call’ column in the ‘filtered_contig_annotations.csv’ files outputted by Cell Ranger. For both bulk and single-cell BCRs, sequence-level quality control was performed, requiring each sequence to have non-empty V and J gene annotations; exhibit chain consistency in all annotations; bear fewer than 10 non-informative (non-A/T/G/C, such as N or -) positions; and carry a non-empty CDR3 with no N and a nucleotide length that is a multiple of 3. For single-cell BCRs, cell-level quality control was also performed, requiring each cell to have either exactly one heavy chain and at least one light chain, or at least one heavy chain and exactly one light chain. Within a cell, for the chain type with more than one sequence, the most abundant sequence in terms of UMI count (when tied, the sequence that appeared earlier in the file) was kept.
Ultimately, exactly one heavy chain and one light chain per cell were kept. Additionally, quality control against cross-sample contamination was performed by examining the extent, if any, of pairwise overlapping between samples in terms of BCRs with both identical UMIs and identical non-UMI nucleotide sequences. Individualized genotypes were inferred based on sequences that passed all quality control using TIgGER v.1.0.0 36 and used to finalize V(D)J annotations. Sequences annotated as non-productively rearranged by IgBLAST were removed from further analysis. Clonal lineage inference B cell clonal lineages were inferred on a by-individual basis based on productively rearranged sequences using hierarchical clustering with single linkage 37 . When combining both bulk and single-cell BCRs, heavy chain-based clonal inference was performed 38 . First, heavy chain sequences were partitioned based on common V and J gene annotations and CDR3 lengths using the groupGenes function from Alakazam v1.1.0 35 . Within each partition, heavy chain sequences with CDR3s that were within 0.15 normalized Hamming distance from each other were clustered as clones using the hclust function from fastcluster v1.2.3 39 . When using only single-cell BCRs, clonal inference was performed based on paired heavy and light chains. First, paired heavy and light chains were partitioned based on common V and J gene annotations and CDR3 lengths. Within each partition, pairs whose heavy chain CDR3s were within 0.15 normalized Hamming distance from each other were clustered as clones. Following clonal inference, full-length clonal consensus germline sequences were reconstructed using CreateGermlines.py from Change-O v.1.0.2 35 for each clone with the D-segment (for heavy chains) and the N/P regions masked with Ns, resolving any ambiguous gene assignments by majority rule. Within each clone, duplicate IMGT-aligned V(D)J sequences from bulk sequencing were collapsed using the collapseDuplicates function from Alakazam v1.1.0 35 except for duplicates derived from different time points, tissues, B cell compartments, or isotypes. BCR analysis BCR analysis was performed in R v.4.1.0 with visualization performed using base R, ggplot2 v.3.3.5 40 , and GraphPad Prism v.9. For the B cell compartment label, gene expression-based cluster annotation was used for single-cell BCRs; FACS-based sorting was used in general for bulk BCRs, except that plasmablast sorts from lymph nodes were labelled LNPCs, week 5 IgDlo sorts from blood were labelled activated, and week 7 IgDlo sorts from blood were labelled memory. For the time point label, one blood plasmablast sample that pooled collections in both week 4 and week 5 was treated as week 4; and one blood memory sort sample that pooled collections in both week 29 and week 30 was treated as week 29. For analysis involving the memory compartment, the memory sequences were restricted to bulk-sequenced week 29 memory sorts from blood. A heavy chain-based B cell clone was considered a S-specific clone if the clone contained any sequence corresponding to a recombinant monoclonal antibody that was synthesized based on the single-cell BCRs and that tested positive for S-binding. Clonal overlap between B cell compartments was visualized using circlize v.0.4.13 41 . 
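As an illustration of the heavy chain-based clonal inference described above, the following self-contained Python sketch partitions toy sequences by common V gene, J gene and CDR3 length and then applies single-linkage clustering at the 0.15 normalized Hamming distance threshold. SciPy stands in here for the R packages (Alakazam and fastcluster) used in the study, and the sequences are invented for illustration.

# Toy clonal inference: partition by V/J gene and CDR3 length, then single-linkage
# clustering of CDR3s at a 0.15 normalized Hamming distance cutoff.
from collections import defaultdict
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

seqs = [("IGHV3-30", "IGHJ4", "ARDGYSSGWYFDY"),    # hypothetical heavy chains
        ("IGHV3-30", "IGHJ4", "ARDGYSSGWYFDV"),
        ("IGHV3-30", "IGHJ4", "ARELNTVTTTFDY"),
        ("IGHV1-69", "IGHJ6", "ARVGATTTDYYMDV")]

partitions = defaultdict(list)                     # common V gene, J gene, CDR3 length
for v, j, cdr3 in seqs:
    partitions[(v, j, len(cdr3))].append(cdr3)

def norm_hamming(a, b):                            # mismatches divided by CDR3 length
    return sum(x != y for x, y in zip(a, b)) / len(a)

for key, cdr3s in partitions.items():
    if len(cdr3s) == 1:                            # a singleton partition is one clone
        print(key, {cdr3s[0]: 1})
        continue
    dists = [norm_hamming(a, b) for a, b in combinations(cdr3s, 2)]
    clones = fcluster(linkage(dists, method="single"), t=0.15, criterion="distance")
    print(key, dict(zip(cdr3s, clones)))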
SHM frequency was calculated for each heavy chain sequence by counting the number of nucleotide mismatches from the germline sequence in the variable segment leading up to the CDR3, while excluding the first 18 positions that could be error-prone due to the primers used for generating the monoclonal antibody sequences. Calculation was performed using the calcObservedMutations function from SHazaM v.1.0.2 35 . Phylogenetic trees for S-specific clones containing BMPCs were constructed on a by-participant basis using IgPhyML v1.1.3 18 with the HLP19 model 42 . Only heavy chain sequences from week 4 plasmablast compartment, the GC B cell, LNPC and MBC compartments up to and including week 15, and the week 29 or 40 BMPC compartment were considered. For clones with > 100 sequences, subsampling was applied with probabilities proportional to the proportions of sequences from different compartments, in addition to keeping all sequences corresponding to synthesized monoclonal antibodies and all BMPC sequences. Only subsampled sequences from the plasmablast, LNPC and BMPC compartments were used for eventual tree-building. Trees were visualized using ggtree v3.0.4 43 . Human housekeeping genes A list of human housekeeping genes was compiled from the 20 most stably expressed genes across 52 tissues and cell types in the Housekeeping and Reference Transcript (HRT) Atlas v.1.0 44 ; 11 highly uniform and strongly expressed genes reported 45 ; and some of the most commonly used housekeeping genes 46 . The final list includes 34 genes: ACTB , TLE5 (also known as AES ), AP2M1 , BSG , C1orf43 , CD59 , CHMP2A , CSNK2B , EDF1 , EEF2 , EMC7 , GABARAP , GAPDH , GPI , GUSB , HNRNPA2B1 , HPRT1 , HSP90AB1 , MLF2 , MRFAP1 , PCBP1 , PFDN5 , PSAP , PSMB2 , PSMB4 , RAB11B , RAB1B , RAB7A , REEP5 , RHOA , SNRPD3 , UBC , VCP and VPS29 . Processing of 10x Genomics single-cell 5′ gene expression data Demultiplexed pair-end FASTQ reads were first preprocessed on a by-sample basis using the ‘cellranger count’ command from 10× Genomics’ Cell Ranger v.6.0.1 for alignment against the GRCh38 human reference v.2020-A (‘refdata-gex-GRCh38-2020-A’). To avoid a batch effect introduced by sequencing depth, the ‘cellranger aggr’ command was used to subsample from each sample so that all samples had the same effective sequencing depth, which was measured in terms of the number of reads confidently mapped to the transcriptome or assigned to the feature IDs per cell. Gene annotation on human reference chromosomes and scaffolds in Gene Transfer Format (‘gencode.v32.primary_assembly.annotation.gtf’) was downloaded (on 2 June 2021) from GENCODE v.32 47 , from which a biotype (‘gene_type’) was extracted for each feature. Quality control was performed as follows on the aggregate gene expression matrix consisting of 432,713 cells and 36,601 features using SCANPY v.1.7.2 48 and Python v.3.8.8. (1) To remove presumably lysed cells, cells with mitochondrial content greater than 12.5% of all transcripts were removed. (2) To remove likely doublets, cells with more than 8,000 features or 80,000 total UMIs were removed. (3) To remove cells with no detectable expression of common endogenous genes, cells with no transcript for any of the 34 housekeeping genes were removed. (4) The feature matrix was subset, based on their biotypes, to protein-coding, immunoglobulin, and T cell receptor genes that were expressed in at least 0.1% of the cells in any sample. The resultant feature matrix contained 15,842 genes. 
(5) Cells with detectable expression of fewer than 200 genes were removed. After quality control, there were a total of 383,708 cells from 56 single-cell samples (Extended Data Table 5 ). Single-cell gene expression analysis Single-cell gene expression analysis was performed in SCANPY v.1.7.2 48 . UMI counts measuring gene expression were log-normalized. The top 2,500 highly variable genes (HVGs) were identified using the ‘scanpy.pp.highly_variable_genes’ function with the ‘seurat_v3’ method, from which immunoglobulin and T cell receptor genes were removed. The data were scaled and centred, and principal component analysis (PCA) was performed based on HVG expression. PCA-guided neighborhood graphs embedded in uniform manifold approximation and projection (UMAP) were generated using the top 20 principal components via the ‘scanpy.pp.neighbors’ and ‘scanpy.tl.umap’ functions. Overall clusters (Extended Data Table 3 , top) were identified using Leiden graph-clustering via the ‘scanpy.tl.leiden’ function with resolution 0.23 (Extended Data Fig. 2b ). UMAPs were faceted by batch, by participant, and by participant followed by sample; and inspected for convergence across batches, participants, and samples within participants, to assess whether there was a need for integration (Extended Data Fig. 2b ). Cluster identities were assigned by examining the expression of a set of marker genes for different cell types (Extended Data Fig. 2c ): MS4A1 , CD19 and CD79A for B cells; CD3D , CD3E , CD3G , IL7R and CD4 or CD8A for CD4 + or CD8 + T cells, respectively; GZMB , GNLY , NKG7 and NCAM1 for NK cells; CD14 , LYZ , CST3 and MS4A7 for monocytes; IL3RA and CLEC4C for plasmacytoid dendritic cells (pDCs); and FDCSP , CXCL14 15 and FCAMR 16 for FDCs. One group of 27 cells labelled ‘B and T’ was excluded. To remove potential contamination by platelets, 73 cells with a log-normalized expression value of >2.5 for PPBP were removed. All 644 cells from the FDC cluster were confirmed to have originated from FNA samples instead of blood. Cells from the overall B cell cluster (Extended Data Table 3 , bottom) were further clustered to identify B cell subsets using Leiden graph-clustering via the ‘scanpy.tl.leiden’ function with resolution 0.18 (Extended Data Fig. 2d ). Cluster identities were assigned by examining the expression of a set of marker genes for different B cell subsets (Extended Data Fig. 2e ) along with the availability of BCRs. The following marker genes were examined: BCL6 , RGS13 , MEF2B , STMN1 , ELL3 and SERPINA9 for GC B cells; XBP1 , IRF4 , SEC11C , FKBP11 , JCHAIN and PRDM1 for plasmablasts and LNPCs; TCL1A , IL4R , CCR7 , IGHM , and IGHD for naive B cells; and TNFRSF13B , CD27 and CD24 for MBCs. Although one group clustered with B cells during overall clustering, it was labelled ‘B and T’ as its cells tended to have both BCRs and relatively high expression levels of CD2 and CD3E ; and was subsequently excluded from the final B cell clustering. Eighteen cells that were found in the GC B cell cluster but came from blood samples were labelled ‘PB-like’ 13 . Two hundred and twenty-three cells that were found in the plasmablast cluster but came from FNA samples were re-assigned as LNPCs. Forty cells that were found in the LNPC cluster but came from blood samples were re-assigned as plasmablasts. Heavy chain SHM frequency and isotype usage of the B cell subsets were assessed for consistency with expected values to further confirm their assigned identities. 
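The quality-control and clustering workflow above follows a fairly standard SCANPY recipe, condensed below as a hedged sketch. The numeric thresholds (12.5% mitochondrial content, 8,000 genes, 80,000 UMIs, 200 genes, 2,500 HVGs, 20 principal components, Leiden resolution 0.23) are taken from the text, but the input file name is a placeholder, steps (3) and (4) (which need the housekeeping list and biotype annotations) are omitted for brevity, and the exact arguments used in the study may differ.

```python
import scanpy as sc

# Placeholder path to the depth-normalized Cell Ranger aggregate matrix
adata = sc.read_10x_h5("aggr_filtered_feature_bc_matrix.h5")
adata.var_names_make_unique()

# QC: mitochondrial content, doublet heuristics and minimum gene count
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None, inplace=True)
adata = adata[adata.obs["pct_counts_mt"] <= 12.5].copy()      # (1) lysed cells
adata = adata[(adata.obs["n_genes_by_counts"] <= 8000)
              & (adata.obs["total_counts"] <= 80000)].copy()  # (2) doublets
sc.pp.filter_cells(adata, min_genes=200)                      # (5) low-coverage cells

# HVG selection (the seurat_v3 flavor operates on raw counts), then normalize
sc.pp.highly_variable_genes(adata, n_top_genes=2500, flavor="seurat_v3")
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
adata = adata[:, adata.var["highly_variable"]].copy()

# Scale, embed and cluster as described in the text
sc.pp.scale(adata)
sc.tl.pca(adata)
sc.pp.neighbors(adata, n_pcs=20)
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=0.23)
```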
Selection of single-cell BCRs from GC B cell or LNPC clusters for expression Single-cell gene expression analysis was performed using lymph node samples up to and including week 15 on a by-participant basis. Clonal inference was performed based on paired heavy and light chains from the same samples. From every clone with a clone size of more than 3 cells that contained cells from the GC B cell and/or LNPC clusters, one GC B cell or LNPC was selected. For selection, where a clone spanned both the GC B cell and LNPC compartments and/or multiple time points, a compartment and a time point were first randomly selected. Within that clone, the cell with the highest heavy chain UMI count was then selected, breaking ties based on IGHV SHM frequency. In all selected cells, native pairing was preserved. Selection of BCRs from S-specific BMPC clones for expression From each heavy chain-based S-specific clone containing both plasmablasts and BMPCs, where possible, one plasmablast heavy chain was selected and, together with all BMPC heavy chains, paired with the same light chain for expression. For the plasmablast heavy chain, if single-cell paired plasmablasts were available, the single-cell paired plasmablast whose IGHV mutation frequency was closest to the median mutation frequency of the other single-cell paired plasmablasts in the same clone (breaking ties by UMI count), and whose light chain V gene, J gene and CDR3 length (VJL) combination was consistent with the clonal majority, was used as the source. The natively paired light chain of the plasmablast from which the heavy chain was selected was used. In clones in which two plasmablasts had inconsistent light chain VJL combinations, both plasmablasts were used. Clones in which there was light chain uncertainty due to more than two plasmablasts or due to LNPCs were generally excluded. Curation of selected BCRs for expression The selected BCRs were curated prior to synthesis. First, artificial gaps introduced under the IMGT unique numbering system 49 were removed from the IMGT-aligned observed V(D)J sequences. IMGT gaps were identified as positions containing in-frame triplet dots (‘…’) in the IMGT-aligned germline sequences. Second, any non-informative (non-A/T/G/C, such as N or -) positions in the observed sequences, with the exception of potential in-frame indels, were patched with the nucleotides at their corresponding germline positions. Third, if applicable, the 3′ ends of the observed sequences were trimmed so that the total nucleotide length would be a multiple of 3. Finally, potential in-frame indels were manually reviewed. For a given potential in-frame indel from a selected cell, its presence or absence in the unselected cells from the same clone was considered. Barring strong indications that an in-frame indel was due to sequencing error rather than the inability of the IMGT unique numbering system to capture it, in-frame indels were generally included in the final curated sequences. Transfection for recombinant monoclonal antibody and Fab production Selected pairs of heavy and light chain sequences were synthesized by GenScript and sequentially cloned into IgG1, Igκ/λ and Fab expression vectors. Heavy and light chain plasmids were co-transfected into Expi293F cells (Thermo Fisher Scientific) for recombinant monoclonal antibody production, followed by purification with protein A agarose resin (GoldBio). Expi293F cells were cultured in Expi293 Expression Medium (Gibco) according to the manufacturer’s protocol.
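Returning to the curation rules above, the first three steps (removing IMGT gaps, patching non-informative positions from the germline and trimming the 3′ end to a codon boundary) reduce to simple string operations. The following sketch is an illustrative re-implementation under the assumption that the observed and germline sequences are equal-length IMGT-aligned strings; it deliberately ignores the in-frame indel review, which the study handled manually, and is not the code used to prepare the synthesized sequences.

```python
def curate_vdj(observed: str, germline: str) -> str:
    """Curate an IMGT-aligned observed V(D)J sequence for synthesis.

    observed, germline: equal-length IMGT-aligned nucleotide strings, where
    '.' marks IMGT numbering gaps in the germline and non-A/T/G/C characters
    (e.g. 'N', '-') mark non-informative positions in the observed sequence.
    """
    assert len(observed) == len(germline)
    out = []
    for obs, germ in zip(observed.upper(), germline.upper()):
        if germ == ".":
            # IMGT gap position: dropped here; a real insertion at such a
            # position (an in-frame indel) would need the manual review
            # described above rather than this blanket removal
            continue
        out.append(obs if obs in "ATGC" else germ)  # patch from germline
    seq = "".join(out)
    return seq[: len(seq) - len(seq) % 3]  # trim 3' end to a codon multiple

# Toy example: one 'N' patched, one IMGT gap triplet removed, one base trimmed
print(curate_vdj("ATGNA...CGTAG", "ATGCA...CGTCG"))  # -> ATGCACGTA
```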
GFP-reduction neutralization test Serial dilutions of each monoclonal antibody in DMEM were incubated with 10² plaque-forming units (PFU) of VSV-SARS-CoV-2 D614G for 1 h at 37 °C. Antibody–virus complexes were added to Vero cell monolayers in 96-well plates and incubated at 37 °C for 7.5 h. Cells were fixed in 2% formaldehyde (Millipore Sigma) containing 10 μg ml−1 of Hoechst 33342 nuclear stain (Invitrogen) for 45 min at room temperature. Fixative was replaced with PBS prior to imaging. Images were acquired using an IN Cell 2000 Analyzer automated microscope (GE Healthcare) in both the DAPI and FITC channels to visualize nuclei and infected cells. Images were analysed using the Multi Target Analysis Module of the IN Cell Analyzer 1000 Workstation Software (GE Healthcare). GFP-positive cells were identified using the top-hat segmentation method and subsequently counted within the IN Cell workstation software. The initial dilution of monoclonal antibody started at 25 μg ml−1 and was threefold serially diluted over eight dilutions in a 96-well plate. Affinity analysis by BLI We used the Octet Red instrument (ForteBio) with shaking at 1,000 rpm. The kinetic analysis using Octet SA biosensors (Sartorius) was performed as follows: (1) baseline: 120 s immersion in buffer (10 mM HEPES and 1% BSA); (2) loading: 130 s immersion in solution with 10 μg ml−1 biotinylated SARS-CoV-2 S Avitag; (3) baseline: 120 s immersion in buffer; (4) association: 300 s immersion in solution with serially diluted recombinant Fab; (5) dissociation: 600 s immersion in buffer. The BLI signal was recorded and analysed using BIAevaluation Software (Biacore). The 1:1 binding model with a drifting baseline was employed to estimate the equilibrium dissociation constant ( K D ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Raw sequencing data and transcriptomics count matrix are deposited at the Sequence Read Archive ( PRJNA777934 ) and Gene Expression Omnibus ( PRJNA777934 ). Processed transcriptomics and BCR data are deposited at Zenodo ( ). Previously reported bulk-sequenced BCR data used in this study were deposited at the Sequence Read Archive under PRJNA731610 and PRJNA741267, and at and .
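For readers unfamiliar with the 1:1 binding model referenced above, the underlying kinetics are straightforward: during association the response rises towards an equilibrium level set by the analyte concentration, during dissociation it decays exponentially, and K D = k off /k on . The sketch below simulates idealized sensorgrams over a threefold Fab dilution series; the rate constants, starting concentration and R max are invented for illustration, and the drifting-baseline term used by BIAevaluation is omitted.

```python
import numpy as np

def sensorgram_1to1(conc_M, ka=1e5, kd=1e-3, rmax=1.0,
                    t_assoc=300.0, t_dissoc=600.0, dt=1.0):
    """Idealized 1:1 Langmuir binding sensorgram (assumed rate constants).

    Association:  R(t) = Req * (1 - exp(-(ka*C + kd)*t)),
                  Req  = Rmax * C / (C + KD), with KD = kd/ka
    Dissociation: R(t) = R_end * exp(-kd*t)
    """
    kobs = ka * conc_M + kd
    req = rmax * conc_M / (conc_M + kd / ka)
    t1 = np.arange(0.0, t_assoc, dt)
    assoc = req * (1.0 - np.exp(-kobs * t1))
    t2 = np.arange(0.0, t_dissoc, dt)
    dissoc = assoc[-1] * np.exp(-kd * t2)
    return np.concatenate([assoc, dissoc])

# Threefold dilution series analogous to the one described above
# (molar units, starting at a hypothetical 500 nM)
for conc in [500e-9 / 3**i for i in range(8)]:
    peak = sensorgram_1to1(conc).max()
    print(f"{conc*1e9:8.2f} nM -> peak response {peak:.3f}")
print("KD =", 1e-3 / 1e5, "M")  # 10 nM under these assumed constants
```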
Even quite low levels of antibodies would continue to provide some protection against disease, the researchers said—as long as the virus doesn't change. "If the virus didn't change, most people who got two doses of this vaccine would be in very good shape," said senior author Ali Ellebedy, Ph.D., an associate professor of pathology & immunology, of medicine and of molecular microbiology. "The antibody response we saw is exactly what we'd expect from a robust immune response. We never thought that six months following that second injection, many people would still be actively improving the quality of their antibodies. To me, that is remarkable. The problem is that this virus keeps evolving and producing new variants. So, the antibodies are getting better at recognizing the original strain, but unfortunately the target keeps changing." Immune cells that produce antibodies are from the B cell family. Following the B cell response through all of its stages—from initiation through peak antibody production to the emergence of memory cells that can quickly churn out new antibodies the next time the body encounters the same virus—requires repeatedly taking samples from parts of the body that can be hard to access. At different stages in the process, key members of the B cell family are located in the blood, the lymph nodes and the bone marrow. Getting B cells from the lymph nodes is technically challenging and involves using ultrasound to locate minuscule immune structures called germinal centers within the lymph nodes. Obtaining a sample of bone marrow involves inserting a needle into the pelvic bone. The researchers collected blood from 42 participants and lymph node samples from 15 participants before each person received his or her first dose of the Pfizer-BioNTech COVID-19 vaccine and at weeks three, four, five, seven, 15 and 29 afterward. The researchers also obtained bone marrow samples from 11 participants 29 and 40 weeks after the first vaccine dose. Eight people provided all three kinds of samples, allowing the researchers to track the development of the antibody response over time within those individuals. None of the eight had been infected with the virus that causes COVID-19, so their antibody responses were entirely due to vaccination. The research team was led by Ellebedy and co-first authors Wooseob Kim, Ph.D., a postdoctoral researcher, and Julian Q. Zhou, Ph.D., a staff scientist. The researchers found that B cells targeted against SARS-CoV-2 persisted in the germinal centers of all participants for months. Even six months after vaccination, 10 out of 15 people still had B cells in their germinal centers. Germinal centers are like boot camps where B cells are trained to make ever-better-quality antibodies. The more time B cells spend in germinal centers, the more potent their antibodies get. Germinal centers had been thought to last only a few weeks, so finding these boot camps still training B cells in a majority of people so long after vaccination was a surprise, Ellebedy said, and an indication of a strong antibody response that continued to mature and improve. Indeed, six months after vaccination, the antibodies were noticeably better than they had been in the beginning. In one set of experiments, the researchers found that only 20% of early antibodies bound to a protein from the virus. Six months later, nearly 80% of antibodies from the same individuals bound to the viral protein. "When you look at antibodies, quantity should not be your only concern," Ellebedy said. 
"The antibodies at six months might be less in quantity, but they are much better in quality. And that refinement of the antibody response happens on its own. You get your shot, maybe your arm hurts for a day, and then you forget about it. But six months later your germinal centers are still ongoing and your antibodies are still getting better and better." The quality of the antibodies, of course, is measured against the original virus that was used to design the vaccine. If a new variant is different enough from the original, it may be able to escape once-powerful antibodies. Ellebedy and colleagues have begun studying the effects of variant-specific boosters on the antibody response to vaccination. "Everything changes when a new variant comes," Ellebedy said. "You have to retrain your immune system. It's like updating your anti-malware software to make sure it matches the newest computer viruses that are going around. It doesn't mean the old software was bad. It just means it no longer completely matches the viruses it is going to encounter." | 10.1038/s41586-022-04527-1 |
Biology | DIY crop speed breeding system to boost drought research | Sreya Ghosh et al. Speed breeding in growth chambers and glasshouses for crop breeding and model plant research, Nature Protocols (2018). DOI: 10.1038/s41596-018-0072-z Journal information: Nature Protocols | http://dx.doi.org/10.1038/s41596-018-0072-z | https://phys.org/news/2018-11-diy-crop-boost-drought.html | Abstract ‘Speed breeding’ (SB) shortens the breeding cycle and accelerates crop research through rapid generation advancement. SB can be carried out in numerous ways, one of which involves extending the duration of plants’ daily exposure to light, combined with early seed harvest, to cycle quickly from seed to seed, thereby reducing the generation times for some long-day (LD) or day-neutral crops. In this protocol, we present glasshouse and growth chamber–based SB approaches with supporting data from experimentation with several crops. We describe the conditions that promote the rapid growth of bread wheat, durum wheat, barley, oat, various Brassica species, chickpea, pea, grass pea, quinoa and Brachypodium distachyon . Points of flexibility within the protocols are highlighted, including how plant density can be increased to efficiently scale up plant numbers for single-seed descent (SSD). In addition, instructions are provided on how to perform SB on a small scale in a benchtop growth cabinet, enabling optimization of parameters at a low cost. Introduction To improve the productivity and stability of crops, there is pressure to fast-track research and increase the rate of variety development. The generation time of most plant species represents a bottleneck in applied research programs and breeding, creating the need for technologies that accelerate plant development and generation turnover. Recently, we reported an approach for SB that involves extending the photoperiod using supplementary lighting and temperature control, enabling rapid generation advancement in glasshouses with sodium vapor lamps (SVL) or growth chambers fitted with a mixture of metal halide and light-emitting diode (LED) lighting 1 . By adopting a 22-h photoperiod and a controlled temperature regime, generation times were substantially reduced for spring bread wheat ( Triticum aestivum ), durum wheat ( T. durum ), barley ( Hordeum vulgare ), chickpea ( Cicer arietinum ), pea ( Pisum sativum ), canola ( Brassica napus ), the model grass, B. distachyon , and the model legume, Medicago truncatula , in comparison to those of plants grown in a field or a glasshouse with no supplementary light. Under the rapid growth conditions, plant development was normal, plants could be easily crossed (wheat and barley), and seed germination rates were high. We also demonstrated that SB can be used to accelerate gene transformation pipelines and that adult plant phenotyping for traits such as flowering time, plant height and disease resistance in wheat; leaf sheath glaucousness in barley; and pod shattering in canola could be performed under SB conditions 1 . The use of an extended photoperiod to hasten plant growth is not novel. Sysoeva et al. 2 provide an extensive review of the literature surrounding this subject published within the past 90 years, which outlines successful attempts using spring wheat, barley, pea, chickpea, radish ( Raphanus sativus ), alfalfa ( Medicago sativa ), canola, flax ( Linum usitatissimum ), Arabidopsis ( Arabidopsis thaliana ), apple ( Malus domestica ) and rose ( Rosa x hybrida ), among others. 
More recent examples of photoperiod manipulation to hasten flowering time of crop species include lentil ( Lens culinaris ) 3 , 4 , pea ( P. sativum ), chickpea ( C. arietinum ), fava bean ( Vicia faba ), lupin ( Lupinus angustifolius ) 5 and clover ( Trifolium subterraneum ) 6 . Here, we provide a standardized SB procedure for use in a glasshouse, or a growth chamber with additional data-supported modifications. We provide details for scaling up plant numbers in the glasshouse, suitable for SSD, to generate large populations. Because plant species, indeed even cultivars within a species, are highly diverse in their response to photoperiod, a universal procedure for all plant species and traits is not possible. We therefore provide instructions for building a low-cost benchtop SB cabinet with controlled lighting and humidity monitoring that is suitable for small-scale research projects and trialing SB parameters. Notwithstanding, we have observed that the procedures are flexible and can be tailored to fit a wide range of breeding or research objectives and crop species. By sharing these procedures, we aim to provide a pathway for accelerating crop research and breeding challenges. Overview of the procedure In this protocol, we describe how to implement SB in existing growth chambers (Box 1 ) and in temperature-controlled glasshouses using supplementary LED lighting, which provides significant cost savings over traditional SVLs (Equipment). The procedures have been tested in the United Kingdom and Australia, with lights from the same company, but with slightly different models. We also outline compatible soil mixes for various crops when growing them under these lighting regimes (see Reagent setup, ‘Soil’), along with advice for early harvest to reduce generation time further (see Procedure, Step 3). We provide supporting data to demonstrate the suitability of these setups (Anticipated results) to substantially decrease the number of days to flowering and overall generation advancement for spring wheat, barley, canola, chickpea, pea, B. distachyon , M. truncatula , oat ( Avena strigosa ), grass pea ( Lathyrus sativus ) and quinoa ( Chenopodium quinoa ). We also include the design, step-by-step construction procedure and operation of a small growth cabinet (see Equipment and Equipment setup, ‘Benchtop growth cabinet’), which allows control over the light quality, intensity and photoperiod to help optimize the SB recipe for different crops and cultivars before implementing a large-scale glasshouse experiment. Crop breeding programs commonly use SSD for several generations, on large numbers of segregating plants, to generate homozygous lines with fixed traits 7 . A glasshouse is often preferred for SSD because plant populations can be grown year-round. This process involves a large investment in time as well as space within the glasshouse. Following the crossing of two homozygous lines, six generations of self-pollination are required to produce progeny that are 98.4% homozygous, which, at a rate of two generations per year, would take 3 years to complete. Although only one or two seeds are needed from each plant to begin the next generation, plant researchers and breeders seek to maximize the number of plants within a restricted space. Plant density can be scaled up under SB to enable concurrent rapid cycling of large plant populations, which is ideal for SSD programs. 
To demonstrate this, we evaluated spring wheat and barley sown at different plant densities in a glasshouse fitted with LED supplementary lighting (Box 1 ). By comparing the physiological, morphological and yield parameters, we illustrate the normal development of these plants and highlight how this SB approach can save time and resources for SSD programs (see Anticipated results, ‘Speed breeding in single-seed descent programs’). Box 1 Speed breeding setup This box provides information about setting up SB in an existing plant growth chamber or CER. Here, we outline the core ‘recipe’ for programming an existing growth room to set up SB conditions. Lights . We have shown in our previous studies 1 that any light that produces a spectrum that reasonably covers the PAR region (400–700 nm), with particular focus on the blue, red and far-red ranges, is suitable for SB. The referenced study has several examples of these spectra, and similar examples of possible SB spectra are provided here. An appropriate spectral range can be achieved through LEDs, or a combination of LEDs and other lighting sources (e.g., halogen lamps), or in the case of a glasshouse, by simply supplementing the ambient lighting with LEDs or SVLs. We highly recommend that measurements of the light spectrum be taken before commencement of the SB experiment. In addition to controlling the light quality, we recommend a PPFD of ~450–500 µmol/m²/s at plant canopy height. Slightly lower or higher PPFD levels are also suitable. Crop species vary in their response to high irradiance. However, the suggested level of 450–500 µmol/m²/s has been demonstrated to be effective for a range of crop species 1 . Photoperiod . We recommend a photoperiod of 22 h with 2 h of darkness in a 24-h diurnal cycle. Continuous light is another option, but our experience has shown that the dark period slightly improves plant health. Gradually increasing the light intensity to mimic dawn and dusk should be done if possible, but is not vital. In our previous paper, we also provided an example in which an 18-h photoperiod was sufficient to achieve faster generation times for wheat, barley, oat and triticale 1 . Temperature . The optimal temperature regime (maximum and minimum temperatures) should be applied for each crop. A higher temperature should be maintained during the photoperiod, whereas a fall in temperature during the dark period can aid in stress recovery. At UQ, a 22 °C/17 °C regime cycling every 12 h, with the 2 h of darkness falling within the 12-h 17 °C period, has proven successful (SB II) 1 . By contrast, a temperature cycling regime of 22 °C/17 °C for 22 h of light and 2 h of dark, respectively, is used at JIC (SB I) 1 . In both scenarios, the generation times of all crops were successfully accelerated and comparable. In the controlled-environment chamber in which this was demonstrated, the temperature was ramped up and down similarly to the lights, but this was subsequently found not to be of particular importance. Humidity . Most controlled-environment chambers have limited control over humidity, but a reasonable range of 60–70% is ideal. For crops that are more adapted to drier conditions, a lower humidity level may be advisable. Development of the approach The SB concept was inspired by NASA’s efforts to grow crops in space, using an enclosed chamber and an extended photoperiod 8 .
In recognizing the opportunity to more rapidly produce adult wheat and barley plants and allow faster selection and population development, SB became the norm in cereal research activities at the University of Queensland (UQ), Australia, thanks to Ian Delacy and Mark Dieters. The original approach was first described and implemented for wheat 9 and peanut ( Arachis hypogaea ) 10 . Variations of this approach have been demonstrated to be an efficient system for rapid screening of wheat germplasm for adult plant resistance to various diseases 11 , 12 , 13 , 14 and also for pyramiding multiple disease resistance in barley 15 . The approach has also been adapted for high-density plant production systems for SSD programs. The current SB approach described in this protocol was developed from the initial implementation described for wheat to include a 2-h dark period that improved plant health 1 . This change was made following experiments in a controlled environment chamber at the John Innes Centre (JIC), UK, and was demonstrated to be suitable for accelerating research activities involving adult plant phenotyping and genetic structuring, as well as for molecular studies such as those on gene transformation in wheat and barley. It was further demonstrated to be suitable for rapid generation advancement for durum wheat ( T. durum ); pea; the model grass, B. distachyon ; and the model legume, M. truncatula ; and could be scaled up in the SB glasshouse system at UQ, to be made suitable for rapid generation advancement of wheat, barley, canola and chickpea. Comparison with other approaches Perhaps, the most well-known strategy for increasing generation turnover is ‘shuttle breeding’, introduced by Norman Borlaug in the 1950s at the International Centre for Maize and Wheat Improvement (CIMMYT), which enabled growing two generations per year by sowing wheat populations at field locations differing in altitude, latitude and climate in Mexico 16 . There is also a long history of extensive efforts to accelerate plant growth of many species by manipulating the photoperiod under artificial conditions, as briefly outlined above. Supplementary lighting is not the only basis for rapid generation advance in plants. A common approach involves exerting physiological stress to trigger flowering and earlier setting of seed. This may involve restricting plant growth area (by growing plants at high densities), nutrient and water access 17 and/or use of intense light. Such a method is well established and documented for rice 18 and has also been demonstrated for pea (Supplementary Fig. 1 ) and canola 19 . Embryo rescue—in which immature seed is harvested and induced to germinate on culture medium, with or without the addition of plant growth regulators (PGRs), to negate the waiting time for the seed to mature—is another common feature in many rapid-cycling methods. Bermejo et al. 20 used PGRs in embryo culture media to promote germination of immature lentil seed to achieve four generations annually. Mobini et al. 21 sprayed lentil and fava bean plants with PGRs to promote early flowering and applied embryo rescue with PGR-enriched agar media to achieve up to 8 and 6.8 generations per year, respectively. Castello et al. 22 reported three to four generations per year in subterranean clover ( T. subterraneum ), also with PGRs in the culture medium. 
Application of PGRs is not required for SB, which may be desirable considering the additional time and effort required to handle them and to work out the logistics of their application at specific times. In addition, if a species-specific protocol is not available, extensive testing would be needed to optimize such applications. There are also examples of the use of embryo rescue without PGR to shorten generation time. Zheng et al. 23 and Yao et al. 24 reported up to eight generations per year for wheat, and Zheng et al. 23 reported up to nine generations per year for barley. Ochatt et al. 25 and Mobini and Warkentin 5 reported up to 6.9 and 5.3 generations of pea per year, respectively, and Roumet and Morin 26 reported five cycles per year in soybean ( Glycine max L.), all with embryo rescue without PGRs. Other methods of reducing generation time have involved combining embryo rescue with other techniques. In addition to hastening flowering through stress, Liu et al. 27 used embryo rescue to achieve shorter generation times in oat ( Avena sativa ) and triticale ( Triticosecale ) and Ribalta et al. 28 in pea. Yao et al. 19 reported seven generations per year in canola when combining stress and embryo rescue. Ribalta et al. 29 used the PGR flurprimidol to reduce plant growth and induce early maturation in pea, followed by embryo rescue to achieve more than five generations per year. Without embryo rescue, SB conditions are capable of producing six generations per year for spring wheat, barley, chickpea and pea, and four generations per year for canola 1 . Testing is needed for any plant species before implementation, but this approach is promising for other cereal, pulse and legume crops. Seed of wheat and barley produced under SB conditions can be harvested prematurely at 2 weeks post anthesis, followed by a short period of drying and chilling to achieve high and uniform germination rates and healthy plants 1 . Approaches involving embryo rescue are important and useful for breeding and research programs if the required infrastructure is available 30 , particularly for species that are recalcitrant to the other interventions used to accelerate generation advancement, such as temperature or photoperiod manipulation 31 , 32 , 33 . In comparison, the SB approach outlined here is less labor intensive, especially with large populations, and laboratory facilities are not required, making the procedures more accessible. Plant growth can also be promoted by increasing the CO₂ concentration. For example, for C₃ plants such as rice and wheat, photosynthetic efficiency increases with increasing CO₂ levels, leading to an increase in biomass and early flowering. In fact, there are documented methods for rapid generation advance in rice that combine restricted root growth and canopy thinning with high CO₂ concentration, followed by early harvest and embryo rescue to cut down generation times of many rice varieties 34 . Doubled-haploid (DH) technology, in which haploid ( n ) embryos are rescued and undergo chromosome doubling (2 n ), is extensively and routinely used in the breeding of several crop species, thus reducing the number of generations required to achieve homozygous lines from six or more to just two generations 35 . Despite this, DH technology has some disadvantages: it can be expensive, it requires specialist skills, it restricts recombination to a single round of meiosis, and it has a variable success rate that may be genotype dependent 36 .
The approach can also be labor intensive for large populations, especially those requiring removal of the embryos from the seed coat. Notably, there is the potential for SB to further accelerate the production of DH lines by speeding up the crossing, plant regeneration and seed multiplication steps. We have presented a design for building a low-cost benchtop growth cabinet to trial SB. Compared to other published approaches for self-made growth chambers 37 , 38 , our design makes use of a more widely available control system that uses a Raspberry Pi and compatible sensors, with codes for the user interface (UI) freely available from GitHub ( ). The cabinet was trialed for the SB photoperiod (22 h/2 h (light/dark)) and temperature (22 °C/17 °C) regime, and successfully reproduced the accelerated development of one rapid-cycling variety each of wheat and pea (Supplementary Tables 1 and 2 ). The component costs for constructing such a cabinet are provided in Supplementary Table 3 . Limitations of the approach Different plant species can have markedly different responses when exposed to extended photoperiods. For LD plants, time to flowering is often accelerated under extended photoperiods because the critical daylength is generally exceeded. This is also the case with day-neutral plants, in which flowering will occur regardless of the photoperiod. By contrast, short-day (SD) plants require the photoperiod to be less than the critical day length to flower 39 , which could be at odds with SB conditions. However, there are exceptions, and some species show a facultative response in which, although flowering is promoted by a particular photoperiod, flowering will still occur in the opposite photoperiod. Furthermore, the time difference between being classified as an SD or an LD plant can be a matter of minutes 40 . These factors highlight both a limitation of SB and a point of flexibility. In cases in which the photoperiod response is unknown or complex in nature, experimentation with light and temperature parameters is required to optimize an SB strategy, for example, by using the benchtop growth cabinet. For instance, applying extended light before and following a shortened photoperiod, to induce flowering, could hasten initial vegetative growth and accelerate maturity, respectively, thus producing an overall shorter generation time. Such an approach has been successfully applied to amaranth ( Amaranthus spp. L.), an SD species, in which a 16-h LD photoperiod was used to initiate strong vegetative growth, after which plants were transferred to an 8-h SD photoperiod to induce flowering 41 . The overall effect was a shorter lifecycle and the ability to produce eight generations per year rather than two in the field. The need for vernalization, such as in winter wheat, creates a situation similar to the above. Young plants require chilling for a number of weeks to trigger the transition to flowering. Once the vernalization requirement is met in winter wheat, exposing the plants to extended photoperiod is likely to accelerate growth 42 , 43 . Overall, the ‘SB recipe’ is more straightforward and easier to implement for LD and day-neutral species that do not require vernalization. Experimentation and optimization of parameters are highly recommended for each species. The SB procedures presented here take place in an enclosed, artificial environment, which differs significantly from the field where eventual crop production may occur. 
Although this is acceptable for many activities, such as crossing, SSD and screening for some simple traits 1 , other activities, such as selection for adaptation in the target environment must still occur in the field. Nevertheless, programs alternating between SB and the field save time overall. The ability to shorten generation time further through early harvest of immature seed can interfere with the phenotyping of some seed traits. For this reason, in spring wheat breeding programs, in which dormant and nondormant genotypes must be differentiated, phenotyping of grain dormancy under SB conditions is limited to only four generations per year 9 . The initial investment to build a glasshouse or purchase a growth chamber with appropriate supplementary lighting and temperature-control capabilities is substantial if these facilities are not already available. However, depending on the budget of the research or breeding program, the benefits may outweigh the costs. For instance, an economic analysis performed by Collard et al. 18 compared the rapid generation advance (i.e., no phenotypic selection at each generation) with the pedigree-based breeding method (i.e., with phenotypic selection at each generation) for rice and determined that rapid generation (achieved through restricted soil access and canopy thinning) was more cost effective, and advantages would be realized after 1 year even if new facilities were constructed. Nevertheless, most breeding programs have pre-existing glasshouse facilities that can be converted for SB applications, but careful selection of energy-efficient lighting and temperature-control systems is needed to minimize operating costs. Research activities often do not require the high plant numbers needed in breeding, so growth chambers are common. The cost of these starts at tens of thousands of dollars, making them inaccessible for many projects and a barrier for implementing SB. In addition, the energy needed to provide extended supplementary lighting is significant. A cost–benefit analysis should be carried out to determine feasibility, although there are areas in which cost savings can be made. Supplemental LED lighting provides more efficient power usage and reduced heat as compared with other lighting types, such as SVLs. An estimate of the maintenance and energy costs associated with LED lighting is provided in the supplementary material of Watson et al. 1 . Investing in solar panels is another strategy to offset the increased energy costs, depending on availability and location. The investment in SB must be weighed in terms of the potential benefits to variety development and research output. As with most technologies, determining the optimal way to integrate SB into a crop improvement program needs careful consideration and may require significant re-design or restructure to the overall program. Before implementing such changes, computer simulations are a good way to evaluate the different breeding programs incorporating SB. Experimental design To set up an effective SB system, certain factors require careful consideration. These include the following. Lighting requirements Many lighting sources are appropriate for SB, including SVLs and LEDs 1 . Even incandescent lighting has been shown to accelerate flowering in clover 6 . However, selection should be based on the space available, the plant species and energy resources. 
For example, LED lighting may be preferred, owing to its energy efficiency, although simple incandescent lighting may be suitable within a smaller area, with sufficient cooling to counteract the higher heat output. Plant species may also differ in their responses to the different spectra of wavelengths emitted by different lighting sources, so this should be carefully considered. The lighting setup for glasshouses and growth chambers detailed in this protocol can act as a starting point but by no means represents the final conditions that may be optimal for another situation. The procedures outlined here have been successful for the species trialed, but a modified approach may be more suitable for another crop. We recommend mining the existing literature and studies on suitable light spectra (particularly with regard to blue/red ratios, red/far-red ratios and the proportional level of UV light that can be introduced into the system) for the crop and trait of interest. Initial light calibrations Requirements in terms of light quality and intensity for a particular species, a cultivar of that species and the desired phenotype should be determined before application on a large scale or use within an experiment. Several ‘dummy’ or ‘test’ growth cycles are recommended to initially assess the rate of growth and quality of the plants so that alterations can be made to enable optimal outcomes (Box 1 ). For this purpose, we recommend starting with the benchtop growth cabinet option—the costs of which are low enough to build several and trial, in parallel, different light combinations, photoperiods and temperatures to determine the optimal conditions to implement on a larger scale, such as in a glasshouse, for your crop and trait. Germplasm As detailed above, not all plant species (or indeed cultivars within a species) are amenable to extended photoperiods. Care should therefore be exercised in the selection of germplasm to be grown under SB, and appropriate modifications should be implemented to ensure optimal conditions for each species. End-use requirements The intended end use of the resultant plants can affect all aspects of the initial setup of the SB approach, such as glasshouse space and sowing density. For example, within an SSD program, large numbers of plants are grown within a defined space, so an appropriate sowing density must be determined. Conversely, growth of a small number of plants needed for a research experiment under variable lighting parameters is more appropriate for a small growth chamber experiment with flexible settings. Control conditions Before beginning an SB experiment, it is important to have replicates of your germplasm growing under the conditions you would normally use in your breeding program or institute. This will allow you to directly compare plant growth parameters (including generation time), operational costs (e.g., electricity) and plant quality. For popular varieties grown for many generations in the field or glasshouses, the control data may be readily available. Materials Biological materials Seeds. If the reader wishes to replicate any of our experiments with the same germplasm, information on where the relevant seed can be obtained is listed in Supplementary Table 7. Reagents Soil Critical Soil mixtures should be chosen based on the crop of interest. Soil mixtures that have previously been shown to work for certain crops in SB conditions are provided in Table 1 .
UQ Compost Mix (designed by K. Hayes; Central Glasshouse Services, UQ; composition outlined in Supplementary Table 4 )
JIC Cereal Compost Mix (prepared by Horticulture Services at the JIC; composition outlined in Supplementary Table 5 )
JIC Peat and Sand Mix (prepared by Horticulture Services at the JIC; composition outlined in Supplementary Table 6 )
Table 1 Soil mixes that have been demonstrated to be compatible with speed breeding using our approach
Nutrients
Vitafeed Balanced 1-1-1 (Vitax, )
Calcium nitrate (Sigma, cat. no. C1396)
Gibberellic acid (GA3; Sigma, cat. no. G7645)
Equipment Benchtop growth cabinet: hardware Critical This section provides an overview of the equipment required for constructing a small benchtop cabinet for SB, which can be used for small-scale pilot trials before investing in a larger system, such as a glasshouse. The cabinet has a footprint of 0.225 m² and comfortably accommodates eight 1-L square pots. To construct your low-cost growth cabinet, the components listed below are required.
12-V, 50-A DC power supply, 600 W (Amazon, cat. no. B072M7P7QJ)
12–5 V, 3-A DC/DC converter module (Amazon, cat. no. B00G890MIC)
USB extension cable, 30 cm (Amazon, cat. no. B002M8RVKA)
Ethernet extension cable, 30 cm (Amazon, cat. no. B077V421QH)
Arduino UNO (Amazon, cat. no. B00CGU1VOG)
Raspberry Pi 3 model B (CPC, cat. no. 2525225)
Raspberry Pi display 7-inch touchscreen (CPC, cat. no. 2473872)
Arduino base shield v2, SeeedStudio (CPC, cat. no. SC13822)
Benchtop growth cabinet: cabinet structure
Aluminum composite panel (757 × 307 × 3 mm, quantity = 6; Cut Plastics, cat. no. CP027-03)
Aluminum composite panel (757 × 357 × 3 mm; Cut Plastics, cat. no. CP027-03)
Aluminum composite panel (757 × 107 × 3 mm; Cut Plastics, cat. no. CP027-03)
Aluminum composite panel (757 × 757 × 3 mm; Cut Plastics, cat. no. CP027-03)
PVC foam board (757 × 157 × 3 mm, quantity = 2; Cut Plastics, cat. no. CP015-03)
PVC foam board (757 × 141 × 3 mm; Cut Plastics, cat. no. CP015-03)
PVC foam board (757 × 307 × 3 mm, quantity = 2; Cut Plastics, cat. no. CP015-03)
Perspex clear acrylic sheet (757 × 307 × 3 mm; Cut Plastics, cat. no. CP001-03)
OpenBeam (1,000 mm, quantity = 4; Technobots Online, cat. no. 4451-900)
OpenBeam (750 mm, quantity = 13; Technobots Online, cat. no. 4451-750)
OpenBeam (300 mm, quantity = 10; Technobots Online, cat. no. 4451-300)
Corner bracket, MakerBeam (quantity = 4; Technobots Online, cat. no. 4446-013)
L-joining plate, OpenBeam (quantity = 36; Technobots Online, cat. no. 4450-003)
T-joining plate, OpenBeam (quantity = 2; Technobots Online, cat. no. 4450-004)
Benchtop growth cabinet: lighting system
Full-spectrum grow light LED bulb (quantity = 16; Amazon, cat. no. 071J3BC1W)
E27 lamp holder (quantity = 16; Sinolec Components, cat. no. E27-SD04-2)
Solid-state relay, Grove, SeeedStudio (Mouser, cat. no. 713-103020004)
Benchtop growth cabinet: temperature and humidity control system
12-V, 10-A thermoelectric cooler (quantity = 3; Amazon, cat. no. B01M2ZBBVM)
Temperature and humidity sensor pro, Grove, SeeedStudio (CPC, cat. no. MK00343)
Relay, Grove, SeeedStudio (quantity = 4; CPC, cat. no. MK00330)
12-V cooling fan (50 mm; Amazon, cat. no. B00HPKC5MO)
Software: Arduino IDE (v.1.8.5, )
LED-supplemented glasshouse setup Critical This section provides an overview of the equipment required for setting up SB in a glasshouse using LED lamps for supplementary lighting.
Its efficacy is demonstrated for a range of crop species, along with some examples of how single-seed descent for wheat and barley can be carried out. Glasshouse: a well-located glasshouse with the required space and sufficient ambient lighting. We recommend fitting it with a temperature-control system and programmable lights. Controllable blinds are also an option if blocking out high irradiance on very sunny days is required. LED lamps: Although any kind of lighting system can be used to supplement the ambient lighting in the glasshouse, we recommend LED lamps above all because of the significant savings these provide in terms of maintenance and energy consumption. The glasshouse-based SB experiments detailed in our previous paper 1 were based on SVLs, but we have obtained similar results with LED lighting at both UQ and JIC. The LED supplemental lighting within glasshouses at JIC and UQ was supplied by the same company, Heliospectra. Details of both setups are provided, along with the results of experiments carried out at both locations. The lighting system configuration and the make and model of the lights for both locations are provided in the Equipment setup. SSD trays: For demonstration, at UQ, three seedling tray types with increasing sowing densities were used. The dimensions and volumes are given in Supplementary Table 8 . The soil media compositions are given in Supplementary Table 4 . Critical Energy tariffs can vary according to the time of day, depending on peak energy-usage patterns in the location. Substantial savings can be achieved by programming the dark period to coincide with the period of peak electricity tariffs (see the example below). Additional equipment needed Photosynthetically active radiation (PAR) meter: PAR is measured either as photosynthetic photon flux density (PPFD) or in lux. Any off-the-shelf PAR meter can be used, as long as it provides PPFD levels and relative wavelength composition. We used the MK350S Spectrometer from UPRtek and the Spectrum Genius Essence Lighting Passport light sensor (AsenseTek, Taiwan) at JIC and UQ, respectively. Energy meter: This allows measurement of the energy consumption for lighting and temperature maintenance, thereby providing insight into SB operational costs. Any off-the-shelf energy meter can be used for this purpose. To obtain energy consumption data for both the lights employed and the controlled environment rooms (CERs) at JIC, we used a clamp-on Current Transformer meter (Panoramic Power, Centrica Business Solutions) with the capacity to store and download data. The instrument provided half-hourly readings and as such was highly accurate in determining energy costs.
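The following back-of-the-envelope sketch illustrates the tariff point: it compares the daily lighting cost of the 22-h photoperiod when the 2-h dark period falls inside versus outside a peak-rate window. The lamp load, tariff rates and peak window are invented placeholders, not measured values from JIC or UQ; real figures should come from an energy meter such as the ones described above.

```python
def daily_lighting_cost(lamp_kw, light_hours, dark_start_h,
                        peak_window=(17, 19), peak_rate=0.45, offpeak_rate=0.20):
    """Estimate one day's lighting cost for an SB photoperiod.

    lamp_kw: total LED load in kW (placeholder value below).
    light_hours: photoperiod length (22 h for the SB recipe).
    dark_start_h: hour of day (0-23) at which the dark period begins.
    peak_window: half-open [start, end) hours charged at peak_rate per kWh;
                 both rates here are invented for illustration.
    """
    dark = {(dark_start_h + i) % 24 for i in range(24 - light_hours)}
    cost = 0.0
    for hour in range(24):
        if hour in dark:
            continue
        rate = peak_rate if peak_window[0] <= hour < peak_window[1] else offpeak_rate
        cost += lamp_kw * rate
    return cost

lamps = 10 * 0.6  # e.g. ten 600-W fixtures -> 6 kW (placeholder)
print("dark at 03:00 ->", round(daily_lighting_cost(lamps, 22, 3), 2))   # 29.4
print("dark at 17:00 ->", round(daily_lighting_cost(lamps, 22, 17), 2))  # 26.4
```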
Reagent setup Soil Soil mixtures that have previously been shown to work for certain crops in SB conditions are provided in Table 1 . Please refer to this table to pick the most appropriate mix for your crop, and prepare the mix using the necessary components in the required proportions. Details of the soil mixture composition, along with information on proportions and suppliers, can be found in Supplementary Tables 4 , 5 and 6 . Some components, for example, the wetting agent, may need to be adjusted depending on local watering regimes and practices. Critical The JIC Cereal Mix and Peat and Sand Mix composts must be prepared fresh in order to eliminate the potential for inconsistent fertilizer spread through the soil and a buildup of salts in the stored compost as the slow-release fertilizer starts to break down and leaches to the bottom. Nutrient feed Depending on the size of the pots and the type of soil, the plants may need a nutrient feed. If the pots are small (~100 mL), a single or fortnightly application of a liquid nutrient feed should be considered to prevent the plant leaves from turning yellow prematurely, with concomitant reduced vigor and seed set. In the JIC glasshouses and growth chambers, we have successfully used Vitafeed Balanced 1-1-1 from Vitax for wheat growing in high-density trays. Critical Owing to the rapid growth of plants under SB, fertilizer application and swift amelioration of nutrient deficiencies are of utmost importance. An appropriate slow-release fertilizer within the soil media is recommended for growth to maturity, and maintenance of soil pH is important to avoid restriction of nutrient absorption; e.g., a pH that is too acidic can inhibit calcium uptake. Foliar fertilizer applications may be required for rapid access of nutrients to the leaves, although some level of calcium deficiency is common. See Supplementary Fig. 2 for common symptoms of calcium deficiency. In our experience, for wheat, barley and Brachypodium , symptoms are more common at early growth stages during the period of prolific vegetative growth and are relieved at later growth stages. See the Troubleshooting section for specific suggestions on calcium applications. Equipment setup Benchtop growth cabinet: hardware Connect the display to the Raspberry Pi, using the provided cables as instructed by the manufacturer. The Arduino connects to the Raspberry Pi via USB ports. Sensors and relay modules are connected using the Grove system (SeeedStudio). Benchtop growth cabinet: cabinet structure Assemble the beam profile using the joining plates. Position the panels, boards and sheets before fully assembling each side. Benchtop growth cabinet: lighting system The photoperiod with the full-spectrum LED light bulbs is controlled by a solid-state relay connected to the Arduino microcontroller. Sixteen 57-mm-diameter holes must be drilled into one of the 757 × 307 × 3-mm aluminum composite panels to fit the E27 lamp holders. The lamp holders are then inserted and wired in parallel. Benchtop growth cabinet: temperature and humidity system Pre-assembled thermoelectric cooling modules are used to simplify the construction of the benchtop growth cabinet. These are composed of fans, aluminum heat sinks and Peltier elements. The cooling modules are controlled by relays connected to the Arduino. Airflow is used to control the humidity; i.e., the humidity sensor will trigger the 12-V fan to circulate air from outside the cabinet in order to reduce the humidity inside. Benchtop growth cabinet: software installation and setup The SB cabinet is controlled by three main subsystems: the Arduino microcontroller, which monitors and controls the environment according to a desired optimum; a Python daemon, which stores the current conditions and reads the expected conditions from a MongoDB database; and a graphical interface written in ReactJS, which allows the user to set up the expected conditions over a 24-h range. A minimal sketch of such a control loop is given below.
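The sketch wires the daemon's loop to the Box 1 recipe (22-h photoperiod with a 2-h dark period, 22 °C light/17 °C dark, humidity capped near 70%). It is illustrative only and is not the code from the GitHub repository: the MongoDB document layout and collection names are invented, and the Arduino's sensor reporting and relay switching are reduced to two callables.

```python
import time
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["sb_cabinet"]  # invented DB name

def expected_conditions(hour):
    """Setpoints for this hour; defaults follow the Box 1 SB recipe."""
    doc = db.schedule.find_one({"hour": hour}) or {}          # invented layout
    dark = hour in (0, 1)                                     # 2-h dark period
    return {
        "lights_on": doc.get("lights_on", not dark),
        "temp_c": doc.get("temp_c", 17.0 if dark else 22.0),
        "max_rh": doc.get("max_rh", 70.0),
    }

def control_loop(read_sensors, set_relay, period_s=60):
    """Poll sensors, compare with setpoints and switch relays.

    read_sensors: callable returning {'temp_c': ..., 'rh': ...}; in the real
    cabinet these values come from the Arduino over USB serial.
    set_relay: callable(name, on) driving the light, cooler and fan relays.
    """
    while True:
        want = expected_conditions(time.localtime().tm_hour)
        have = read_sensors()
        set_relay("lights", want["lights_on"])
        # 0.5 deg C hysteresis so the thermoelectric coolers do not chatter
        set_relay("cooler", have["temp_c"] > want["temp_c"] + 0.5)
        # venting with outside air is how the cabinet lowers humidity
        set_relay("fan", have["rh"] > want["max_rh"])
        db.log.insert_one({"t": time.time(), **have})  # store current state
        time.sleep(period_s)
```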
The circuit diagram for making the connections is provided in Supplementary Fig. 3 , and a photograph of the assembled cabinet is provided in Supplementary Fig. 4 . The cabinet has an available area of 0.225 m². For the lamps we have used, the spectrum is provided in Supplementary Fig. 5 , with the PPFD averaging ~120 µmol/m²/s at 16 cm above the base where the pots are kept, and ~320 µmol/m²/s and ~220 µmol/m²/s at distances of 10 cm and 20 cm, respectively, from the top of the cabinet where the lights are situated. The energy consumption of the mini cabinet is 6.24 kWh per day. A step-by-step guide for constructing the cabinet and installing the software is available at , along with troubleshooting tips. Caution The construction of the cabinet requires the use of sharp cutting and drilling tools that may cause physical injury if handled improperly. Many steps involve electrical components, which can cause fire if operated without being grounded. Ensure that all necessary safety steps are followed, and use personal protective equipment when constructing the cabinet. LED-supplemented glasshouse Table 2 provides the lighting arrangement in two glasshouse configurations. Both setups have been demonstrated to successfully support SB for the species listed. Table 2 LED-supplemented glasshouse setups for speed breeding at JIC and UQ A summary of the crops for which we have successfully demonstrated a shortening of generation time using SB, including information on which specific SB setups were used and where the reader can find more information on the key growth stages and other growth parameters of the crop grown under those conditions, is provided in Table 3 . Table 3 Speed breeding approaches that have been demonstrated for different species, along with pointers for locating the associated data Critical Weather and ambient light vary by location and season, especially at higher latitudes. Thus, for the glasshouse setups listed here, the light spectrum is determined not just by the presence of the LED lights but also by the ambient light. To ensure reproducibility, consider setting up your experiment in a way that mitigates these environmental variables. For example, use programmable lights that allow intensity modification based on sensor feedback, or use controllable blinds to regulate the photoperiod. Provision of a short dark period is recommended for optimal plant health. We highly recommend setting up a temperature monitoring and control system. Procedure Preparation of seed for sowing 1 To increase germination efficiency, some seeds may need a pretreatment, either by cold stratification (prolonged imbibition in the cold) or by scarification (physical or chemical weakening of the seed coat). If pretreatment is required, follow option A; if pretreatment is not required, follow option B. (A) Germination with pretreatment to break seed dormancy Timing 5–7 d Critical The requirements for germination pretreatments are specific to each species, and to accessions of that species, and should be determined on an individual basis. (i) Place dormant seed on moistened filter paper in a Petri dish to imbibe for 24 h and then chill at 4 °C for ~3 d (longer times may be required, depending on the level of dormancy) in the dark. In a large-scale scenario, directly sow seeds in high-density trays and place the trays in a cold (~4–5 °C) room. (ii) Leave the seeds at room temperature (~20–25 °C) for 1–3 d to germinate in the dark before transferring to soil.
In the large-scale scenario, trays can now be moved to the growing environment in the glasshouse (see the Troubleshooting section for tips on handling seed-germination issues). (iii) Grow the plants under the desired SB conditions (Box 1 ). (B) Germination without pretreatment to break seed dormancy Timing 3–5 d (i) If pretreatment is not required, germinate the seed in a Petri dish on moistened filter paper in the dark before transferring to soil. In a large-scale scenario, seed may be sown directly in soil in the glasshouse/growth chamber. Note that for some crop species, such as pea or grass pea, you must scarify the seeds by chipping off a tiny piece of the seed coat with a scalpel to facilitate better imbibition. Take care not to chip on or around the hilum of the seed, to avoid damaging the embryo. Critical step If seeds germinated in a Petri dish become too well established (i.e., develop green leaves) before transplanting to soil, the shift to SB conditions, especially the presence of intense light, can shock the plants, resulting in a strong hypersensitive response and possibly death. Take care to transplant them into soil early or, if they are already well established, transfer them to soil and place a mesh over the plants to reduce the light intensity while they adapt to the new environmental conditions (see Troubleshooting). (ii) Grow the plants under the desired SB conditions (Box 1 ). Monitoring of key growth stages and growth parameters, and phenotyping Timing variable Critical Timing depends on the crop, cultivar/genotype and SB setup used. Refer to Table 3 for guidance timelines in the associated Supplementary Tables. 2 To enable comparison to normal development, monitor the key growth stages of the plants. For many crops, for example, cereal crops 44 , canola 45 , quinoa 46 and legumes 47 , defined growth stages have been published. Take note of the heading times and the earliest time point at which viable seeds can be harvested. We also advise monitoring of the height and general physiology of the plants. Plants growing at such a rapid pace may start to exhibit micronutrient deficiencies. The manifestation of some of these deficiencies can interfere with plant phenotyping and reduce seed set. Some of these issues (particularly for wheat and barley) are highlighted in the Troubleshooting section. Experiments performed using an LED-supplemented glasshouse setup at the JIC involved an SB glasshouse compartment (i.e., 22-h day length, as detailed in Table 2 ) and a twin compartment with a 16-h day length to measure the effect and value of the increased day length. Growth parameters and harvest times are provided for both lighting regimes where available. Critical step For wheat and barley, we have previously demonstrated that SB conditions do not interfere with the phenotyping of a number of key traits 1 , and that variations of the SB approach can be used to rapidly screen wheat and barley for resistance to a number of major diseases or disorders (Table 4 ). Table 4 Protocol modifications for phenotyping diseases and disorders under speed breeding conditions Harvesting of the seed Timing variable Critical Timing depends on the crop, cultivar/genotype and SB setup used. Refer to Table 3 for guidance timelines in the associated Supplementary Tables. 3 Shortened generation times can also be achieved in some species by harvesting premature seed.
To do this, one should first wait until the seeds have set in the plant (indicated by filled seed in spikes for wheat, or filled pods for legumes). After this has occurred, either increase the temperature or withhold water from the plant to hasten seed ripening and drying. After a week of this stress application, harvest the seeds. For experiments performed using the LED-supplemented glasshouse setup (at the JIC), early harvest times are provided for both lighting regimes where available. If not indicated, the harvest time outlined is for harvest at physiological maturity. Critical step Freshly harvested seed may display dormancy. See the Troubleshooting section for more details on how to overcome this issue. Monitoring of energy use Timing: a few hours 4 At the end of one cycle, review the energy costs for your SB system. This is particularly useful in evaluating the generation time versus cost trade-off when multiple conditions have been tested concurrently (e.g., different day lengths). For the LED-supplemented glasshouse setup at JIC, there were two rooms set up concurrently with 16- and 22-h photoperiods. An example of the energy calculations for running each of these setups per month is given in Supplementary Table 9 , along with a comparison of how much it would cost to run a similar setup with SVL (a minimal scripted version of this kind of calculation is sketched at the end of this entry). Troubleshooting Troubleshooting advice can be found in Table 5 . Table 5 Troubleshooting table Full size table Timing Step 1A, germination with pretreatment: 5–7 d Step 1B, germination without pretreatment: 3–5 d Step 2, monitoring of key growth stages and growth parameters, and phenotyping: variable, depending on crop, cultivar/genotype and SB setup used; refer to Table 3 for guidance timelines in the associated Supplementary Tables Step 3, harvesting of the seed: variable, depending on crop, cultivar/genotype and SB setup used; refer to Table 3 for guidance timelines in the associated Supplementary Tables Step 4, monitoring of energy use: a few hours Anticipated results As demonstrated in our previous study, under SB conditions with a 22-h photoperiod, it should be possible to produce up to 6 generations per year in spring wheat and barley, and up to 4 and 4.5 generations per year in canola and chickpea, respectively 1 . However, it is important to remember that results are highly dependent on the crop species and can vary greatly between cultivars. The light quality, duration of the photoperiod and temperature regime also impact the extent to which the generation time is reduced. It should also be noted that ambient sunlight strength and duration will vary with location and season, thus resulting in differences in the rate of development. These factors, in addition to basic growing conditions, such as soil type, can be manipulated to obtain the optimal parameters for the crop of interest. The various procedures outlined above are designed to facilitate this process. Speed breeding using the benchtop cabinet The self-made, benchtop SB cabinet will facilitate identification of conditions that enable rapid cycling of wheat and pea, and by extension, the other crops listed (Supplementary Fig. 4 ). We demonstrated the efficacy of this cabinet design by growing rapid-cycling varieties of pea ( P. sativum cv. JI 2822) and wheat ( T. aestivum cv. USU Apogee), and showing the shortened time from seed to seed, without compromising the viability of early-harvested seed (Supplementary Tables 1 and 2 ).
These data are comparable to data from our previous study 1 in which we evaluated the same pea variety (JI 2822) under SB conditions using a commercial CER. Speed breeding using LED-supplemented glasshouses The time taken for reproductive development to occur for a range of crop species under the LED-fitted, SB glasshouse (JIC) is provided in Table 6 . Two extended photoperiods are represented to give an approximate expectation of the rapid development of these species under SB, and to give the reader an idea of what a 6-h difference in photoperiod can produce in a range of crops and cultivars. The much slower rate of development under control or regular glasshouse conditions without supplemental lighting was reported for some of these species in our previous study 1 . Table 6 Mean days to anthesis under speed breeding using LED-supplemented glasshouses at JIC, UK Full size table Plants grown under SB can be expected to look healthy (Fig. 1 ), with minor reductions in seed set (refer to Table 3 to view the related data for the crop of interest) and spike size (Supplementary Fig. 6 ) or pod size (Supplementary Figs. 7 and 8 ). In some crop species, the SB conditions can produce a slight reduction in height and/or internode length. In our experience, while working on M. truncatula and P. sativum , we found the plants grown under SB produced leaves with much smaller surface areas. Occasionally, micronutrient deficiencies manifest themselves because of the rapid growth and change in soil pH; some of these issues (in particular for wheat and barley) are highlighted in the Troubleshooting section. Despite efforts to optimize soil composition, there may be a cultivar that responds very poorly to the long photoperiod and high irradiance. Fig. 1: Accelerated plant growth and development under speed breeding compared to standard long-day conditions. a – k , Plants on the left are grown under SB (22-h photoperiod conditions), and plants on the right are grown under standard long-day (16-h photoperiod) conditions in LED-supplemented glasshouses at the John Innes Centre, UK. a , Winter-growth-habit wheat ( T. aestivum cv. Crusoe) at 112 d after sowing (DAS), including 12 d of growth under 16-h photoperiod conditions followed by 56 d of vernalization at 6 °C with an 8-h photoperiod; b , spring wheat ( T. aestivum cv. Cadenza) at 57 DAS; c , spring barley ( H. vulgare cv. Manchuria) at 35 DAS (scale bar, 20 cm; applies to a – c ); d , grass pea ( L. sativus cv. Mahateora) at 35 DAS (red arrows indicate positions of flowers); e , B. distachyon (accession Bd21) at 34 DAS; f , pea ( P. sativum accession JI 2822) at 34 DAS (red arrows indicate pea pods) (scale bar, 20 cm; applies to d – f ); g , quinoa ( C. quinoa accession QQ74) at 58 DAS; h , Brassica oleracea (line DH1012) at 108 DAS; i , Brassica napus (line RV31) at 87 DAS; j , Brassica rapa (line R-0-18) at 87 DAS; k , diploid oat ( A. strigosa accession S75) at 52 DAS (scale bar, 60 cm; applies to g – k ). All plants were sown in October or November 2017, except for the quinoa, which was sown in February 2018. Full size image Troubleshooting We have previously demonstrated that wheat, barley and canola plants grown under SB are suitable for crossing and phenotyping a range of adult plant traits 1 . That said, complex phenotypes such as yield and abiotic stress (heat or drought stress) resilience are best evaluated in the field, particularly for breeding objectives.
We have also demonstrated how SB can be combined with transformation of barley to speed up the process of obtaining transformed seeds 1 . Speed breeding in single-seed descent programs In breeding programs, SSD is often an important step in cultivar development that requires high-density plantings. The SB approach provided for glasshouses is ideal for SSD programs, particularly cereal crops. Increasing sowing density under SB can enable rapid cycling of many lines with healthy plants and viable seed. Figure 2 shows an example of the plant condition, spike lengths and seed sizes that can be expected at various sowing densities in SB. Under the UQ-GH-LED (LED-supplemented glasshouse setup, UQ, Australia (Equipment setup, ‘LED-supplemented glasshouse’)) approach, at a density of 1,000 plants/m 2 , up to six generations of wheat and barley can be expected per year (Supplementary Tables 10 , 11 , 12 and 13 ). At higher densities, plant height and seed numbers can be reduced, owing to the greater competition and low soil volume. Despite this, even at the highest sowing density shown here, all plants produced a spike with at least enough seeds to perform SSD, and in most cases many more. Large differences in the speed of development can be achieved by extending the photoperiod from 16 to 22 h. Under the JIC-GH-LED (LED-supplemented glasshouse setup, JIC, UK (Equipment setup, ‘LED-supplemented glasshouse’)) approach, spring and durum wheat were >10 d faster in development with an additional 6 h of photoperiod. Table 7 provides the approximate development times for several cereal crops at a range of sowing densities appropriate for intensive SSD. The SSD SB approach was performed under two extended photoperiod and temperature regimes at either JIC or UQ. These results demonstrate that plants can be grown at high densities under SB conditions to produce plants suitable for effective and resource-efficient generation turnover in SSD programs. Fig. 2: Single-seed descent sowing densities of spring wheat (bread and durum) and barley. All plants were grown under an LED-supplemented glasshouse setup at the JIC, UK, or the UQ, Australia. a , b , Durum wheat ( T. durum cv. Kronos) grown under the LED-supplemented glasshouse setup, JIC, in 96-cell trays: 43 d after sowing, under a 16-h photoperiod ( a , left); 43 d after sowing, under 22-h photoperiod ( a , right); 79 d under a 16-h photoperiod ( b , left); 79 d under a 22-h photoperiod ( b , right). Scale bar, 20 cm (applies to a , b ). c , Spring wheat ( T. aestivum cv. Suntop) grown under an LED-supplemented glasshouse setup, at the UQ, at 37 d after sowing: plants in a 30-cell tray (left); plants in a 64-cell tray (center); plants in a 100-cell tray (right). d , Barley ( H. vulgare cv. Commander) grown under an LED-supplemented glasshouse setup, at the UQ, at 34 d after sowing: plants in a 30-cell tray (left); plants in a 64-cell tray (center); plants in a 100-cell tray (right). Scale bar, 20 cm (applies to c , d ). e , Mature spikes of spring wheat ( T. aestivum cv. Suntop) grown under LED-supplemented glasshouse setup, at the UQ: spikes from plants in a 30-cell tray ( e , left); spikes from plants in a 64-cell tray ( e , center); spikes from plants in a 100-cell tray ( e , right). f , Mature spikes of barley ( H. vulgare cv. Commander) grown under an LED-supplemented glasshouse setup, at the UQ: spikes from plants in a 30-cell tray ( f , left); spikes from plants in a 64-cell tray ( f , center); spikes from plants in a 100-cell tray ( f , right). 
Scale bar, 3 cm (applies to e , f ). g – i , Mature seeds of spring wheat ( T. aestivum cv. Suntop) grown under an LED-supplemented glasshouse setup, at the UQ: seeds from plants in a 30-cell tray ( g ); seeds from plants in a 64-cell tray ( h ); seeds from plants in a 100-cell tray ( i ). j – l , Mature seeds of barley ( H. vulgare cv. Commander) grown under an LED-supplemented glasshouse setup, at the UQ: seeds from plants in a 30-cell tray ( j ); seeds from plants in a 64-cell tray ( k ); seeds from plants in a 100-cell tray ( l ). Scale bar, 1 cm (applies to g – l ). Full size image Table 7 Mean days to reproductive stages of single-seed descent sowing densities under speed breeding using the JIC-GH-LED or UQ-GH-LED approach Full size table Reporting Summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Data and code availability statement The authors confirm that all relevant data are included in the paper and/or its Supplementary Information files as summary statistics. Any request for raw data collected by researchers should be made to the corresponding authors. All relevant code required for running the small customized SB growth cabinet is provided at the public GitHub link: . | Plant speed breeding could be part of the solution to minimise the devastating effects of drought and climate change on crops in the future, according to a University of Queensland researcher. UQ Queensland Alliance for Agriculture and Food Innovation (QAAFI) Senior Research Fellow Dr. Lee Hickey said the technique can enable researchers and plants breeders to deliver more tolerant varieties of crops to farmers sooner. "It can take up to 20 years to develop an improved crop variety, but the speed breeding technique can slash this time because it enables growing up to six plant generations in a single year, rather than just one in the field," Dr. Hickey said. "This technique works for a range of crops like wheat, barley, chickpea and canola, and uses specially modified glasshouses fitted with LED lighting to grow plants under extended photoperiods – accelerating crop research and the development of more robust plant varieties through rapid cross breeding and generation advance. "With scientists from the John Innes Centre in the UK, we've now taken the next step in our research and developed the protocols to scale-up speed breeding to large glasshouse facilities as well as instructions on how to build your own low-cost speed breeding cabinet. "Information on speed breeding has been in high demand, so by sharing our protocols it means researchers and plant breeders around the world can help tackle the impacts of climate change by accelerating their research or development of better crops, even on a shoestring budget." Credit: University of Queensland Climate change is presenting a huge challenge for food production globally – currently many farmers in Australia and Europe are experiencing severe crop losses due to drought and heat. With extreme weather expected to be more common in the future, there is a need to develop drought-resistant and more tolerant varieties of crops such as wheat, barley, oats, canola and chickpea rapidly. John Innes Centre wheat scientist Dr. Brande Wulff said the international team's protocols could be adapted by researchers to work in vast glass houses or in scaled-down inexpensive desktop growth chambers. 
"We built a miniature speed breeding cabinet with bits and pieces we got off the internet and it was very cheap," he said. "We know that more and more institutes across the world will be adopting this technology and by sharing these protocols we are providing a pathway for accelerating crop research." The paper has been published in Nature Protocols. | 10.1038/s41596-018-0072-z |
Medicine | 25-year study shows that incidence of type 1 diabetes is increasing by more than 3 percent per year in Europe | Christopher C. Patterson et al. Trends and cyclical variation in the incidence of childhood type 1 diabetes in 26 European centres in the 25 year period 1989–2013: a multicentre prospective registration study, Diabetologia (2018). DOI: 10.1007/s00125-018-4763-3 Journal information: Diabetologia | http://dx.doi.org/10.1007/s00125-018-4763-3 | https://medicalxpress.com/news/2018-11-year-incidence-diabetes-percent-year.html | Abstract Aims/hypothesis Against a background of a near-universally increasing incidence of childhood type 1 diabetes, recent reports from some countries suggest a slowing in this increase. Occasional reports also describe cyclical variations in incidence, with periodicities of between 4 and 6 years. Methods Age/sex-standardised incidence rates for the 0- to 14-year-old age group are reported for 26 European centres (representing 22 countries) that have registered newly diagnosed individuals in geographically defined regions for up to 25 years during the period 1989–2013. Poisson regression was used to estimate rates of increase and test for cyclical patterns. Joinpoint regression software was used to fit segmented log-linear relationships to incidence trends. Results Significant increases in incidence were noted in all but two small centres, with a maximum rate of increase of 6.6% per annum in a Polish centre. Several centres in high-incidence countries showed reducing rates of increase in more recent years. Despite this, a pooled analysis across all centres revealed a 3.4% (95% CI 2.8%, 3.9%) per annum increase in incidence rate, although there was some suggestion of a reduced rate of increase in the 2004–2008 period. Rates of increase were similar in boys and girls in the 0- to 4-year-old age group (3.7% and 3.7% per annum, respectively) and in the 5- to 9-year-old age group (3.4% and 3.7% per annum, respectively), but were higher in boys than girls in the 10- to 14-year-old age group (3.3% and 2.6% per annum, respectively). Significant 4 year periodicity was detected in four centres, with three centres showing that the most recent peak in fitted rates occurred in 2012. Conclusions/interpretation Despite reductions in the rate of increase in some high-risk countries, the pooled estimate across centres continues to show a 3.4% increase per annum in incidence rate, suggesting a doubling in incidence rate within approximately 20 years in Europe. Although four centres showed support for a cyclical pattern of incidence with a 4 year periodicity, no plausible explanation for this can be given. Introduction The increasing incidence of childhood type 1 diabetes has been well documented both in Europe, with an estimated annual increase of 3.9% (95% CI 3.6%, 4.2%) during the period 1989–2003 [ 1 ], and worldwide, with an estimated annual increase of 2.8% (95% CI 2.4%, 3.2%) in the period 1990–1999 [ 2 ]. Recent reports have, however, suggested a slowing or stabilisation in the rate of increase. In the USA, pooled data from five centres for children and adolescents under 20 years of age indicated a 1.8% (95% CI 1.0%, 2.6%) annual increase during 2002–2012 after adjustment for age, sex and race or ethnic group [ 3 ], and a similar rate of increase of 1.3% (95% CI 0.0%, 2.5%) has been reported for the Canadian province of British Columbia in the period 2002–2013 [ 4 ].
In Australia, a non-significant annual increase of 0.4% (95% CI −0.1%, 0.9%) was reported in the under-15-year-old population during the period 2000–2011, although a significant increase of 1.2% (95% CI 0.4%, 2.1%) was observed in the 10- to 14-year-old age group [ 5 ]. Within Europe, no increase was found in Sweden during the period 2005–2007 despite a prolonged period of uniform increase during the previous 15 years [ 6 ]. Very similar levelling incidence rates, beginning at about the same time and with longer periods of observation, were subsequently reported in two other high-incidence Scandinavian countries, Finland [ 7 ] and Norway [ 8 ]. In contrast, a report from Zhejiang province in the low-incidence region of China described a very rapid 12.0% (95% CI 7.6%, 16.6%) increase in annual incidence rate among those aged under 20 years during the period 2007–2013 [ 9 ]. There have also been reports in the literature of a cyclical variation in year-to-year incidence rates. The earliest report was from the Yorkshire regional registry in England during the period 1978–1990, which described a marked epidemic pattern with peaks at 4 year intervals [ 10 ]. A subsequent brief report from a neighbouring area of north-east England in the period 1990–2007 described a 6 year cyclical pattern with an amplitude of ±25% [ 11 ], but there is no established register in the region and no support for the claim of high ascertainment. A sinusoidal cyclical pattern with peaks observed every 5 years and an amplitude of ±14% has also been reported from Western Australia for the period 1985–2010 [ 12 ], and was subsequently replicated in an Australia-wide analysis during the period 2000–2011 [ 5 ]. A report from five regions of Poland during the period 1989–2012 using Fourier series methods found a 5.33 year periodicity in rates, with an amplitude of ±8% [ 13 ]. To help clarify the recent trends in European incidence rates, an analysis of EURODIAB registry data from over 84,000 children in 26 European centres representing 22 countries is presented for the 25 year period 1989–2013, with separate estimates of incidence rate increases derived in each of five subperiods. This dataset also provides an excellent opportunity to investigate the claims of cyclical variation in incidence rates. Methods The establishment of the registries and case definition used has previously been described [ 14 ]. Type 1 diabetes was defined on the basis of a clinical diagnosis made by a physician, omitting cases that were secondary to other conditions (e.g. cystic fibrosis or high-dose corticosteroid treatment). Registries attempt to capture prospectively all newly diagnosed individuals in a geographically defined region. Primary and secondary sources of ascertainment were recorded for each child, and these were used to estimate completeness by capture–recapture methodology. The completeness findings for 1989–2008 have previously been reported as being considerably in excess of 90% in most of the registries (as reported by ESM Table 2 from the 20 year report) [ 15 ]. The geographical coverage of the 26 registries is shown in Fig. 1 and represents 23% of the estimated European childhood population in 2011 (excluding Belarus, Ukraine and the Russian Federation). Ethics approval was obtained by individual centres where required. Fig. 1 Map of 26 participating EURODIAB centres (whole nations unless a region is specified). Administrative boundaries: ©EuroGeographics 2018; adapted with permission. 
FYR, Former Yugoslav Republic Full size image Incidence rates were obtained by dividing the numbers of registered children by annual population estimates. Standardisation of rates was obtained by the direct method with a standard population comprising equal numbers in each of six subgroups defined by age group (0–4 years, 5–9 years and 10–14 years) and sex. Standard errors for the directly standardised rates were also calculated [ 16 ]. Trends in annual incidence rates in each country were investigated in the 25 year period using Poisson regression incorporating an adjustment for age group and sex. Comparisons of trends between age groups and sexes were obtained within each country by incorporating interactions into the Poisson regression model. The Joinpoint regression analysis program version 4.2 (Statistical Methodology and Applications Branch and Data Modeling Branch, National Cancer Institute, Bethesda, MD, USA) was used to fit segmented regression lines to the logarithmically transformed directly standardised incidence rates, taking account of their standard errors. Pooled estimates of rates of increase across all 26 centres were obtained using a mixed effects Poisson regression model with centre treated as a random effect and age and sex as fixed effects. Motivated by reports in the literature of 4, 5 and 6 year cycles in incidence rate, sine and cosine terms representing such cycles were added to Poisson regression models for annual age-/sex-specific incidence rates, along with terms for age group and sex as well as the segmented log-linear trends with year as identified by the Joinpoint analysis. The sine and cosine terms are similar to those described for the study of seasonal variation in month-to-month counts [ 17 ] but were adapted for the detection of cyclical variation in yearly rates. Statistical analyses were performed in SPSS version 24 (IBM Corp, Armonk, NY, USA) and Stata release 14 (StataCorp, College Station, TX, USA). Unless otherwise stated, hypothesis testing was performed at the 5% significance level ( p < 0.05). Results Ascertainment rates remained in excess of 90% for most centres, although data were not available for all of these (see electronic supplementary material [ESM] Table 1 ). Table 1 shows the total numbers of children registered during the 25 year period 1989–2013 in each of the 26 centres, and the age- and sex-standardised incidence rates (with standard errors) in the 5 year subperiods 1989–1993, 1994–1998, 1999–2003, 2004–2008 and 2009–2013. The age- and sex-specific incidence rates for each period used in the calculations are shown in ESM Table 2 . Table 1 Incidence rates per 100,000 person-years (with standard errors) standardised for age group and sex in 5 year periods for 26 EURODIAB centres Full size table As illustrated in Fig. 2 , the trends in age-standardised rates differed little between the sexes. Two of the centres (Denmark and Germany–North Rhine-Westphalia) expanded their geographical coverage in 1999, and the lines for these centres are therefore shown with a break at that point, although in both cases the degree of any discontinuity appears to be minimal. In a preliminary analysis, incidence rate increases were estimated using Poisson regression analysis assuming a constant rate of increase throughout the period. Fig. 2 Trends in age-standardised incidence rates, plotted on a logarithmic scale, by sex for type 1 diabetes in 26 European centres during 1989–2013. Blue lines, boys; red lines, girls. 
Breaks are shown for Denmark and Germany (North Rhine-Westphalia) between 1998 and 1999 because of increased coverage of these registers, but any discontinuities appear to be very minor. Macedonia (FYR), Former Yugoslav Republic of Macedonia Full size image Figure 3 shows that the rate of increase was highest in the Poland–Katowice centre (6.6% per annum) and the lowest in the Spain–Catalonia centre (0.5% per annum). Except for the Ireland and Italy–Marche centres, all rates of increase were significantly greater than zero. A significant inverse relationship was found between the rate of increase in each centre and its directly standardised rate during the entire period (Spearman’s rank correlation coefficient, r s = −0.45, p = 0.02), indicating that the percentage increase in rate tended to be lower in centres with higher rates. A comparison of rates of increase between the sexes within each centre revealed differences in three centres, each showing a significantly higher rate of increase in boys than girls. A comparison of rates of increase between age groups within each centre revealed differences in nine centres, and in six of the nine the highest rate of increase was found in the 0- to 4-year-old age group. Full details are available in ESM Table 3 . Fig. 3 Estimated rates of annual increase in type 1 diabetes in 26 European centres. Rates of increase in individual centres were derived from Poisson regression analyses with adjustment for age, sex, age × sex interaction and inclusion of a log-linear term for year in the model. The overall pooled estimate was derived from a Poisson regression with centre as a random effect. FYR, Former Yugoslav Republic; N., North Full size image Mixed effects Poisson regression provided estimated rates of increase in the pooled data from the 26 centres, as shown in Table 2 . Overall, the annual rate of increase was estimated to be 3.4% (95% CI 2.8%, 3.9%). Rates of increase were similar in boys and girls in the 0- to 4-year-old age group (3.7% and 3.7% per annum, respectively) and in the 5- to 9-year-old age group (3.4% and 3.7% per annum, respectively), but were higher in boys than girls in the 10- to 14-year-old age group (3.3% and 2.6% per annum, respectively). The estimates of overall rate of increase by period suggested a slowing in 2004–2008, but the rate of increase appeared to have almost returned to previous levels in the 2009–2013 period. Table 2 Annual increases in incidence rate over a 25 year period pooled over centres as estimated by mixed effects Poisson regression in subgroups defined by age group, sex and time period Full size table The fitted Joinpoint segmented regression analyses for each centre are presented in ESM Fig. 1 . The best fit for 18 of the 26 centres throughout the period was a log-linear increase in the age-standardised rate. Six centres showed more rapid increases in an early period followed by lower rates of increase in a later period. In two Central European centres (Czechia and Poland–Katowice), the change took place in 2002, at roughly the same time as in two UK centres (UK–Oxford in 2000 and UK–Northern Ireland in 2003). In two Scandinavian centres (Finland and Norway), the levelling off took place a little later, in the years 2005 and 2007, respectively. Only in a single centre (Lithuania) was an initially low rate of increase followed by a period after 1996 with a higher rate of increase. 
The final centre (Germany–Baden-Württemberg) showed a more complex pattern, with steady rates of increase in the early and late part of the 25 year period separated by a short period of more rapid increase in 2001–2004. Poisson regression results provided most support for a 4 year periodicity, with four centres giving likelihood ratio tests that attained significance at the reduced p < 0.01 level (to allow for multiple testing) compared with none for a 5 year periodicity and two for a 6 year periodicity (tests of significance summarised in ESM Table 4 ). Plots of the observed age-standardised annual incidence rates and the fitted rates for 4 year cycles are shown in Fig. 4 for these four centres. One of the four centres showed its most recent peak in fitted incidence rate in 2011 (Switzerland, with an amplitude of ±10% superimposed on the log-linear increasing trend), while the three remaining centres showed the most recent peaks in 2012 (Germany–North Rhine-Westphalia with an amplitude of ±5%, Germany–Saxony with an amplitude of ±15% and UK–Oxford with an amplitude of ±9%). Fig. 4 Observed (continuous blue line) and fitted (red dashed line) standardised incidence rates (per 100,000 person-years) obtained by Poisson regression in four centres that showed significant ( p < 0.01) 4 year periodicity when superimposed on long-term Joinpoint segmented regression trends: ( a ) Germany–North Rhine-Westphalia; ( b ) Germany–Saxony; ( c ) Switzerland; ( d ) UK–Oxford Full size image Discussion Our analyses of individual centre results confirmed the recent slowing of incidence rate increases in some high-incidence areas such as Finland [ 7 ] and Norway [ 8 ], but using only data from Stockholm County we were unable to detect the same pattern that had previously been reported from Sweden [ 6 ]. Two of the three centres from the UK, another country with high rates, also showed reducing rates of increase, although these seemed to have begun a few years earlier than in Scandinavia. Our pooled estimates suggest that, despite some high-risk countries showing some slowing in the rate of increase in recent years, the overall pattern is still one of an approximately 3% per annum increase, although with a possible temporary slowing in the 2004–2008 period. As previously noted in our 15 year analysis, the rate of increase in girls aged 10–14 years is less marked than in other age/sex subgroups [ 1 ]. Our analysis shows that, in the majority of centres, a steady log-linear increase in rates with time provided a good description of the temporal changes, with only a few (mainly high-incidence) areas showing some evidence of non-uniformity. The cyclical pattern in incidence observed in four of our 26 centres is consistent with the earliest report of a 4 year cyclical incidence pattern [ 10 ], but subsequent reports have described 5 year or 6 year periodicities [ 11 , 12 , 13 ], for which we found little support in our data. No clear rationale for periodicity has yet been proposed and, to the authors’ knowledge, no climatological factor [ 18 ], viral infection [ 19 ] or other environmental exposure has yet been firmly established that exhibits such a cyclical pattern. Since autoimmunity and progressive beta cell destruction typically start long before the clinical diagnosis of type 1 diabetes, the periodicity in diagnosis could be indicative of cycles of infectious disease that accelerate the diagnosis rather than initiate the disease. 
Regular cycles of infectious diseases are well known from classic work done before population-wide vaccination for measles, an extremely contagious viral disease of childhood; this research showed that, in an otherwise stable population, epidemic cyclicity depends on community size [ 20 ]. It is also unclear why only a small proportion of the 26 centres showed this periodicity and, although we acknowledge that power may be limited in smaller centres, it was not apparent in many of the largest centres that might be expected to have had a high power to detect it. This could perhaps suggest that it may have more localised origins. What determines this localisation remains enigmatic, as cyclical patterns were absent in Austria, Czechia and Germany–Baden-Württemberg, three large registers each with neighbouring areas where pronounced cyclical patterns were noted. It is possible that not only the size of the population, but also its spatial structure (i.e. the size of the communities, and their mutual links) may play an important role in the ability of the hypothetical infectious accelerator to be transmitted [ 21 ]. To our knowledge, among autoimmune conditions, only incidence cycles in juvenile idiopathic arthritis have been correlated to cycles of serologically confirmed microbial agents—in a Canadian study, peak incidences of arthritis were concurrent with peaks of Mycoplasma pneumoniae , whereas no such phenomenon was noted for the incidence of seronegative (i.e. non-immune mediated) spondyloarthropathies [ 22 ]. The recent report of a twofold risk of type 1 diabetes diagnosed by the age of 30 years among those with laboratory-confirmed pandemic influenza A (H1N1) [ 23 ] may stimulate interest in less consistent patterns of incidence peaks in type 1 diabetes since localised seasonal influenza epidemics (as opposed to much rarer pandemics) can occur at irregular intervals [ 24 ]. Most of the participating registers have maintained their completeness of coverage at levels in excess of 90% in the most recent 5 year period, but these estimates of completeness rely on an assumption of independence in the primary and secondary sources that is very difficult to verify. As more sophisticated information systems for drug prescribing and clinical management become available, it seems likely that the traditional approach based on notification of individual new diagnoses will give way to more automated approaches that take advantage of these information systems. Although it could be argued that the diagnosis of type 1 diabetes should ideally be confirmed by the presence of one or more specific autoimmune markers [ 25 ], this is seldom done in clinical practice, and we have therefore continued to use a pragmatic definition of type 1 diabetes based on clinical judgement. A UK study found that all but 8 (3%) of 256 clinically diagnosed cases of type 1 diabetes in individuals aged 20 years or younger were positive for one or more of four antibodies [ 26 ], but the case for routine antibody testing at diagnosis is not compelling [ 27 ]. Individuals diagnosed before 6 months of age now tend to be routinely investigated for monogenic forms of the disease [ 28 ], but the number of such cases is very small. Findings in the literature on whether or not type 2 diabetes is becoming more common in children and adolescents are inconsistent [ 29 , 30 , 31 ], but the distinction between the two types of diabetes is generally not difficult in the paediatric age group. 
Furthermore, European studies [ 30 , 31 , 32 , 33 ] confirm that the rate of type 2 diabetes is a small fraction of that of type 1 diabetes, and we do not therefore feel that misclassification of type 2 diabetes represents a serious challenge to the validity of our findings. The use of mixed effects Poisson regression, in which age group and sex are considered as fixed effects but centre is treated as a random effect, gives similar estimates of the increase in incidence rate to the more conventional fixed effects analysis that we have used in previous analyses; however, confidence limits for the mixed effects model tend to be rather wider and should give a fairer reflection of uncertainty in the estimates of incidence rate increase. Taking into account the uncertainty associated with our overall incidence rate increase of 3.4% (95% CI 2.8%, 3.9%), we may expect to see a doubling in European incidence in between 18 and 25 years if the trends evident in the last 25 years are maintained. The steadily increasing number of children being diagnosed with this chronic disease, which is associated with well-documented, life-long increases in morbidity and mortality, has important implications for those planning and delivering healthcare. The limited success in identifying either environmental causes or gene–environment interactions that could eventually lead to disease prevention means that efforts must continue to improve quality of care to help reduce long-term complications and diabetes-related deaths. Key to this is the improvement in glycaemic control that will be achieved not only by more sophisticated methods of insulin delivery, but also by an increased investment in services to support well-trained and dedicated care teams in sufficient numbers to meet the growing needs of this group of children and their families. The EURODIAB childhood type 1 diabetes registers, with their wide, population-based coverage of European regions of differing incidence, and their high levels of case ascertainment, will continue to provide a valuable source of data for monitoring the future incidence of childhood type 1 diabetes. Data availability Much of the data generated or analysed during this study are included in this article and its accompanying electronic supplementary material (ESM) files. Requests for further data should be sent to the corresponding author. | New research published in Diabetologia (the journal of the European Association for the Study of Diabetes [EASD]) shows that new cases of type 1 diabetes are rising by 3.4% per year across Europe. If this trend continues, incidence would double in the next 20 years. The study is coordinated by Professor Chris Patterson, Centre for Public Health, Queen's University Belfast, UK. Against a background of a near-universally increasing incidence of childhood type 1 diabetes, recent reports from some countries suggest a slowing in this increase. Occasional reports also describe cyclical variations in incidence, with periodicities of between 4 and 6 years. In this study, the authors analysed age/sex-standardised incidence rates for the 0- to 14-year-old age group, reported for 26 European centres (representing 22 countries) that have registered newly diagnosed individuals in geographically defined regions for up to 25 years during the period 1989-2013. The data showed significant increases in incidence in all but two small centres, with a maximum rate of increase of 6.6% per annum in a Polish centre. 
Several centres in high-incidence countries, including Finland and Norway, along with 2 centres in the UK, showed reducing rates of increase in more recent years. Despite this, a pooled analysis across all centres revealed a 3.4% per annum increase in incidence rate, although there was some suggestion of a reduced rate of increase in the 2004-2008 period, where it fell to 1.1% per annum (see table 2, full paper). Rates of increase were similar in boys and girls in the 0- to 4-year-old age group (3.7% and 3.7% per annum, respectively) and in the 5- to 9-year-old age group (3.4% and 3.7% per annum, respectively), but were higher in boys than girls in the 10- to 14-year-old age group (3.3% and 2.6% per annum, respectively). Significant 4-year periodicity was detected in four centres (Germany-Saxony, Germany North Rhine-Westphalia, Switzerland, and UK Oxford) with three centres showing that the most recent peak in fitted rates occurred in 2012. However, the authors could find no plausible reason for this cyclical 4-year variation. The authors say: "The steadily increasing number of children being diagnosed with this chronic disease, which is associated with well-documented, life-long increases in morbidity and mortality, has important implications for those planning and delivering healthcare. The limited success in identifying either environmental causes or gene-environment interactions that could eventually lead to disease prevention means that efforts must continue to improve quality of care to help reduce long-term complications and diabetes-related deaths." They add: "Key to this is the improvement in blood sugar control that will be achieved not only by more sophisticated methods of insulin delivery, but also by an increased investment in services to support well-trained and dedicated care teams in sufficient numbers to meet the growing needs of this group of children and their families." | 10.1007/s00125-018-4763-3 |
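For readers who want the two main statistical devices of the Methods above in explicit form, the following is a standard reconstruction in our own notation (the paper states these methods in words; the equations are not reproduced from it). With the six age/sex strata weighted equally ( $w_i = 1/6$ ), the directly standardised rate and its Poisson-based standard error are

$$\mathrm{DSR} = \sum_{i=1}^{6} w_i \, \frac{d_i}{n_i}, \qquad \mathrm{SE}(\mathrm{DSR}) = \sqrt{\sum_{i=1}^{6} w_i^{2} \, \frac{d_i}{n_i^{2}}},$$

where $d_i$ is the number of newly diagnosed children and $n_i$ the person-years at risk in stratum $i$. The cyclical analysis adds a sine/cosine pair of period $T$ (4, 5 or 6 years) to the segmented log-linear trend $f(t)$ identified by Joinpoint, with person-years entering as an offset:

$$\log \mu_t = \log n_t + \alpha + f(t) + \gamma \sin\!\left(\frac{2\pi t}{T}\right) + \delta \cos\!\left(\frac{2\pi t}{T}\right).$$

The pair $(\gamma, \delta)$ is tested jointly with a 2-degree-of-freedom likelihood ratio test, and the fitted cycle has relative amplitude $e^{\sqrt{\gamma^{2}+\delta^{2}}} - 1$, which corresponds to the '±' percentages quoted for the four centres with significant 4 year periodicity.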
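To make the headline 3.4% per annum figure concrete, below is a minimal sketch of the kind of single-centre Poisson trend regression described above, written in Python with statsmodels and simulated counts standing in for registry data. The authors' analyses were performed in SPSS and Stata with mixed-effects pooling across centres; none of this is their code, and the person-year denominators are hypothetical.

```python
# Poisson regression of annual case counts on calendar year, with log
# person-years as offset: exp(slope) - 1 is the annual rate of increase.
# Simulated data only - NOT the EURODIAB registry counts.

import numpy as np
import statsmodels.api as sm

years = np.arange(1989, 2014)                    # 25 registration years
person_years = np.full(years.size, 2_000_000.0)  # hypothetical denominators
rng = np.random.default_rng(0)
true_rate = 15e-5 * 1.034 ** (years - 1989)      # 3.4%/yr increase, per text
cases = rng.poisson(true_rate * person_years)    # simulated annual counts

X = sm.add_constant(years - years[0])            # intercept + centred year
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(person_years)).fit()

annual_increase = np.exp(fit.params[1]) - 1.0
doubling_time = np.log(2.0) / np.log1p(annual_increase)
print(f"Estimated increase: {100 * annual_increase:.1f}% per annum")
print(f"Implied doubling time: {doubling_time:.0f} years")
```

Substituting the paper's confidence limits (2.8% and 3.9% per annum) into the same doubling-time formula, ln 2 / ln(1 + r), gives approximately 25 and 18 years, matching the 18–25 year range quoted in the Discussion.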
Medicine | Prolonged oxygen exposure causes long-term deficits on hippocampal mitochondrial function in newborns | Manimaran Ramani et al. Early Life Supraphysiological Levels of Oxygen Exposure Permanently Impairs Hippocampal Mitochondrial Function, Scientific Reports (2019). DOI: 10.1038/s41598-019-49532-z Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-019-49532-z | https://medicalxpress.com/news/2019-10-prolonged-oxygen-exposure-long-term-deficits.html | Abstract Preterm infants requiring prolonged oxygen therapy often develop cognitive dysfunction in later life. Previously, we reported that 14-week-old young adult mice exposed to hyperoxia as newborns had spatial and learning deficits and hippocampal shrinkage. We hypothesized that the underlying mechanism was the induction of hippocampal mitochondrial dysfunction by neonatal hyperoxia. C57BL/6J mouse pups were exposed to 85% oxygen or room air from P2–P14. Hippocampal proteomic analysis was performed in young adult mice (14 weeks). Mitochondrial bioenergetics were measured in neonatal (P14) and young adult mice. We found that hyperoxia exposure reduced mitochondrial ATP-linked oxygen consumption and increased state 4 respiration linked proton leak in both neonatal and young adult mice while complex I function was decreased at P14 but increased in young adult mice. Proteomic analysis revealed that hyperoxia exposure decreased complex I NDUFB8 and NDUFB11 and complex IV 7B subunits, but increased complex III subunit 9 in young adult mice. In conclusion, neonatal hyperoxia permanently impairs hippocampal mitochondrial function and alters complex I function. These hippocampal mitochondrial changes may account for cognitive deficits seen in children and adolescents born preterm and may potentially be a contributing mechanism in other oxidative stress associated brain disorders. Introduction Many extremely preterm infants often require prolonged periods of supraphysiological oxygen (hyperoxia) exposure for their survival. In addition, even preterm infants not receiving supplemental oxygen are exposed to a relatively hyperoxemic environment compared to the hypoxemic normal intrauterine environment (PO 2 25–35 mm Hg) during a critical developmental period for many organ systems. Preterm infants who require prolonged periods of oxygen supplementation are at higher risk of morbidities such as retinopathy of prematurity 1 , 2 and chronic lung disease such as bronchopulmonary dysplasia (BPD) 3 , 4 , probably as a consequence of chronic oxidative stress (OS). Children with BPD frequently exhibit deficits in executive function and cognition even in the absence of apparent brain injuries such as intraventricular hemorrhage and periventricular leukomalacia 5 , 6 , 7 , 8 . Although direct effects of OS and lung injury-induced systemic inflammation on the developing brain have been considered as possible etiologies, the exact mechanism(s) by which children with BPD develop cognitive dysfunction despite no apparent brain injury is not known. Although the long-term detrimental effects of early hyperoxia exposure on lung development and function have been studied in depth, little is known about the long-term effects of early hyperoxia exposure on brain development and function.
Previously, we have shown that in C57BL/6J mice, hyperoxia (85% oxygen [O 2 ]) exposure during the neonatal period (P2–14) (neonatal hyperoxia) leads to spatial memory and learning deficits, increased exploratory behavior, and shrinkage of area CA1 of the hippocampus when assessed at young adult age (14 weeks) 9 . Recently, our proteomic analysis of hippocampal homogenates from neonatal mice (P14) exposed to hyperoxia from P2–14 indicated impairments in hippocampal protein synthesis and translation and predicted mitochondrial dysfunction 10 . Hyperoxic exposure can cause cell death 11 and impair cell survival 12 in the developing brain. Chronic OS due to O 2 supplementation may negatively affect neuronal mitochondrial function and lead to neurodegenerative disorders 13 . Area CA1, a region of the hippocampus crucial for the acquisition of long-term memory 14 , 15 , 16 , is highly vulnerable to OS 17 . Mitochondria isolated from CA1 neurons have been shown to generate more reactive oxygen species (ROS) than any other regions of the hippocampus 18 . Adequate mitochondrial function is essential for mechanisms required for learning and memory 19 in the hippocampus. While mitochondrial dysfunction is associated with the pathogenesis of several neurodegenerative diseases in adults 20 , the impact of early-life mitochondrial dysfunction on long-term brain development and function is yet to be determined. In this study, we hypothesized that prolonged hyperoxia exposure during the critical developmental period would permanently alter hippocampal mitochondrial function. Our objective was to determine the long-term changes in hippocampal mitochondrial respiratory complex protein expression and bioenergetic function in neonatal mice (P14) and young adult mice (14 weeks) exposed to hyperoxia from P2–P14. Results Targeted and global proteomics Long-term effect of neonatal hyperoxia exposure on hippocampal mitochondrial complex I, II, and III protein expressions in young adult mice Young adult mice exposed to hyperoxia as neonates had reduced amounts of complex I NADH Dehydrogenase [Ubiquinone] 1 Beta Subcomplex 8 (NDUFB8), and complex I NADH Dehydrogenase [Ubiquinone] 1 Beta Subcomplex 11 (NDUFB11) subunit proteins (Table 1 ). The levels of other detected complex I subunits and of complex II subunits were comparable between the groups (Table 1 ). Complex III cytochrome b-c1 complex subunit 9 was increased in the hyperoxia-exposed group compared to air-exposed group (Table 1 ). Table 1 Long-term Effect of Neonatal Hyperoxia Exposure on Hippocampal Mitochondrial Complex I, II and III Proteins in Young Adult Mice (n = 5 in Air group, 6 in Hyperoxia group). Full size table Long-term effect of neonatal hyperoxia exposure on hippocampal mitochondrial complex IV and V protein expressions in young adult mice Young adult mice exposed to hyperoxia as neonates had less cytochrome C oxidase subunit 7B (COX7B) protein compared to air-exposed groups (Table 2 ). The amounts of other detected complex IV and V subunits were similar between the groups (Table 2 ). Table 2 Long-term Effect of Neonatal Hyperoxia Exposure on Hippocampal Mitochondrial Complex IV and V Proteins in Young Adult Mice (n = 5 in Air group, 6 in Hyperoxia group). 
Full size table Bioinformatic analysis of differentially expressed hippocampal proteins Differentially expressed hippocampal proteins in young adult mice following neonatal hyperoxia exposure With a cut-off of ±1.5 fold-change, a P-value < 0.05 (by analysis of variance), and a false discovery rate of 5%, we identified a total of 196 hippocampal proteins that were differentially expressed in neonatal hyperoxia-exposed young adult mice compared to air-exposed young adult mice. Of these 196 proteins, 48 proteins were increased, and 148 were decreased following neonatal hyperoxia exposure. The heat map of the differentially expressed hippocampal proteins is shown in Supplemental Fig. S1 . The full lists of upregulated and downregulated proteins (limited to fold-change 1.5) in young adult mice exposed to neonatal hyperoxia are given in Supplemental Tables S1 and S2 , respectively. The top 10 differentially expressed hippocampal proteins in the hyperoxia-exposed young adult mice are listed in Table 3 . The protein classes that are upregulated and downregulated by hyperoxia exposure are shown in Fig. 1A,B , respectively. Table 3 Top 10 Differentially Expressed Hippocampal Proteins in Neonatal Hyperoxia-Exposed Young Adult Mice (n = 5 in Air group, 6 in Hyperoxia group, P = Hyperoxia vs. Air group). Full size table Figure 1 Distribution of hippocampal proteins in young adult mice exposed to neonatal hyperoxia. ( A ) Graphical representation of the distribution of upregulated hippocampal proteins by class in the hyperoxia-exposed group. ( B ) Graphical representation of the distribution of downregulated hippocampal proteins by class in the hyperoxia-exposed group. ( C ) Graphical representation of the distribution of upregulated hippocampal proteins by biological processes in the hyperoxia-exposed group. ( D ) Graphical representation of the distribution of downregulated hippocampal proteins by biological processes in the hyperoxia-exposed group. Full size image Long-term effect of neonatal hyperoxia exposure on hippocampal biological processes in young adult mice Differentially expressed hippocampal proteins were predominantly involved in the biological processes of cellular process, metabolic process, biogenesis, and protein localization (Fig. 1C,D ). Among the upregulated hippocampal proteins, functions of 20 (29.9%) proteins were associated with the cellular process, 18 (26.9%) with the metabolic process, 9 (13.4%) with the biogenesis process, and 6 (9%) with the localization process (Fig. 1C ). Among the downregulated proteins, functions of 62 (27.8%) proteins were associated with the cellular process, 39 (18.1%) with the metabolic process, 26 (12.1%) with the biogenesis process, and 20 (9.6%) with the localization process (Fig. 1D ). Top canonical hippocampal pathways regulated by neonatal hyperoxia exposure in young adult mice The top canonical pathways that were most impacted by neonatal hyperoxia exposure are listed in Table 4 . Bioinformatic analysis of differentially expressed hippocampal proteins predicted that mitochondrial function (P = 2.03E-06), oxidative phosphorylation (P = 1.25E-05), GABA receptor signaling (P = 1.26E-04), amyotrophic lateral sclerosis signaling (P = 4.09E-04), and amyloid processing (P = 5.34E-04) were impacted in young adult mice that had neonatal hyperoxia exposure.
Of the proteins differentially expressed in the hyperoxia-exposed group, 15 were associated with mitochondrial function, 11 with oxidative phosphorylation, 9 with GABA receptor signaling, 9 with amyotrophic lateral sclerosis signaling, and 6 with amyloid processing. Table 4 Top Canonical Pathways Involved in Neonatal Hyperoxia-Exposed Young Adult Mice by Ingenuity Pathway Analysis (Using Proteomics Data, n = 5 in Air group, 6 in Hyperoxia group, P = Hyperoxia vs. Air group). Full size table Mitochondrial studies Effects of neonatal hyperoxia exposure on hippocampal mitochondrial bioenergetics in neonatal mice (P14) Neonatal hyperoxia exposure (from P2–P14) decreased both pyruvate/malate-driven mitochondrial ATP linked O 2 consumption (P = 0.01) and complex I enzyme activity (P = 0.01) at P14 (Fig. 2A,D , respectively). No differences were observed between hyperoxia-exposed and air-exposed controls in ATP linked O 2 consumption rates that utilized succinate (complex II substrate; P = 0.73) or complex IV activity (P = 0.99) (Fig. 2C,E , respectively), consistent with the hypothesis that the observed differences were related to a complex I defect. Examination of state 4 minus basal O 2 consumption rates also suggested increased oxygen consumption in the hyperoxia-exposed group (P = 0.03), which could be linked to increased proton leak and/or oxidant generation (Fig. 2B ). Hyperoxia-exposed neonatal mice also had reduced citrate synthase activity (P = 0.01) (Fig. 2F ) compared to air-exposed neonatal mice. Figure 2 Effects of neonatal hyperoxia exposure on hippocampal mitochondrial bioenergetics in neonatal mice. A = ATP linked oxygen consumption, B = State 4 respiration proton leak, C = Succinate induced oxygen consumption, D = Complex I activity measured by assay, E = Complex IV activity measured by assay, and F = Citrate synthase activity measured by assay. Air-exposed: cyan bars with horizontal stripes and hyperoxia-exposed: solid red bars; means ± SEM; n = 9 in air and 9 in hyperoxia. * p < 0.05 vs. air-exposed mice. Full size image Effects of neonatal hyperoxia exposure on hippocampal mitochondrial bioenergetics in young adult mice (14 weeks) Similar to the observations in neonates exposed to hyperoxia, adult mice (14 weeks old) that underwent neonatal hyperoxia exposure from P2–P14 had decreased mitochondrial ATP linked O 2 consumption in the presence of complex I substrates (pyruvate/malate) (P = 0.01) (Fig. 3A ), whereas in the presence of succinate (P = 0.74), no differences were observed relative to air-exposed controls (Fig. 3D ). Similarly, no differences were observed in complex IV activity (P = 0.45) between exposed and unexposed control groups (Fig. 3G ). Also, similar to the hyperoxia-exposed newborn mice, oligomycin induced state 4 O 2 consumption rates minus basal O 2 consumption rates were increased (P = 0.05) (Fig. 3C ), consistent with increased proton leak and/or oxidant generation. However, unlike in neonates, complex I activity (P = 0.002) was significantly increased in the 14-week-old mice exposed to hyperoxia as neonates (Fig. 3E ). Subgroup analysis by sex also showed that young adult male mice exposed to hyperoxia as neonates had decreased ATP linked O 2 consumption (One Way ANOVA, mean difference 36.7, P = 0.01) (Fig. 3B ) and increased complex I activity (One Way ANOVA, mean difference 13.66, P = 0.03) (Fig. 3F ) compared to air-exposed young adult male mice.
The difference in citrate synthase activity (P = 0.78) seen in neonatal mice exposed to hyperoxia was no longer observed when assessed as young adults (Fig. 3H ). Figure 3 Effects of neonatal hyperoxia exposure on hippocampal mitochondrial bioenergetics in young adult mice. A = ATP linked oxygen consumption, B = ATP linked oxygen consumption by sex, C = State 4 respiration proton leak, D = Succinate induced oxygen consumption, E = Complex I activity measured by assay, F = Complex I activity by Sex, G = Complex IV activity measured by assay, and H = Citrate synthase activity measured by assay. ( A,C,D,E,G and H ); air-exposed: cyan bars with horizontal stripes and hyperoxia-exposed: solid red bars; means ± SEM; n = 7–8 in air and 7–9 in hyperoxia. * p < 0.05 vs. air-exposed mice. ( B,F ); Air-exposed females: solid cyan bars, air-exposed males: cyan bars with horizontal stripes, hyperoxia-exposed females: solid red bars, and hyperoxia-exposed males, red bars with angled stripes; means ± SEM; n = 4/sex/group. * p < 0.05 = air-exposed mice vs. hyperoxia-exposed males by One Way ANOVA. Full size image Effects of neonatal hyperoxia exposure on hippocampal mitochondrial copy number in neonatal (P14) and young adult mice (14 weeks) No difference in the mitochondrial copy number was observed between the air- and hyperoxia-exposed neonatal mice (Fig. 4A ). Similarly, no difference in the mitochondrial copy number was observed between air- and hyperoxia-exposed young adult mice (Fig. 4B ). Figure 4 Effects of neonatal hyperoxia exposure on hippocampal mitochondrial DNA copy number in neonatal and young adult mice. A = Mitochondrial copy number relative to air exposed controls in neonatal (P14) mice and B = Mitochondrial copy number relative to air exposed controls in young adult (14 weeks) mice. Air-exposed: cyan bars with horizontal stripes and hyperoxia-exposed: solid red bars; means ± SEM; n = 6 in air and 6 in hyperoxia. * p < 0.05 vs. air-exposed mice. Full size image Discussion This is the first preclinical study to demonstrate the long-term adverse effect of early life hyperoxia on hippocampal mitochondrial function and mitochondrial respiratory chain protein expression. We discovered that hyperoxia exposure during a critical developmental period permanently impairs hippocampal mitochondrial function, alters the expression of specific respiratory chain subunits for complexes I and III, and impairs complex I activity in the hippocampus. As spatial memory deficits and other cognitive problems in the mouse model of bronchopulmonary dysplasia (BPD) correspond to the cognitive deficits seen in adolescents with BPD, these new observations suggest permanent hippocampal mitochondrial dysfunction induced by early life oxygen exposure as a contributor to the pathophysiology of BPD associated cognitive dysfunction. This study has several strengths. We have used unbiased proteomic analysis of whole hippocampal tissue using highly sensitive mass spectrometric methods. Rather than being limited to only mitochondrial proteins, our study also evaluated the long-term impact of early life oxygen exposure on all hippocampal proteins and used sophisticated bioinformatics analysis to define long-term changes in the hippocampal signaling pathways. This study also evaluated the mitochondrial bioenergetic changes induced by hyperoxia exposure both at the end of the critical developmental period (P14) and at young adult age (14 weeks, the age at which we observed cognitive dysfunction in our previous study).
In addition to the citrate synthase assay, a surrogate measure of mitochondrial content, we also measured mitochondrial DNA copy number as an alternative measure of mitochondrial content.

There are also a few limitations to this study. The proteomics and mitochondrial bioenergetic studies were performed on the whole hippocampus rather than on specific hippocampal subfields, which are known to play different roles in memory and learning. Furthermore, because proteomic and bioenergetic studies were done on whole hippocampus, it is not possible to determine whether these early life oxygen-induced changes in hippocampal proteins and mitochondrial function were predominantly derived from neurons, from glial cells, or from a combination of both. In addition, proteomic methods often require large sample sizes, as there is often considerable variation from one sample to another, and even large differences may not reach statistical significance. In this study, we focused on proteomics and mitochondrial bioenergetics only in hippocampal homogenates, and not in other regions of the brain such as the cerebellum, amygdala, corpus callosum, and white matter tracts, which might also have been impacted by hyperoxia exposure 21,22,23. Even though hippocampal complex I and IV activities were measured, complex III and complex V activities were not measured due to technical difficulties and the small size of the hippocampus. Though lung and brain development in newborn mouse pups corresponds to 24–28 weeks of gestation in human preterm infants, the highly efficient redox and gas exchange system of C57BL/6 mice 24 requires supraphysiological oxygen concentrations (85% O2) and a longer duration of exposure (P2–14) to induce human BPD-like lung pathology 9,25. Our model, while not an exact simulation of the human preterm infant in the neonatal intensive care unit, reproduces both the hippocampal shrinkage (structurally) and the associated memory deficits (functionally) 26 seen in adolescents and young adults with BPD.

The hippocampus, a region of the brain that plays a vital role in consolidating short-term memory into long-term memory 14,15,16, is highly vulnerable to oxidative stress 17. Oxygen exposure causes neuronal cell death in the developing brain 11,12, and prolonged oxidative stress impairs neuronal mitochondrial function 13. Neurons in the hippocampus are critically dependent on their mitochondrial function for the strengthening of synapses, a cellular response responsible for the formation and maintenance of long-term memory 27,28,29. In neurons, mitochondria generate about 90% of the ATP by oxidative phosphorylation, in which oxygen is the terminal electron acceptor of the mitochondrial electron transport chain (ETC). The ETC transfers electrons from high-energy metabolites through a series of electron acceptors (carriers) to drive the generation of ATP from ADP 30. The redox state of the respiratory chain is governed by the trans-membrane proton gradient and the membrane potential 31. The redox energy used for ATP generation also leads to the production of ROS 32. Excessive ROS production following hyperoxia exposure can potentially overwhelm antioxidant defense mechanisms and lead to mitochondrial damage 33,34 and cellular death 35. Our mitochondrial functional assessments show that early life hyperoxia exposure reduces ATP-linked oxygen consumption in the hippocampus not only in the neonatal period (P14) but also in the young adult (14 weeks).
In addition, our study shows a persistent increase in the rate of oxygen consumption at state 4 respiration (a surrogate measure of proton leak) in young adults exposed to hyperoxia as neonates, suggesting uncoupling between substrate oxidation and ATP synthesis. Alterations in mitochondrial coupling can alter ROS production 36,37,38 and ATP synthesis 39. Though the amount of ATP produced by the hippocampal tissue was not measured in this study, the decrease in ATP-linked oxygen consumption and the increase in state 4 proton leak at both P14 and 14 weeks suggest that early life oxygen exposure permanently impairs the efficiency with which mitochondria generate ATP. The neonatal hyperoxia-induced hippocampal mitochondrial dysfunction measured through bioenergetic studies in young adult mice is consistent with the mitochondrial dysfunction predicted through proteomic analysis.

Complex I (NADH:ubiquinone oxidoreductase), the first and largest enzyme of the ETC, has been consistently shown to be vulnerable to oxidative stress-mediated dysfunction 40. It is also thought to be the main site of ROS production 41,42, and its impairment leads to an increase in ROS production 43. The decreased complex I activity seen in hyperoxia-exposed neonatal mice suggests that oxygen exposure either directly or indirectly impairs complex I function. At 14 weeks, the targeted hippocampal proteomic analysis identified decreases in the complex I subunits NDUFB8 and NDUFB11 in neonatal hyperoxia-exposed mice; these inner membrane subunits are located in the membrane arm of complex I along with the proton-pumping subunits. While neither of these subunits is thought to be directly involved in catalysis, decreased levels of NDUFB8 have been associated with AD in a rodent model 44. However, mitochondrial bioenergetic studies indicated an increase in complex I activity at 14 weeks. The persistently decreased ATP-linked O2 consumption and increased state 4 proton leak at 14 weeks, despite the increase in complex I activity in the young adult mice exposed to hyperoxia, suggest persistent mitochondrial dysfunction that is inadequately compensated by the later increase in complex I activity following the hyperoxia-induced decrease in the newborn. Neonatal hyperoxia did not affect complex IV activity at either P14 or 14 weeks, suggesting that the complex IV enzyme either is not as vulnerable to oxidative stress as complex I or is well adapted to oxidative stress-induced injury. Though changes in hyperoxia-induced complex III and V activity are possible, the comparable succinate-induced oxygen consumption between hyperoxia- and air-exposed neonatal and young adult mice indicates that the dysfunction in oxygen consumption noted with hyperoxia exposure is driven mainly by alterations in complex I function.

Mitochondrial metabolism and the signaling pathways that regulate cell death are sexually dimorphic 45. Compared to the female, the male hippocampus has a lower level of endogenous antioxidant defense systems 46 and produces more ROS 47. We determined that hyperoxia-exposed young males had reduced ATP-linked O2 consumption and increased complex I activity compared to hyperoxia-exposed young females. These observations are clinically important because prematurity-associated adverse neurodevelopmental outcomes 48 and neurodevelopmental disorders (e.g., autism) 49 preferentially affect males. Citrate synthase, a surrogate marker for mitochondrial volume 50, was reduced by hyperoxia in neonates (P14) and normalized in young adults (14 weeks).
However, when we independently verified the citrate synthase (mitochondrial content) results against mitochondrial copy number measured by qPCR, we did not observe any differences in mitochondrial copy number between the groups in either neonatal or young adult mice. This suggests that even though early life hyperoxia exposure permanently impairs hippocampal mitochondrial function, it may not have a significant short-term or long-term impact on hippocampal mitochondrial biogenesis. In addition, the hyperoxia-induced changes in the expression of Ras-related protein Rab-8A (involved in vesicular trafficking and neurotransmitter release) and Teneurin-1 (increases hippocampal dendritic arborization and spine density), and their roles in hyperoxia-induced cognitive dysfunction, need further investigation. Also, the reduction in hippocampal glucose-6-phosphate 1-dehydrogenase X (G6PD1) level in hyperoxia-exposed young adult mice suggests that early life hyperoxia exposure permanently impairs the cytosolic pentose phosphate pathway, a process that is critical for NADPH production. Since an adequate NADPH level is essential for cellular oxidative stress regulation 51, it is possible that impaired hippocampal NADPH production in hyperoxia-exposed young adult mice could also have impaired their ability to defend against oxidative stress even under normal ambient air conditions. Our data also indicate that aberrant GABAergic signaling 52 and amyloid processing are associated with cognitive deficits, and these pathways have been linked to neurodegenerative conditions 53. Additional studies are needed to evaluate the contribution of these canonical pathways to impaired memory and hippocampal dysfunction induced by oxidative stress and to define how they interact with mitochondrial dysfunction. Since mitochondrial biogenesis was not impacted by early hyperoxia, we speculate that early oxidative stress altered complex I protein structure, leading to an increase in mitochondrial ROS production, which in turn may contribute to oxidative damage to mitochondrial DNA, altered mitophagy, and altered mitochondrial structure, producing long-term changes in complex I function and in overall mitochondrial function (Supplemental Fig. S2). It is also possible that the neonatal hyperoxia-induced phenotype originated not only from the initial insult (direct oxidative stress) to complex I and other ETC enzymes but also from changes in proteins such as mitochondrial UQCR9, MRPL11, RAB8A, G6PD, and Teneurin-1 that are poorly compensated at a later time point.

Conclusion

This study demonstrated that supraphysiological oxygen exposure during a critical period in neonatal development has a permanent negative impact on hippocampal mitochondria. The pathophysiology of neonatal hyperoxia-induced permanent mitochondrial dysfunction is complex. Future studies designed to quantitate mitochondrial DNA damage, ATP, and ROS levels are needed to determine the mechanisms by which early hippocampal complex I dysfunction induces permanent complex I dysfunction and the development of spatial memory deficits.

Materials and Methods

All protocols were approved by the UAB Institutional Animal Care and Use Committee (IACUC) and were consistent with the PHS Policy on Humane Care and Use of Laboratory Animals (Office of Laboratory Animal Welfare, Aug 2002) and the Guide for the Care and Use of Laboratory Animals (National Research Council, National Academy Press, 1996).

Animal model

C57BL/6J dams and their pups of both sexes were exposed to either normobaric hyperoxia (85% O2, N = 6) or normobaric 21% O2 ambient air (air, N = 6) from the second postnatal day (P2) until postnatal day 14 (P14), returned to room air, and maintained on a standard rodent diet and light/dark cycling in microisolator cages until 14 weeks of age (Fig. 5A) 25. An additional set of mice were exposed to either 85% O2 (hyperoxia, N = 6) or 21% O2 (air, N = 6) and sacrificed at P14 (Fig. 5A).

Figure 5 Schematics of the animal model and mitochondrial respiratory protocol. (A) Schematic of hyperoxia exposure from P2–14 and the experimental studies done at P14 and 14 weeks. (B) Schematic of the mitochondrial respiratory protocol used in high-resolution respirometry, with sequentially added substrates and the calculations used to assess mitochondrial bioenergetic function in whole hippocampal tissue. ST2PM = state 2 respiration with pyruvate and malate, ST3PM = state 3 respiration with pyruvate and malate, ADP = adenosine diphosphate, ST3PMS = state 3 respiration with pyruvate, malate, and succinate, ST4 = state 4 respiration following oligomycin, ATP Linked O2 = ATP-linked O2 consumption, ST4 Proton Leak = state 4 respiration proton leak, ROX = residual O2 consumption, and Baseline = baseline O2 consumption.

At 14 weeks, hippocampal proteins were analyzed by unbiased proteomic profiling using mass spectrometry. Initially, targeted protein analysis was performed for the hippocampal mitochondrial respiratory complex proteins. Subsequently, bioinformatics analysis was conducted on all other hippocampal proteins differentially expressed between the hyperoxia- and air-exposed groups. At P14 and 14 weeks of age, hippocampal tissues were analyzed for mitochondrial bioenergetic function; complex I, complex IV, and citrate synthase activities; and mitochondrial copy number.

Mass spectrometry

At 14 weeks, following cervical dislocation, the whole brain was harvested from the mice, and the hippocampi were removed in a sterile manner 10. Tissue was then homogenized using a Qiagen TissueLyser (Qiagen, MD, USA) in T-PER + Halt protease inhibitors + PMSF solution, and protein assays were performed using the BCA protein assay kit (Thermo Fisher Scientific, MA, USA) 54. Mass spectrometric analysis of hippocampal proteins was performed as previously described 10.

Proteomics data assessment

Differentially expressed proteins (fold change ± 1.5 and p < 0.05) were identified using t-tests and further analyzed. As previously done 10, functional analysis was performed using PANTHER (Protein ANalysis THrough Evolutionary Relationships) 55 and Ingenuity Pathway Analysis (QIAGEN Inc., MD, USA). Heat maps were generated using the pheatmap package ver. 1.0.7 in R.
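A minimal R sketch of this differential-expression filter and heat map step (the input object and its column names are hypothetical; the thresholds are those stated above):

```r
# Hypothetical input: 'prot' is a data frame of log2 protein intensities with
# columns A1..A5 (air, n = 5) and H1..H6 (hyperoxia, n = 6), one row per protein.
air <- paste0("A", 1:5)
hyp <- paste0("H", 1:6)

# Per-protein Welch t-test and fold change (hyperoxia vs. air).
pvals <- apply(prot, 1, function(x) t.test(x[hyp], x[air])$p.value)
fc    <- 2^(rowMeans(prot[, hyp]) - rowMeans(prot[, air]))  # intensities are log2

# Differentially expressed: fold change beyond +/- 1.5 and p < 0.05.
de <- prot[(fc >= 1.5 | fc <= 1 / 1.5) & pvals < 0.05, ]

# Row-scaled heat map of the differentially expressed proteins.
library(pheatmap)
pheatmap(as.matrix(de), scale = "row")
```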
Mitochondrial bioenergetic studies (high-resolution respirometry)

The whole (right) hippocampus was harvested and placed in ice-cold artificial cerebrospinal fluid that contained glucose, BSA, EGTA, and pyruvate, and in mitochondrial respiration buffer, as previously described 56. Briefly, hippocampal tissue was permeabilized with saponin (5 mg/mL, 30 minutes), and high-resolution respirometry was performed using a two-channel respirometer (Oroboros Oxygraph-2k with DatLab software; Oroboros, Innsbruck, Austria). Reactions were conducted at 37 °C in a 2 mL chamber containing air-saturated mitochondrial respiration buffer (MiR03) under continuous stirring. As illustrated in Fig. 5B, O2 consumption rates were measured in the presence of substrates (5 mM malate, 15 mM pyruvate, 2.5 mM ADP, 10 mM succinate) and inhibitors (0.5 μM oligomycin, 5 μM antimycin A) to assess state 2 (substrate alone), state 3 (substrate + ADP), and oligomycin-induced state 4 respiration rates. Non-mitochondrial oxygen consumption was determined in the presence of antimycin A. The ATP-linked O2 consumption rate was calculated as state 3 (substrates + ADP) minus state 4 (oligomycin). The non-ATP-linked O2 consumption rate was calculated as state 4 (oligomycin) minus non-mitochondrial oxygen consumption (antimycin A). Potential differences in O2 consumption rates based upon substrate utilization at complex I or II were assessed using pyruvate/malate or succinate, respectively, in the presence of ADP.
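These derived rates are simple differences of measured O2 fluxes; as a minimal R sketch (the flux values are hypothetical, for illustration only):

```r
# Hypothetical steady-state O2 fluxes (pmol O2 / s / mg tissue) from one run.
st3_pm <- 45  # state 3: pyruvate/malate + ADP (complex I-driven)
st3_s  <- 60  # state 3 after succinate addition (complex I + II)
st4    <- 18  # state 4 after oligomycin
rox    <- 5   # residual (non-mitochondrial) O2 consumption after antimycin A

atp_linked     <- st3_pm - st4  # ATP-linked O2 consumption
non_atp_linked <- st4 - rox     # state 4 "proton leak" respiration
```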
Complex I, IV, and citrate synthase activity assays

Complex I, complex IV, and citrate synthase activities were measured from hippocampal (left) homogenates as previously described 57,58,59; complex I activities were measured from freshly extracted tissues.

Mitochondrial DNA copy number (qPCR)

Mitochondrial DNA copy number was determined by qPCR as previously described 60. Briefly, DNA was extracted from hippocampal homogenates from neonatal (P14) and young adult (14 weeks) mice exposed to room air or hyperoxia from P2–14 using a Qiagen DNA Mini Kit (Qiagen). DNA was quantified via fluorescence using the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen). For quantitative PCR (qPCR), 15 ng of DNA from each sample was used. PCR products underwent electrophoresis for 2 hours at 90 volts on 10% polyacrylamide gels. Gels were stained with ethidium bromide for 45 minutes and imaged on an Amersham Imager 600 (GE Healthcare). All samples were loaded in duplicate, and mitochondrial DNA copy number was quantified by measuring band intensities for each age group relative to age-matched room air-exposed controls using ImageQuant (GE Healthcare).

Statistical analysis

Results are expressed as means ± SE. Multiple comparisons testing (Student-Newman-Keuls) was performed if statistical significance (p < 0.05) was noted by ANOVA.
| Findings recently published in Nature Scientific Reports by the University of Alabama at Birmingham's Manimaran Ramani, M.D., indicate that in a rodent model prolonged oxygen exposure during the critical developmental period permanently impairs long-term hippocampal mitochondrial function. Children and adolescents born preterm who require prolonged oxygen therapy in the neonatal intensive care unit often develop cognitive deficits, attention deficit disorder and autism spectrum disorder. The exact mechanism by which children and adolescents born preterm develop cognitive and behavioral disabilities is not known. Previously, Ramani, an associate professor in the Division of Neonatology, and his team showed that young adult mice exposed to oxygen as newborns develop memory deficits and hyperactivity, findings similar to those of adolescents born preterm. Although mitochondrial dysfunction and oxidative stress are associated with the pathogenesis of several neurodegenerative disorders such as Parkinson disease, the impact that early oxidative stress and mitochondrial dysfunction have on neurodevelopment is yet to be determined. The hippocampus is the region of the brain that plays a key role in the formation and maintenance of long-term memory and is highly vulnerable to oxidative stress.
Neurons in the hippocampus are dependent on their mitochondrial function for the strengthening of synapses, a cellular response responsible for long-term memory. This is the first preclinical study to show that early life oxygen exposure has a permanent negative impact on hippocampal mitochondria. In their mouse model, in which mouse pups are exposed to oxygen, Ramani's team expected a recovery in hippocampal mitochondrial function when assessed at young adult age. On the contrary, the team was surprised to find that hippocampal mitochondrial dysfunction persists even after the initial oxidative stress is long gone. "Premature infants require oxygen supplementation for their survival, and maintaining lower oxygen saturation is known to increase mortality," Ramani said. "Hence, research going forward should focus on determining the role of other modalities of therapy, such as antioxidants, to counteract the toxic effects of oxygen." | 10.1038/s41598-019-49532-z |
Biology | Corals once thought to be a single species are really two | Jessie A. Pelosi et al, Fine-scale morphological, genomic, reproductive, and symbiont differences delimit the Caribbean octocorals Plexaura homomalla and P. kükenthali, Coral Reefs (2021). DOI: 10.1007/s00338-021-02175-x Journal information: Coral Reefs | http://dx.doi.org/10.1007/s00338-021-02175-x | https://phys.org/news/2021-10-corals-thought-species.html |

Abstract

Octocorals are conspicuous members of coral reefs and deep-sea ecosystems. Yet, species boundaries and taxonomic relationships within this group remain poorly understood, hindering our understanding of this essential component of the marine fauna. We used a multifaceted approach to revisit the systematics of the Caribbean octocorals Plexaura homomalla and Plexaura kükenthali, two taxa that have a long history of taxonomic revisions. We integrated morphological and reproductive analyses with high-throughput sequencing technology to clarify the relationship between these common gorgonians. Although size and shape of the sclerites are significantly different, there is overlap in the distributions, making identification based on sclerites alone difficult. Differences in reproductive timing and mode of larval development were detected, suggesting possible mechanisms of pre-zygotic isolation. Furthermore, there are substantial genetic differences and clear separation of the two species in nuclear introns and single-nucleotide polymorphisms obtained from de novo assembled transcriptomes. Despite these differences, analyses with SNPs suggest that hybridization is still possible between the two groups. The two nascent species also differed in their symbiont communities (genus Breviolum) across multiple sampling sites in the Caribbean. Despite a complicated history of taxonomic revisions, our results support the differentiation of P. homomalla and P. kükenthali, emphasizing that integrative approaches are essential for Anthozoan systematics.

Introduction

Sessile anthozoans form the basis of some of the most biodiverse and economically important ecosystems on Earth (Moberg and Folke 1999; Barbier et al. 2011). These communities, however, are increasingly threatened by anthropogenic disturbances (e.g., Hoegh-Guldberg et al. 2017). Of particular interest are Caribbean octocorals, commonly referred to as gorgonians, as their abundance has remained relatively stable or has increased in recent decades (Ruzicka et al. 2013; Lenz et al. 2015; Tsounis and Edmunds 2017; Lasker et al. 2020), while their scleractinian counterparts (i.e., stony corals) have suffered declines (e.g., Gardner et al. 2003; Côté et al. 2005; Roff and Mumby 2012). Our ability to conserve these ecosystems relies on assessing traits such as reproductive potential, dispersal, population connectivity, and abundance (Mumby and Steneck 2008), tasks that require an understanding of species boundaries and relationships among groups (Knowlton and Jackson 1994). Despite their importance, there are clear gaps in our understanding of Caribbean octocorals (Kupfner Johnson and Hallock 2020; Lasker et al. 2020). The most comprehensive taxonomic treatment of the group was written six decades ago and relies on a mix of qualitative and quantitative distinctions in colony form and the morphology of sclerites (i.e., calcium carbonate spicules; Bayer 1961).
While these characters have been useful in distinguishing some octocoral species (e.g., Aharonovich and Benayahu 2012), subtle differences in morphological traits often limit their utility for species delimitation. Phenotypic plasticity can also influence sclerite morphology across different environments, particularly size and shape (e.g., West et al. 1993; West 1997; Prada et al. 2008; Joseph et al. 2015). The interplay between genetic and environmental variation in these characters makes them difficult to use reliably in systematic studies.

Molecular approaches to classification have resulted in significant improvements in octocoral systematics (e.g., Sánchez et al. 2003), and much of this work has relied on mitochondrial genes. These markers have been useful in determining relationships between genera, but the slow rate of molecular evolution in Anthozoan mitochondrial DNA has limited their utility in distinguishing species (Shearer et al. 2002; McFadden et al. 2011). There are few single-copy nuclear loci that reliably amplify across octocorals (McFadden et al. 2010), and those that are available may be low-resolution (e.g., Wirshing and Baker 2015), while other markers are multi-copy and show high intra-individual variation (e.g., Vollmer and Palumbi 2004). These obstacles have hindered researchers' ability to draw boundaries between closely related taxa, but recent work indicates that high-throughput sequencing approaches can enhance our understanding of Anthozoan evolution and systematics, especially in recent radiations with low levels of genetic divergence among species (e.g., Pante et al. 2015; Combosch and Vollmer 2015; Quattrini et al. 2019; Erickson et al. 2021).

More recently, integrated approaches have proven fruitful in clarifying taxonomic and evolutionary relationships of octocorals using combinations of morphological, ecological, and molecular traits (e.g., McFadden et al. 2014; Wirshing and Baker 2015; Calixto-Botía and Sánchez 2017; Prada and Hellberg 2021). For example, McFadden et al. (2001) used molecular, morphological, and reproductive traits to characterize reproductive trait evolution in Alcyonium, and Prada et al. (2014) found host-symbiont specificity between symbiont phylotypes and distinct host lineages of Eunicea flexuosa. Aratake et al. (2012) similarly combined analyses of morphology, host genetics, symbiont genetics, and chemotypes to clarify relationships in the genus Sarcophyton, and Arrigoni et al. (2020) developed a robust taxonomic revision of the scleractinian genus Leptastrea using genomic (host and symbiont), morphometric, and distributional patterns. Studies such as these have greatly increased systematic resolution compared to analyses based on single lines of evidence, and they provide new perspectives on Anthozoan evolution.

An example of the challenges associated with octocoral species delimitation occurs with Plexaura homomalla (Esper 1792), one of the most common and readily recognized Caribbean octocorals. Plexaura homomalla forms colonies with dark brown branches and white polyps that make the species particularly distinctive (Bayer 1961, Fig. 1a). The species is widely distributed in forereef to backreef environments of the Caribbean and western Atlantic (Bayer 1961).
The congener P. kükenthali Moser 1921 (the International Code of Zoological Nomenclature no longer recognizes diacritic marks in species names, and the accepted species name is therefore P. kuekenthali, although we use the original spelling from Moser's species description) was originally described as a separate species on the basis of its bushier colonies, thinner branches (up to 2.5 cm vs. 4.5 cm diameter for P. homomalla), and light brown to tan coloration (Fig. 1b). Bayer designated the previously described P. kükenthali as a junior synonym of P. homomalla, recognizing the two groups as distinct morphotypes of P. homomalla due to the similarities in sclerite morphology (Bayer 1961, Fig. 1c,d). Plexaura kükenthali is distributed throughout the Caribbean and the Florida Keys (Bayer 1961) but has not been reported from Bermuda. The two species have been recorded in sympatry, for example, in Belize (Lasker and Coffroth 1983), Colombia (Velásquez and Sánchez 2015), Cuba (Espinosa et al. 2010), and Jamaica (Kinzie 1973). García-Parrado and Alcolado (1998) resurrected P. kükenthali, based primarily on the differences in branch thickness and color in conjunction with differences in the depth distribution of the two forms on Cuban reefs. Despite this revision, many researchers have continued to follow Bayer's classification scheme, possibly because the publication associated with the taxonomic revision is not widely available (i.e., absent from international repositories). However, the reclassification is recognized in the World Register of Marine Species (Cordeiro et al. 2020).

Fig. 1 Colonies of Plexaura homomalla from the Florida Keys (a) and P. kükenthali from St. John (b). Sclerites (spindles and clubs) isolated from P. homomalla (c) and P. kükenthali (d).

In this study, we coupled morphological and reproductive data with molecular phylogenetics, genomic approaches, and symbiont diversity to revisit the status of P. homomalla and P. kükenthali. Using these species, we tested whether a multifaceted approach could clarify a taxonomic relationship where data from any single source are ambiguous. In particular, we addressed: (1) differences in sclerite morphology between the two closely related groups, (2) the reproductive status and timing of the two species using field observations and dissection of collected colony samples, (3) genetic differences across a wide array of molecular markers (i.e., mitochondrial, nuclear, and transcriptomic), and (4) differences in the Symbiodiniaceae symbionts hosted by the two groups. This study highlights the importance of an integrative approach to octocoral biology, including the application of genomic resources, to understand the relationships between closely related species of ecologically relevant cnidarians.

Materials and methods

Sampling

Samples of P. homomalla and P. kükenthali were collected between 2013 and 2019 from three locations at different depths: St. John, US Virgin Islands; the Florida Keys; and Puerto Rico (Table 1, Appendix 1 in Supplementary Information). Samples were identified based on branch thickness and color following Bayer (1961) and Sánchez and Wirshing (2005). Although P. kükenthali is reported to occur in the Florida Keys (Bayer 1961), we did not find colonies after searching numerous patch reefs in the Middle Keys; therefore, only P. homomalla is represented in the Florida Keys collections. A total of 44 P. homomalla and 42 P. kükenthali colonies were sampled for genetic and morphological analysis. During 2018 and 2019, small branch samples were collected from 10 to 15 colonies of each species on seven occasions from St. John to assess reproductive status.
We did not note pronounced differences among colonies collected at the depths in this study; the importance of depth varies between octocoral lineages (see Calixto-Botía and Sánchez 2017 and Prada and Hellberg 2021 for examples in Antillogorgia and Eunicea, respectively).

Table 1 Listing of sample collection sites, number of colonies sampled, depth, and sampling year(s) for colonies used in this study.

Sclerite analysis

Sclerites from each colony were isolated by digesting 2–4 mm branch fragments, taken several centimeters below the branch tips, with commercial laundry bleach, rinsing the sclerites with water followed by ethanol, and then mounting the sclerites onto glass slides using Permount. Plexaura homomalla and P. kükenthali, like all Plexaura spp., contain a variety of sclerite types. In discussing the two species, Bayer noted differences in the size of the clubs, and Lasker et al. (1996), in their description of the congener P. kuna, reported small differences in the spindle lengths and widths of the two forms of P. homomalla. Thus, our analysis focused on clubs and spindles (Fig. 1c,d).

Ten randomly selected spindles and clubs from each of 29 P. homomalla (ten from the Florida Keys, 16 from St. John, three from Puerto Rico) and 27 P. kükenthali (21 from St. John, six from Puerto Rico) colonies were measured for tip-to-tip length and maximum width (excluding tubercles) using a Wild compound microscope at 200× magnification with an eyepiece micrometer or an Olympus BX51 compound microscope and Infinity camera at 200× magnification. Comparisons of length, width, and length/width (L/W) ratio were made with generalized linear models (SPSS ver. 26), using the natural logs of lengths and widths and the square root of L/W as the response variables. Colony averages of lengths and widths for both sclerite types (i.e., four predictor values per colony) were used in a stepwise discriminant analysis with cross-validation (SPSS ver. 26) to determine whether those traits alone could differentiate the species.

Length and width provide a simple characterization of the complex morphology of sclerites, whereas elliptical Fourier analyses provide a more detailed description of shape, which also enables quantitative comparisons of form within and between colonies and species (Carlo et al. 2011; Joseph et al. 2015). Sclerites were chosen by scanning the slides and selecting sclerites that were clearly clubs or spindles and had minimal contact with other sclerites. Multi-focus composite images were created using photographs taken at ten different foci for each sclerite. A 50 μm scale was transposed onto each composite image, and the images were processed as in Carlo et al. (2011). Elliptical Fourier descriptors (EFDs) for clubs and spindles were independently determined using SHAPE ver. 1.3 (Iwata and Ukai 2002). EFDs were developed using 20 harmonics for 65 spindles from 18 P. homomalla colonies (eight from St. John, eight from the Florida Keys, and two from Puerto Rico) and for 46 spindles from 14 P. kükenthali colonies (all from St. John). EFDs were generated as above for 40 clubs from 17 P. homomalla colonies (eight from St. John, nine from the Florida Keys) and 20 clubs from ten P. kükenthali colonies (all from St. John). We generated the EFDs both with and without normalizing by the first harmonic. The dimensions of the EFDs were then summarized in a principal components analysis (PCA). The two species were compared with analysis of variance of the principal component (PC) scores, followed by discriminant analyses (SPSS ver. 26) based on both the PCs and the EFDs, to determine whether elliptical Fourier analysis could differentiate the species. Discriminant analyses were repeated using EFDs that were averaged for each colony.
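For readers working in R rather than SHAPE/SPSS, a minimal sketch of an equivalent outline-to-classification workflow (the Momocs package for EFDs and MASS for discriminant analysis; the input outline list and factor are hypothetical, and this is an illustration of the approach, not the authors' pipeline):

```r
library(Momocs)  # elliptical Fourier analysis of closed outlines
library(MASS)    # linear discriminant analysis

# 'outlines' is a hypothetical list of xy-coordinate matrices, one per sclerite,
# and 'species' is a factor giving the source species of each outline.
scl <- Out(outlines, fac = data.frame(species = species))

# EFDs with 20 harmonics, normalized to the first harmonic as in SHAPE.
ef <- efourier(scl, nb.h = 20, norm = TRUE)

# Summarize the EFD coefficients with PCA, then classify species from PC
# scores using leave-one-out cross-validated LDA.
pc  <- PCA(ef)
fit <- lda(pc$x[, 1:5], grouping = species, CV = TRUE)
mean(fit$class == species)  # proportion of sclerites correctly identified
```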
Reproductive analysis

Studies of the two species (Goldberg and Hamilton 1974; Behety-Gonzalez and Guardiola 1979; Fitzsimmons-Sosa et al. 2004) suggest peak reproduction from June through August. In the summers of 2018 and 2019, samples of P. homomalla and P. kükenthali branches were randomly collected from colonies that were at least 30 cm in height at 7–10 m depth in South Haulover Bay, St. John, US Virgin Islands. Collections were made at intervals of two to three weeks. A single 5 cm branch fragment was collected from 15 colonies of each species (with the exception of June 20, 2018, when only 11 P. kükenthali were sampled). Samples were stored in 70% ethanol. Reproductive status was assessed from dissections of 15 non-adjacent polyps from each sample. Polyps were randomly selected for dissection starting 1 cm below the tip, to avoid rapidly growing areas that are not reproductive (Gutiérrez-Rodríguez and Lasker 2004). Each polyp was carefully dissected, the number of eggs or spermaries counted, and their diameters measured with an eyepiece micrometer using a Wild stereomicroscope at 25× magnification. Gonad diameters were used to estimate volume, assuming gonads to be perfect spheres. As it is difficult to differentiate small, developing spermaries and eggs, we determined the sex of a colony only if its gonads were > 200 µm in diameter. Reproductive status is reported as the average volume of the gonads per polyp in individual colonies, as well as the number of mature eggs in each sample. Histology of a subset of samples was conducted to assess reproductive status in greater detail and to verify the sex of samples that had been identified in dissections.

In an attempt to observe spawning, night dives (19:00–21:00) on a fringing reef in Lameshur Bay at 4.5–9 m depth were conducted three to five days after the full moon in July 2016 and July 2017. Colonies of both species were located and mapped prior to the night dives, and the colonies were then repeatedly surveyed during each dive. Plexaura kükenthali released planulae in 2016, and planulae were collected from the water column immediately above the colony using 50 mL syringes. In July 2019, branches from 12 P. homomalla colonies that had visible gonads were collected at South Haulover Bay, transported to, and maintained in flowing seawater at the Virgin Islands Environmental Research Station (VIERS). Colonies of P. kükenthali at the South Haulover site did not have visible gonads at that time and were not collected other than for gonad analysis. In June 2021, branches from 7 P. kükenthali colonies that had gonads were collected and maintained in flowing seawater at VIERS.

DNA extraction and PCR

For single-locus analyses, DNA was extracted from 0.25 cm branch sections of P. homomalla and P. kükenthali using the 2X CTAB method (Coffroth et al. 1992) or the Qiagen DNeasy Blood and Tissue Kit following the manufacturer's protocol (Qiagen, Venlo, Netherlands). Extracted DNA was diluted to five to ten ng/μL and kept at − 20 °C until amplified via PCR. The mitochondrial genes MutS and ND2 and the nuclear introns EF1a, CT2, and I3P were amplified using PCR following Sánchez et al. (2003) and Prada and Hellberg (2013), respectively.
Amplicons were sequenced by TACGEN (Richmond, California, USA) via Sanger sequencing. Each locus was aligned using the MAFFT ver. 7.833 plugin (Katoh and Standley 2013) in Geneious Prime 2020.1.2. We constructed phylogenetic trees for each gene using IQ-TREE2 ver. 2.1.2 with 1000 ultrafast bootstraps (Nguyen et al. 2015; Kalyaanamoorthy et al. 2017; Hoang et al. 2018) and MrBayes ver. 3.2.6 (Ronquist et al. 2012). Phylogenies were built from a concatenation of all five genes for samples with at least four of the five regions sequenced.

Transcriptome isolation and sequencing

Samples used in the transcriptome analysis were preserved in at least a 1:4 volume of RNAlater (Thermo Fisher Scientific, Waltham, MA) at 6 °C for twelve hours; the RNAlater was then replaced, and samples were stored at − 80 °C. mRNA was extracted from ~ 25 mg of tissue from each colony using the NEB mRNA magnetic isolation kit (New England BioLabs, Ipswich, MA) following the manufacturer's protocol. Library preparation was done with the NEBNext RNA Library Preparation Kit, following the manufacturer's recommendations, performed at half volume. Libraries were examined with an Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA) and quantified with a fluorescent plate reader. To avoid lane effects, all samples were barcoded with unique identifiers, pooled, and sequenced across six lanes of an Illumina HiSeq 2000 (100 bp paired-end) at the Vincent J. Coates Genomics Sequencing Laboratory (GSL), University of California, Berkeley.

Transcriptome assembly and SNP analysis

We retained sequences with a quality score of Q30 or higher, and the first six bases were trimmed to remove adaptors using Trimmomatic ver. 0.33 (Bolger et al. 2014). Sequences were examined for quality and successful adaptor removal using FastQC ver. 0.10.1. Prior to assembly, bacterial and algal contamination was removed with DeconSeq-graph ver. 0.1 (Schmieder and Edwards 2011), using BWA ver. 0.5.9-r16 (Li and Durbin 2009). NCBI's bacterial rRNA submissions, the Breviolum minutum genome (Shoguchi et al. 2013), and Symbiodiniaceae transcriptomes (Bayer et al. 2012) were used as the contaminant database. Clean reads from all collected individuals were used to assemble the transcriptome for each species using Trinity ver. 2.1.1 (Grabherr et al. 2011) with the normalization and Trimmomatic flags. Both de novo transcriptomes were annotated in a three-step approach using blastx with e-value < 1e-5 and > 100 bp sequence overlap. First, we annotated transcripts against two cnidarians (Nematostella vectensis, Thelohanellus kitauei) available on Ensembl (release 47). Transcripts without hits were then annotated against the complete Ensembl Metazoa database and the cnidarians in UniProtKB (Swiss-Prot and TrEMBL; The UniProt Consortium 2019). We identified the corresponding gene ontology (GO) terms for each annotated transcript from the blastx results. We assessed the completeness of the assembled transcriptomes using BUSCO ver. 3 with the eukaryote and metazoan odb9 databases (Simão et al. 2015). From these results, we chose P. homomalla as our mapping reference, given that it had the higher BUSCO completeness.

To call single-nucleotide polymorphisms (SNPs), reads from all samples were mapped to the P. homomalla transcriptome using Bowtie2 ver. 2.3.5.1 (Langmead and Salzberg 2012) with default parameters for single-end reads. The resulting files were sorted and indexed using SAMtools (Li et al. 2009), and SNPs were identified using the mpileup and call commands in BCFtools. SNPs were filtered using vcfutils.pl in SAMtools to exclude variants with < 30X coverage. VCFtools was used to remove sites with missing data and sites within 5,000 bp of each other to avoid biases due to linkage (Danecek et al. 2011). The final SNP dataset was analyzed using the R packages Adegenet ver. 2.1.2 (Jombart 2008) and Hierfstat ver. 0.4-22 (Goudet 2005) in R ver. 3.6.1 (R Core Team 2019).
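As a sketch of how a filtered SNP table of this kind can be brought into that R toolchain (vcfR is used here for import, which the Methods do not name; the file name and population labels are hypothetical):

```r
library(vcfR)       # VCF import (assumed here; not named in the Methods)
library(adegenet)
library(hierfstat)

vcf <- read.vcfR("plexaura_filtered.vcf")  # hypothetical file name
gi  <- vcfR2genind(vcf)                    # genotypes as a genind object
pop(gi) <- species_labels                  # factor: "homomalla" / "kukenthali"

# Per-locus and overall F-statistics; wc() gives Weir & Cockerham estimates.
hf    <- genind2hierfstat(gi)
stats <- basic.stats(hf)   # per-locus summaries in stats$perloc
wc(hf)$FST                 # overall Weir-Cockerham FST between the species
```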
For phylogenetic reconstruction, all SNPs were filtered using default parameters in SNPhylo ver. 3.695 (Lee et al. 2014) and aligned with MUSCLE ver. 3.8.31 (Edgar 2004). Phylogenies were built as above. We used VCFtools and the script vcf_tab_to_fasta_alignment.pl (Bergey 2012) to extract an alignment of SNPs with Weir–Cockerham FST values > 0.7 (henceforth "high FST SNPs"), which were then used to build a second phylogeny. We performed a GO term analysis to determine whether the high FST SNPs were enriched for specific functions using the R package topGO (Alexa and Rahnenfuhrer 2019). Reads from each colony were also mapped (as above) to the mitochondrial genome of Muricea crassa (NC029697); SNPs were called following the same pipeline as for the transcriptomes, with the exception that SNPs were not filtered based on location. IQ-TREE2 ver. 2.1.2 with 1000 ultrafast bootstraps (Nguyen et al. 2015; Kalyaanamoorthy et al. 2017; Hoang et al. 2018) was used to construct a phylogeny from the mitochondrial SNPs.

Symbiont genotyping

To assess patterns of host-symbiont (Symbiodiniaceae) specificity, we used fragment length analysis of the chloroplast 23S-rDNA hypervariable region to identify symbionts to the genus level following Santos et al. (2003). All symbionts detected were in the genus Breviolum (former Clade B). We then genotyped colonies at five Breviolum microsatellite loci: B7Sym4, B7Sym15, B7Sym34, B7Sym36 (Pettay and LaJeunesse 2007), and GV2_100 (Andras et al. 2009); primers and amplification conditions followed published protocols, with annealing temperatures of 54–57 °C. Amplicons were tagged with fluorescently labeled primers and visualized following Santos et al. (2003) and then scored by eye against 50–350 bp size standards (LI-COR Biotechnology Division). In some colonies, two alleles were detected at more than one locus. Given that the symbionts are haploid in the vegetative state (Santos and Coffroth 2003), these were assumed to represent different symbiont genotypes. As it was not possible to assign a single multilocus genotype to these samples, we developed a curtailed dataset, excluding colonies where two alleles were detected at more than one locus (curtailed data = 92.7% of the total dataset), and coded alleles as absence/presence data. We assessed population structure of both the original and curtailed datasets using RST and ΦPT in Arlequin ver. 3.5.2.2 (Excoffier and Lischer 2010) with 10,100 permutations and GenAlEx ver. 6.51b2 (Peakall and Smouse 2012) with 9,999 permutations. We corrected for multiple comparisons using Bonferroni correction. We found no significant differences in these metrics between the total and curtailed datasets and performed the remaining analyses with the curtailed dataset. Bayesian clustering analysis was conducted in STRUCTURE ver. 2.3.4 (Pritchard et al. 2000) with a burn-in of 100,000 followed by 100,000 replicates. We assessed the best K with the Evanno method (Evanno et al. 2005) in Structure Harvester (Earl and vonHoldt 2012). STRUCTURE results were fed to CLUMPP ver. 1.1.2 (Jakobsson and Rosenberg 2007) and visualized in DISTRUCT ver. 1.1 (Rosenberg 2004). To reconstruct phylogenetic relationships of symbionts within different populations, the flanking region of the B7Sym15 microsatellite was amplified as above and sequenced for a subset of the samples. Sequences for Breviolum dendrogyrum (MH727217), B. endromadracis (KT149354), B. faviinorum (MH727210), B. meandrinium (MH727212), B. minutum (JX263427), and B. psygmophilum (JN602461) were included to provide phylogenetic context. Sequences were aligned and a maximum likelihood tree was constructed following the methods outlined above.
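A minimal R sketch of how unique multilocus symbiont genotypes can be tallied from microsatellite data of this kind (the paper does not name poppr; the input data frame and host labels are hypothetical):

```r
library(poppr)  # multilocus genotype analysis (assumed here; loads adegenet)

# 'msat' is a hypothetical data frame of allele calls, one row per colony and
# one column per locus (B7Sym4, B7Sym15, B7Sym34, B7Sym36, GV2_100).
gi <- df2genind(msat, ploidy = 1, pop = host_species)  # haploid symbionts

mlg(gi)    # number of unique multilocus genotypes across all colonies
poppr(gi)  # per-population genotype counts and diversity summaries
```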
Results

Sclerite morphology

Sclerite dimensions overlapped substantially between P. homomalla and P. kükenthali, but there were significant differences between species and collection sites and among the colonies within each species and site (Table 2). Plexaura homomalla sclerites differed in club length and width between the three sites: clubs from the Florida Keys were longer and wider than those from either St. John or Puerto Rico (GLM, comparison of marginal means) but had the same L/W ratio (Table 2). Plexaura homomalla spindles from the three locations were similar in size and L/W. For P. kükenthali, spindle length was greater among the St. John samples. As P. kükenthali was absent in the Florida surveys, interspecific comparisons were restricted to the colonies from St. John and Puerto Rico, where we had sympatric collections. Interspecific differences were present in club length and width and in spindle width and L/W ratio.

Table 2 Morphological traits of Plexaura homomalla and P. kükenthali sclerites. Ten of each type of sclerite were measured from each colony. Values are means (standard deviation). Significant statistical tests are noted in bold.

Linear regression of length on width identified significant heterogeneity of slopes between the two species (ANOVA, slope × species interaction, F1,466 = 4.09, p = 0.044; Fig. 2a,b). Plexaura kükenthali spindles were narrower than P. homomalla spindles, and the L/W of the spindles differed between the two species. Discriminant analysis based on the length and width of individual sclerites correctly identified the source species of 64.5% of the clubs and 71.1% of the spindles (SPSS ver. 26; discriminant analysis). When colony means of sclerite lengths and widths were used, the differences between the species were clearer (Fig. 2c,d), and discriminant analysis based on average lengths and widths, using both sclerite types in a single analysis, correctly identified 83.9% of the colonies.

Fig. 2 Linear regressions of length and width measures of individual sclerites of Plexaura homomalla (blue) and P. kükenthali (red) clubs (a) and spindles (b). Length and width measures of Plexaura homomalla (blue) and P. kükenthali (red) clubs (c) and spindles (d), with each point representing the average value for a single colony.

Inclusion of detailed shape information did not provide substantially more information than sclerite lengths and widths alone (Fig. 3). When EFDs of spindles were normalized to the first harmonic, the discriminant analysis found that PC3 differed between the two species (SPSS ver. 26; discriminant analysis, Wilks' lambda, p < 0.001) and identified the species of the spindles in 78% of the cases in the cross-validated groups procedure implemented in SPSS ver. 26.
Limiting the analysis to only the St. John samples reduced identifications to 65% of the cases. Reintroducing sclerite size into the analysis by including length and width did not change the proportion of sclerites correctly identified (78%). Discriminant functions for individual sclerites included PCs 1 and 3 and spindle width. Among cases in which there was more than a single sclerite from a colony, the species identification based on the call for the majority of the sclerites was correct for 87% of the colonies, and repeating the discriminant analyses using colony means instead of individual sclerites led to the correct identification of 87% of the colonies.

Analysis of clubs yielded results similar to those for spindles. EFDs that were normalized on the first harmonic identified significant differences (Wilks' lambda, p < 0.001) between the species in PC1 scores. The discriminant analysis identified the source of 65% of the clubs: 70% for P. kükenthali and 62.5% for P. homomalla. Inclusion of club length and width data in the discriminant analysis led to the identification of 75% of the clubs: 85% for P. kükenthali and 70% for P. homomalla. Exclusion of the Florida Keys P. homomalla colonies reduced the proportion of correct calls to 50% when only the EFD data were included in the discriminant analysis and to 63% when length and width data were included. In cases where more than a single club was analyzed per colony, the majority of the clubs were correctly identified for 58% of the colonies. Discriminant analyses based on colony means, which could combine both club and spindle data, increased the proportion of correct identifications to 93%.

Fig. 3 Principal component (PC) analyses for (a) clubs and (b) spindles of elliptical Fourier descriptors for Plexaura homomalla (blue) and P. kükenthali (red). PC1 and PC2 describe 31.0% and 15.8% of the total variance in club shape, respectively. PC1 and PC3 describe 23.8% and 11.1% of the total variance of spindle shape, respectively.

Reproduction

The most striking difference between the two species was the observation of broadcast spawning of eggs and sperm by P. homomalla and the release of planulae by P. kükenthali. Release of planulae by four P. kükenthali colonies was observed in Lameshur Bay after sunset, at approximately 20:00 h, on July 22–23, 2016 (lunar days 18–19, 3–4 days after the full moon). Planulae that were collected from the P. kükenthali colonies settled and metamorphosed on ceramic tiles several days after being collected. Plexaura homomalla was not observed spawning at that time. Neither spawning nor planulation was observed during the 2017 dives. A single colony of P. kükenthali maintained in the seawater system at VIERS released planulae on the evening of June 29, 2021, 5 d after the full moon. Broadcast spawning of eggs and sperm was observed among P. homomalla colonies that were maintained in sea tables at VIERS on July 19–22, 2019 (lunar days 17–20, 2–5 days after the full moon). In the sea tables, P. homomalla started releasing eggs between 21:00 and 21:30, with different colonies (males and females) starting at different times over a 60 min period. Release continued for approximately 2 h. Eggs collected during the spawning events developed into planulae, which successfully settled (see Tonra et al. 2021 for more details).
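The per-colony gonad volumes plotted in Fig. 4 (below) rest on the spherical assumption stated in the Methods, V = πd³/6; a minimal R sketch with hypothetical diameters:

```r
# Volume of a sphere from its diameter; diameters measured in µm.
gonad_volume <- function(d) pi * d^3 / 6

# Hypothetical diameters (µm) for the 15 polyps dissected from one colony;
# zeros denote polyps in which no gonads were visible.
d <- c(480, 510, 0, 230, 0, 620, 450, 0, 0, 300, 0, 0, 540, 0, 0)

mean(gonad_volume(d))  # average gonad volume per polyp (µm^3)
sum(d > 450)           # count of large (> 450 µm) eggs in the sample
```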
Samples of colonies from South Haulover Bay did not exhibit reproductive synchrony; many of the colonies that were collected did not have visible gonads, and in many cases the visible gonads were not mature (Fig. 4). Among P. homomalla colonies that developed identifiable gonads, we observed an increase in the number of large eggs (> 450 µm) and the presence of identifiable male colonies in July of both years. This was followed by a decrease in the number of large eggs in most female colonies and the disappearance of identifiable male colonies in August. Among P. kükenthali colonies, a small number of samples contained large eggs in June and early July 2018. Males were not observed in dissections of the P. kükenthali samples, but we did not conduct histology on all of the samples, and that result can only be interpreted as an absence of mature males.

Fig. 4 Average gonad volume per polyp (with 95% confidence interval) of 15 polyps from each colony collected on St. John, Virgin Islands. Not shown are zero values for 40 samples in which no gonad tissue was visible at 25× magnification. Solid vertical line separates species; dark vertical line separates years.

Only 17 of the 60 female P. homomalla and 11 of the 36 female P. kükenthali colonies sampled contained large eggs. Despite the small number of reproductive colonies in our samples, there were distinct differences between the two species. The number of eggs found in each polyp differed between dates and between species (log-linear analysis: eggs per polyp by collection date by species, p < 0.001; eggs per polyp by collection date, p < 0.001; eggs per polyp by species, p < 0.001). If only large/mature eggs are considered, there was a significant difference between the two species and between collection times (χ2 analysis, p < 0.001).

Molecular approaches

Sanger sequencing of individual loci resulted in a matrix of 1,337 bp from the mitochondrial genes MutS and ND2 and 1,177 bp from the nuclear introns EF1a, CT2, and I3P. For the samples that had at least four regions sequenced, the matrix comprised 23 samples (nine P. kükenthali, 14 P. homomalla). For this subset of samples, nearly all sites (2,479 bp, 98.6%) were invariant and only 22 sites (0.88%) were parsimony informative, with 12.12% missing data. The maximum likelihood tree from the concatenated gene set (log-likelihood − 3774.36, Fig. 5a) had moderate support for the P. homomalla and P. kükenthali clades (80 ML bootstrap support [BS], 0.99 posterior probability [PP]), while individual gene trees were largely unresolved or contained branches with low support (Fig. S1). The only exception was EF1a (Fig. S1e), which provided moderate support for the differentiation of the two species.

Fig. 5 Phylogenies constructed from (a) concatenated mitochondrial genes (MutS, ND2) and nuclear introns (EF1a, CT2, I3P) and (b) mitochondrial SNPs. Colony identification numbers are provided in parentheses. Support values are given as maximum likelihood bootstraps/posterior probabilities. Support values < 75 BS are not shown.

A total of 260 mitochondrial SNPs were retained after filtering for depth; 26 of these sites were parsimony informative, with a total of 3.32% missing data. In the phylogeny (log-likelihood − 521.17, Fig. 5b), there was strong support for the differentiation of P. homomalla and P. kükenthali (100 BS, 1 PP). Interestingly, one P. homomalla colony (HF3) nested within the P. kükenthali clade (Fig. 5b). By all phenotypic aspects, HF3 is unequivocally P. homomalla (Fig. S8); it was therefore surprising to see this individual nested within P. kükenthali.
Transcriptome sequencing resulted in 73,197,074 reads after filtering of contaminants (average 9,149,634 reads per individual, SE ± 615,899 reads). The de novo transcriptome assembly of P. homomalla resulted in 66,781 contigs, 23,535 (35.24%) of which were annotated: 17,911 from Ensembl Cnidarians, 4,137 from Ensembl Metazoa, and 1,487 from UniProtKB Cnidarians. The P. kükenthali transcriptome resulted in 130,138 contigs, 41,228 (31.68%) of which were annotated: 25,645 from Ensembl Cnidarians, 7,433 from Ensembl Metazoa, and 8,150 from UniProtKB Cnidarians (Table 3, Fig. S2–4). We identified 26,779 SNPs, with an average FST of 0.2196 between the two species. Pairwise comparisons of individual genes allowed us to identify 319 SNPs with Weir–Cockerham FST > 0.7 (ca. 1% of identified SNPs). In total, there were 15,956 genes with successful GO annotation in the P. homomalla transcriptome (4,865 biological process [BP], 5,162 cellular component [CC], 5,929 molecular function [MF]). Within the high FST genes, 15 GO terms were significantly enriched (nine BP, five CC, one MF) by Fisher's exact test (p < 0.01) (Table S2; Fig. S5–7). No terms were significantly enriched after correcting for multiple comparisons using the Benjamini and Hochberg method. The enriched GO terms are related to metabolic processes (e.g., GO:1901575, organic substance catabolic process), with genes such as Derlin and proteasome subunits (e.g., 26S proteasome non-ATPase regulatory subunit 13) associated with the enriched terms. The protein products of these genes are involved in the endoplasmic reticulum-associated protein degradation (ERAD) pathway and other catabolic processes. While further work is needed to validate these differences, our transcriptome-wide analysis suggests that genes involved in protein catabolism may be among the first to differentiate in this system.

Table 3 De novo transcriptome assembly statistics for Plexaura homomalla and P. kükenthali.

A total of 16,859 SNPs were retained after filtering by SNPhylo (ca. 63% of all 26,779 SNPs). Nearly all sites (16,410, 97.34%) were parsimony informative, with only 449 (2.66%) invariant sites. The maximum likelihood (log-likelihood − 277,075.64) and Bayesian trees from the full SNP dataset resulted in the same topology as the mitochondrial genome tree (Figs. 5b, 6a). In the phylogeny constructed from the full SNP dataset, the one colony identified as P. homomalla (HF3) nested within P. kükenthali (Fig. 6a, Fig. S8), whereas the same individual nested in the P. homomalla clade in the high FST SNP phylogeny in both maximum likelihood (log-likelihood − 2729.44) and Bayesian analyses with strong support (100 BS, 1 PP; Fig. 6b).

Fig. 6 Phylogenies constructed from (a) the entire SNP dataset and (b) high FST SNPs. Support values are given as maximum likelihood bootstraps/posterior probabilities. Support values < 75 BS are not shown. Admixture analysis with Ohana for (c) all SNPs and (d) high FST SNPs at the optimal K = 2.

To assess the possibility of hybridization between the two species, as suggested by the mitochondrial genome and SNP phylogenies for the putative hybrid HF3, we performed an admixture analysis with Ohana (Cheng et al. 2017) using default parameters. The analysis was run using both the entire SNP dataset and the high FST SNPs (Fig. 6c,d).
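As a sketch of how such a high-FST subset can be pulled from per-site estimates in R (the column names follow VCFtools' standard per-site FST output; the file names are hypothetical):

```r
# Per-site Weir-Cockerham FST table, e.g., from `vcftools --weir-fst-pop`.
fst <- read.table("plexaura.weir.fst", header = TRUE,
                  na.strings = c("nan", "-nan"))  # hypothetical file name

# Retain sites with FST > 0.7 and write their positions for downstream
# extraction of the "high FST SNP" alignment.
high <- subset(fst, WEIR_AND_COCKERHAM_FST > 0.7)
write.table(high[, c("CHROM", "POS")], "high_fst_sites.txt",
            quote = FALSE, row.names = FALSE, col.names = FALSE)
```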
Both analyses resulted in an optimal K = 2 (all SNPs log-likelihood −129,558.17; high F ST SNPs log-likelihood −807.28), but the analysis with the high F ST SNPs revealed that HF3 could be a potential hybrid individual (Figs. 5 b, 6 a, S8). While all SNPs would likely be affected by admixture, the statistical power to detect admixture is much stronger for highly differentiated SNPs than when using the complete dataset (e.g., Ma et al. 2019 ). In other words, recent hybrids are easier to spot in datasets where SNPs are highly differentiated. Given the placement of the mitochondrial genome of HF3 (Fig. 5 b), if this individual is a hybrid, as suggested, the maternal parent is P. kükenthali and the paternal parent is P. homomalla. The possibility of cryptic hybrids could complicate taxonomic classification, and further highlights the importance of using genome-wide analyses for species identification. Symbiont specificity Of the 77 colonies analyzed for symbiont genotype, six harbored multiple alleles at more than one locus (three P. kükenthali from St. John; one P. kükenthali and two P. homomalla from Puerto Rico) and were excluded to create a curtailed dataset. Population pairwise R ST and Φ PT were significantly different for all populations for both datasets (full and curtailed, Tables S4 & S5), but were not significantly different between the full and curtailed datasets. The removal of those six samples (to generate the curtailed dataset) only reduced the total number of alleles by three (full = 36, curtailed = 33). Thus, the curtailed dataset was used to compare symbionts within P. homomalla and P. kükenthali . Among the five analyzed symbiont microsatellite loci, between three and ten alleles were detected at each locus; a total of 33 alleles were recovered (Table S6). Of these, eight alleles were unique to symbionts from P. homomalla colonies and ten were unique to symbionts from P. kükenthali colonies (Table S6). A total of 55 unique symbiont genotypes were identified among the 71 colonies of P. homomalla and P. kükenthali , with the number of genotypes in a given region ranging from five to twenty-one (Table S7). Genotypes were not shared between regions and, other than in four cases, were not shared between reefs or host species. Bayesian clustering analysis identified two symbiont populations ( K = 2, Fig. 7 a). Plexaura kükenthali colonies harbored a single symbiont population across all regions ( P. kükenthali phylotype) (Fig. 7 a). A second symbiont population, which differed from the P. kükenthali phylotype, was found in P. homomalla colonies from Puerto Rico and Florida ( P. homomalla phylotype). However, in St. John, while most P. homomalla colonies harbored the P. homomalla phylotype, the P. kükenthali symbiont phylotype was found in eight P. homomalla colonies (Fig. 7 a). Fig. 7 ( a ) Results of STRUCTURE analysis for symbionts genotyped at five microsatellite loci for K = 2. ( b ) Maximum likelihood phylogeny generated from the B7Sym15 microsatellite flanking region for several P. homomalla and P. kükenthali symbionts and additional Breviolum species. Symbionts from P. homomalla colonies that fall in the P. kükenthali phylotype are bolded and denoted with asterisks. Branch supports were generated from 1000 ultrafast bootstrap replicates; only BS > 75 are shown. Phylogenetic relationships among symbiont genotypes provide robust support for two clades, representing the two host species (Fig. 7 b).
One group included the majority of the P. homomalla symbionts ( P. homomalla phylotype), including those in the host HF3 that had grouped with P. kükenthali colonies based on host genetic data (Figs. 5 and 6 a). Symbionts in the P. kükenthali phylotype formed a second clade that included symbionts from two P. homomalla colonies. It is noteworthy that symbionts from these two P. homomalla colonies grouped with St. John P. kükenthali symbionts in the STRUCTURE analysis (Fig. 7 a), and one of them harbored the same symbiont genotype as a P. kükenthali colony. Discussion We used a multifaceted approach to clarify the species boundaries of two widely recognizable Caribbean octocorals, whose taxonomic history has been uncertain for decades. Although García-Parrado and Alcolado ( 1998 ) officially resurrected P. kükenthali as a distinct species over two decades ago, they based their conclusion on depth distribution patterns of the two groups, without analyses of morphological, reproductive, or molecular divergence. Here, multiple lines of evidence reinforce the designation of these two groups as separate species, highlighting the utility of an integrated approach using multiple data sources in octocoral systematics. Furthermore, this study presents the assembly and annotation of two new reference transcriptomes of Caribbean octocorals. Sclerite morphology Traditional octocoral taxonomy relies heavily on sclerite morphology. However, these traits exhibit high levels of variability in form (Kim et al. 2004 ; Joseph et al. 2015 ). Our morphometric analyses identified differences between the two species in both the size (Table 2 ) and, to a lesser extent, the shape of the clubs and spindles. The differences were evident when average lengths and widths of the two species were compared (Fig. 2 c, d). Clubs of P. homomalla were thicker and more robust in appearance than those of P. kükenthali (Fig. 1 c, d). However, neither EFDs nor length/width measurements unambiguously identified the source of any single sclerite. We found high levels of morphological variability in these characters, with individual colonies within a species significantly differing in length and width measures of both spindles and clubs (Table 2 ). The failure of the discriminant functions of both length/width measures and EFDs to fully distinguish between the two species highlights that morphological plasticity in these characters limits their utility in taxonomic studies. EFDs have been useful in analyses of other taxa (e.g., Carlo et al. 2011 ), but in this case quantifying sclerite form was not very different from the analyses based on length and width alone. There will be many cases in which size alone cannot differentiate species, but this study suggests that the far more readily obtained size metrics should be analyzed first. It is also likely that EFDs are less effective in characterizing the shape of complex sclerites such as clubs, which exhibit little symmetry beyond having a dominant axis. In these cases, positioning of the sclerite on the slide will dramatically affect the profile and mask similarities and differences between sclerites. Reproductive cycles Plexaura homomalla had been observed spawning in the field (Bastidas et al. 2005 ) and, prior to that, broadcast spawning was inferred from the absence of planulae within polyps during times when mature eggs and sperm were present (Goldberg and Hamilton 1974 ; Martin 1982 ; Fitzsimmons-Sosa et al. 2004 ). Behety-Gonzalez and Guardiola ( 1979 ) observed a similar cycle of gonad development in P.
kükenthali. Thus, we were surprised to observe planulae being released by four P. kükenthali colonies in 2016. While differing reproductive systems are known within a single octocoral genus (McFadden et al. 2001 ), we are not aware of any reports of both broadcast spawning and planulation within a single octocoral species. We were unable to replicate this observation during 2019, but planulation of P. kükenthali was observed in 2021. The clear pattern of peak reproduction during the summer that has been observed elsewhere for both species (Goldberg and Hamilton 1974 ; Behety-Gonzalez and Guardiola 1979 ; Martin 1982 ; Fitzsimmons-Sosa et al. 2004 ) was not evident in the samples from South Haulover Bay. The simultaneous sampling of both species revealed few mature colonies in both 2017 and 2018. Still, the greatest gonad content in P. homomalla was observed in July, whereas P. kükenthali gonad content was greatest in June (Fig. 4 ). It is possible that in both years we sampled either after a substantial number of colonies had already spawned, or months before spawning. Complete sampling across the entire summer, and possibly the entire year, is needed to fully characterize the species' reproductive behavior, but this study suggests that there are differences in their reproductive cycles. The overlap in the timing of reproduction may be sufficient to allow hybridization between the two species. Sperm from P. kükenthali and eggs and sperm from P. homomalla could be present in the water column at the same time, and this could allow for cross-fertilization, even if one of the species broods its eggs. Differences in gamete-reception proteins have been suggested to prevent hybridization during mass spawning (Babcock 1995 ). While other marine invertebrates such as sea urchins and abalone have become model systems for studying the role and evolution of bindin and lysin gamete recognition proteins (Swanson and Vacquier 2002 ), such receptor proteins remain poorly understood in anthozoans. Future work may capitalize on the resources and data presented here to study the role of gamete recognition in octocorals with different (albeit overlapping) reproductive cycles and modes of reproduction. Molecular insights With the exception of EF1a, gene trees constructed from single mitochondrial and nuclear loci did not provide enough resolution to confidently separate our two taxa (Fig. S1). However, the concatenation of two mitochondrial genes and three nuclear introns resolved two clades, corresponding to the two species (Fig. 5 a). SNPs across the transcriptome showed differentiation between P. homomalla and P. kükenthali ( F ST = 0.2196). The most differentiated SNPs had F ST values of ≥ 0.85 (210 out of 318 high F ST SNPs), and 30 SNPs were within loci that had annotations at the gene level. Some of these genes encode proteasome subunits (26S proteasome regulatory subunit RPN9, proteasome subunit beta and alpha type), ubiquitin-associated proteins (e.g., E3 ubiquitin-protein transferase), transport and signaling proteins (e.g., COP9 signalosome complex subunit 6), and nucleotide replication and transcription proteins (e.g., DNA-directed RNA polymerases I and III subunit RPAC1). Interestingly, inositol-3-phosphate synthase ( I3P ) was one of these highly differentiated genes, although the gene tree generated from the I3P intron did not resolve the two species (Fig. S1d). Further investigation of these genes will be required to elucidate whether they play a role in octocoral speciation.
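The per-SNP differentiation scan behind these F ST values is straightforward to reproduce. The Python sketch below computes Weir–Cockerham F ST with scikit-allel; the genotype array, sample split and random values are hypothetical stand-ins, with only the F ST > 0.7 cutoff taken from the analysis above.

```python
# Hedged sketch: per-SNP Weir-Cockerham F_ST between two host species.
# The genotype data and the 4-vs-4 sample layout are placeholders.
import numpy as np
import allel

rng = np.random.default_rng(0)
# shape: (n_variants, n_samples, ploidy); biallelic calls coded 0/1
genotypes = allel.GenotypeArray(rng.integers(0, 2, size=(1000, 8, 2), dtype="i1"))
subpops = [[0, 1, 2, 3], [4, 5, 6, 7]]  # e.g. P. homomalla vs P. kükenthali

# Variance components a (among populations), b (among individuals within
# populations) and c (within individuals), per variant and allele
a, b, c = allel.weir_cockerham_fst(genotypes, subpops)

with np.errstate(divide="ignore", invalid="ignore"):
    fst_per_snp = np.sum(a, axis=1) / np.sum(a + b + c, axis=1)
fst_overall = np.sum(a) / np.sum(a + b + c)  # component-wise average

high_fst = np.flatnonzero(fst_per_snp > 0.7)  # candidates worth annotating
print(f"average F_ST = {fst_overall:.4f}; {high_fst.size} SNPs above 0.7")
```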
We found a putative hybrid in our samples, indicating the potential for hybridization between the two species (Fig. 6 ). Hybridization is not uncommon in octocorals (McFadden and Hutchinson 2004 ; Quattrini et al. 2019 ; Erickson et al. 2021 ) and scleractinians (Willis et al. 1997 , 2006 ). Although hybridization may blur and complicate species boundaries, it may also facilitate diversification on reef systems by leading to novel morphologies and ecological niches (e.g., Vollmer and Palumbi 2002 ). The possible overlap in the timing of reproduction of P. homomalla and P. kükenthali (Fig. 4 ) suggests that pre-zygotic isolation might be permeable in these groups. In mass-spawning corals, where many species spawn synchronously on the same reef, other isolating mechanisms such as differing gamete-reception proteins have been suggested to prevent hybridization (Willis et al. 1997 ). Although our admixture analysis suggests that P. homomalla and P. kükenthali may form hybrids (Fig. 6 ), our results show that hybridization might be sporadic, as it has not resulted in the complete swamping of the morphological and genetic differences of the two groups. Therefore, other than for the putative hybrid HF3, we suggest that gross morphology based on branch thickness and color remains reliable for field identification. Symbiont specificity Studies suggest that specific host-symbiont pairings are more common than previously thought, and species within the symbiont genus Breviolum have specific host groups (e.g., Prada et al. 2014 ; Lewis et al. 2019 ). Plexaura homomalla and P. kükenthali tended to harbor distinct Breviolum lineages, with symbiont populations in P. homomalla and P. kükenthali generally clustering according to host species rather than geography (Fig. 7 ). For example, P. homomalla symbionts from Puerto Rico clustered with individuals from Florida rather than clustering with P. kükenthali symbionts from Puerto Rico (Fig. 7 ). Furthermore, the presence of unique symbiont alleles for each host species suggests a lack of gene flow between the symbionts of P. homomalla and P. kükenthali (Table S6). Finally, phylogenetic analysis based on the flanking region of the B7Sym15 microsatellite locus also supports two host-specific symbiont phylotypes (Fig. 7 b). Despite these clear clustering patterns, further study is needed into the presence of shared symbiont phylotypes between the host species in St. John. It is unlikely that changes in symbiont types due to switching or shuffling (sensu Baker 2003 ) would be responsible for these shared phylotypes, as bleaching is very rare in Plexaura (Lasker 2003 ; Prada et al. 2010 ). It is possible that symbiont loss does occur and is not observed due to the host coloration; however, in most cases where symbiont genotypes were followed across bleaching events or stress in octocorals, the symbionts did not change (Goulet and Coffroth 2003 , Kirk et al. 2005 , Coffroth unpubl. data; but see Lewis and Coffroth 2004 for an exception). Expanding the number of loci examined might differentiate these; however, sequence analysis of the flanking region of the B7Sym15 microsatellite suggests a close relationship between some St. John P. homomalla symbionts and the P. kükenthali symbiont phylotype (Fig. 7 b). Alternatively, local microenvironments might have selected for this pairing, as all P. homomalla colonies that harbored the P. kükenthali symbiont phylotype were collected in St. John, USVI; however, all P. homomalla with the P.
kükenthali symbiont phylotype were collected from the same reef and at the same depth where P. homomalla with the P. homomalla phylotype were also found. Host specialization is recognized as an important factor in the speciation of these symbionts (e.g., LaJeunesse 2005 ; Finney et al. 2010 ; Thornhill et al. 2014 ) and is likely a result of selection for a given host habitat or resource, where symbionts distinguish between the host species based on differences in the intracellular environments (Thornhill et al. 2014 ; Lewis et al. 2019 ). The factors underlying the symbiont genotypes shared between the host species are unclear and remain an area for investigation. Regardless, the separation of P. homomalla and P. kükenthali symbionts by host species over a broad geographic range further delimits these two octocoral species. Conclusions Using an integrative approach that included morphological analyses, reproductive observations, molecular evaluations, and symbiont diversity, we were able to reinforce that P. homomalla and P. kükenthali are, in fact, distinct species. While there are key external morphological characters that differentiate the two species, differences in reproductive timing and microscopic morphological traits can also be used to distinguish the species. Molecular techniques used to identify and delimit taxa have become commonplace in octocoral systematics and were an invaluable tool in this study. This work also presents two new transcriptomes of Caribbean octocorals that can be used to further the understanding of topics ranging from octocoral phylogenomics to fine-scale plexaurid population structure. Finally, it is important to emphasize the need to take a multifaceted approach to investigate octocoral systematics, as individual data sources can fail to identify sufficient evidence to inform taxonomic classifications. Data Availability Sequences have been deposited in GenBank (accession numbers MW239684–MW239875). RNASeq reads are available on SRA under project number PRJNA675782. Code, alignments, and transcriptome assemblies and annotations are available at . | On a night dive off the coast of St. John in the U.S. Virgin Islands in 2016, two coral reef researchers saw something unexpected: A coral colony with slender, waving branches was releasing larvae into the water. While this method of reproduction isn't unusual, the behavior in question was surprising because the coral was one called Plexaura kükenthali. For decades, scientists had debated whether P. kükenthali was its own species, or the same species as another coral called Plexaura homomalla. Because P. homomalla was known to send sperm and eggs into the water—not fully formed larvae—the 2016 sighting added a new dimension to the conversation. "While I had always wondered if the two forms were really two species, that was the moment we realized they had to be different," says University at Buffalo professor and coral scientist Howard Lasker, who made the 2016 observation with UB Ph.D. student and coral scientist Angela Martinez Quintana. A more definitive call, however, would require additional evidence. So are these corals one species, or two? It seems like an esoteric question: What, exactly, is a species? "You'll never get to a great answer, and different people will tell you different things," says Jessie Pelosi, a 2019 UB graduate in environmental geosciences and biological sciences. "With P. kükenthali and P.
homomalla, there's been this conflict in terms of are they different species, or are they just one and the same in reality? They were described as separate species, then merged together, and then separated again." A colony of the coral Plexaura kükenthali in the U.S. Virgin Islands. Credit: Howard Lasker. Pelosi set out to learn more after hearing about this history from Mary Alice Coffroth, another UB professor and coral scientist, and Lasker. Both encouraged Pelosi to explore the topic further through an undergraduate research project at UB, with the goal of assessing, more comprehensively, whether the two corals were indeed two different species, as people were now saying. With Coffroth and Lasker as advisers, Pelosi assembled an interdisciplinary team from UB, The University of Rhode Island and Auburn University to take an in-depth look at P. kükenthali and P. homomalla. The result is a new study published on Sept. 20 in the journal Coral Reefs. Yes, they are two species, scientists say "I think all the data suggests that they are two separate species," says Pelosi, the first author, who completed the research at UB and is now pursuing a Ph.D. in biology at the University of Florida. "We looked at a bunch of different factors, including morphological, genomic, reproductive and symbiotic differences." Calcium carbonate structures called clubs and spindles isolated from Plexaura homomalla, as viewed under a microscope. This visual is a composite that combines focus-stacked microscope images of individual sclerites. Areas of light and shadow have been enhanced to better visualize the structures. A new study finds variation in the shape and size of sclerites produced by P. kükenthali and P. homomalla. Credit: Jessie Pelosi, adapted from an image published in Coral Reefs. A colony of the coral Plexaura homomalla in the Florida Keys. Credit: Mary Alice Coffroth, as published in Coral Reefs. Morphologically, the team saw variation in the shape and size of calcium carbonate structures called clubs and spindles that each species produces. In addition, the scientists identified "strong genetic differences across the genome between the two species," and observed differences in the timing and method of reproduction, Pelosi says. Finally, he adds, P. kükenthali and P. homomalla tend to host different types of algae as symbionts. The research builds on a 1998 paper in the journal Avicennia by Pedro García-Parrado and Pedro M. Alcolado, who characterized P. kükenthali and P. homomalla as separate species after examining some aspects of their morphology and depth preferences. Pelosi points out that "there are some interesting intricacies happening," with some overlap between the two species' features. For example, some clubs and spindles on P. kükenthali and P. homomalla are very similar, even if overall trends in shape and size point to a difference in morphology. Also, researchers discovered one coral colony that appears to be a hybrid between P. kükenthali and P. homomalla, suggesting that interbreeding between these closely related species still occurs, although the frequency of this phenomenon is unknown. Still, taken together, the evidence points to P. kükenthali and P. homomalla being distinctive species, Pelosi and Lasker say. "It's a really nice study because it cuts across so many different modes," says Lasker, Ph.D., a professor of geology and of environment and sustainability in the UB College of Arts and Sciences, and senior author on the new paper.
"It just points in some ways to how little we know about these animals." (Yes, corals are animals.) Pelosi notes that beyond simple curiosity, understanding the biology of soft corals like P. kükenthali and P. homomalla is important for practical reasons: "We're seeing that in some areas, octocorals, or soft corals, seem to be able to better withstand the effects of climate change than stony corals, or hard corals. So, soft corals are really important players in preserving reef biodiversity and providing habitat for reef fishes and small invertebrates such as snails and shrimps." | 10.1007/s00338-021-02175-x |
Nano | Imaging electric charge propagating along microbial nanowires | Nature Nanotechnology. DOI: 10.1038/nnano.2014.236 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/nnano.2014.236 | https://phys.org/news/2014-10-imaging-electric-propagating-microbial-nanowires.html | Abstract The nanoscale imaging of charge flow in proteins is crucial to understanding several life processes, including respiration, metabolism and photosynthesis 1 , 2 , 3 . However, existing imaging methods are only effective under non-physiological conditions or are limited to photosynthetic proteins 1 . Here, we show that electrostatic force microscopy can be used to directly visualize charge propagation along pili of Geobacter sulfurreducens with nanometre resolution and under ambient conditions. Charges injected at a single point into individual, untreated pili, which are still attached to cells, propagated over the entire filament. The mobile charge density in the pili, as well as the temperature and pH dependence of the charge density, were similar to those of carbon nanotubes 4 and other organic conductors 5 , 6 , 7 . These findings, coupled with a lack of charge propagation in mutated pili that were missing key aromatic amino acids 8 , suggest that the pili of G. sulfurreducens function as molecular wires with transport via delocalized charges, rather than the hopping mechanism that is typical of biological electron transport 2 , 3 , 9 . Main The proteinaceous filaments of some bacteria, generically referred to as 'microbial nanowires', appear to be capable of transporting electrons over multiple cell lengths 9 , 10 , 11 . Long-distance extracellular electron transport along microbial nanowires has been proposed to play an important role in the global cycling of carbon and metals, in several bioenergy strategies 9 , 10 and in the growth of some pathogenic microbes 12 . This is because microbial nanowires can facilitate electron transfer to environmentally significant extracellular electron acceptors, such as insoluble Fe( III ) oxides 13 , and can provide cell-to-cell electrical connections that promote the exchange of energy and possibly other information 9 , 14 . The mechanisms for long-range electron transport in microbial nanowires have primarily been studied in Shewanella oneidensis and G. sulfurreducens . Several lines of investigation have suggested that electron transport along S. oneidensis nanowires is via electron hopping between cytochromes aligned along the pili filaments, representing an extended version of typical electron transport chains inside microorganisms 9 , 15 . In contrast, the cytochromes associated with the pili of G. sulfurreducens 16 are spaced too far apart for electron tunnelling/hopping 17 , and scanning tunnelling microscopy (STM) has further suggested that pili conductivity cannot be attributed to cytochromes 18 . The conductivity of G. sulfurreducens outer surface protein preparations enriched in pili exhibited responses to temperature and proton doping similar to those observed in organic metals 19 such as disordered polyaniline 20 , 21 and carbon nanotubes 22 .
However, the presence of other proteins in the outer surface preparations and the lack of prior precedent 2 , 3 have led to continued scepticism about this proposed electron transport mechanism 11 . We therefore directly examined charge flow and distribution along individual native pili of G. sulfurreducens with ambient electrostatic force microscopy (EFM). EFM has emerged as a powerful tool for imaging charge propagation at the nanoscale, because EFM-based techniques are very sensitive to local charges 4 and can image charge distribution within a single molecule 23 . Although not previously applied to proteins, EFM has been effectively used to visualize charge delocalization in synthetic nanomaterials such as carbon nanotubes 4 and nanostructures composed of organic molecules 5 , 7 . G. sulfurreducens strain KN400 was grown for charge propagation studies as described previously 17 , 19 . Transmission electron microscopy (TEM) revealed that, as expected, the cells produced both pili and flagella ( Fig. 1a ). Cells were adsorbed onto freshly cleaved mica and gently air dried, maintaining the pili under hydrating conditions 17 . Intact filaments still attached to cells were located and their charge distribution was mapped with the previously described two-pass technique 24 , 25 . In the first pass of the scan, the filament topography was imaged by scanning the conductive tip near the surface in non-contact atomic force microscopy (AFM) mode at a distance where the van der Waals forces are dominant ( Fig. 1b,c ). Height profiles of the pili ( ∼ 3 nm) and flagella ( ∼ 12 nm) were consistent with the expected diameters of these filaments ( Fig. 1d ). In the second pass of the scan, the tip was lifted to a height of 20 nm above the filament, where the electrostatic forces are dominant. The initial charge state of the sample was imaged in the lift mode ( Fig. 1e ) 4 , 24 , 25 using a tip biased at the probe voltage, V EFM = ±3 V, typically used for mapping charge delocalization in carbon nanotubes 4 . In the lift mode, local electric force gradients acting on the tip cause shifts in the phase of the cantilever oscillation (EFM phase shifts) that correspond to the different regions of the sample. These EFM phase shifts are represented as different colours, with a bright colour in the EFM scan indicating attractive interaction between the tip and the sample and a dark colour representing repulsive interaction (see Methods for details). EFM phase shift measurements can therefore be used to map the charge distribution of a sample 4 , 24 , 25 . Figure 1: Strategy for direct visualization of charge propagation along native bacterial proteins with ambient EFM. a , TEM image of cells expressing pili and flagella filaments. Scale bar, 200 nm. b , Schematic of AFM used to image intact pili and flagella that were attached to cells adsorbed on a mica surface. c , AFM image of cells with pili and flagella filaments. Scale bar, 1 µm. d , Height profile of pilus (red) and flagellum filament (black) at the locations indicated in c . Pilus height, ∼ 3 nm; flagellum height, ∼ 12 nm. e – g , Schematic of set-up used to visualize charge propagation. In e , the initial charge distribution in filaments (black) is mapped in the first EFM scan. In f , charges are injected into the filaments by gently contacting the conductive AFM tip with a single point on the filaments (injected charges are shown as a white dot). In g , the propagation of injected charges (white) is visualized with the second EFM scan.
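To get a feel for the magnitude of these phase shifts, the toy calculation below estimates the capacitive contribution from the cantilever parameters given in the Methods (quality factor 186, spring constant 2.8 N m −1 ) at the 20 nm lift height and 3 V probe bias used here. The sphere–plane tip model and the 25 nm tip radius are assumptions, so the result should be read only as an order-of-magnitude estimate.

```python
# Toy model: capacitive EFM phase shift for a biased tip above a surface.
# Sphere-plane capacitance and the tip radius are assumed, not measured.
import numpy as np

EPS0 = 8.854e-12             # vacuum permittivity, F/m
Q_FACTOR, K = 186, 2.8       # cantilever quality factor, spring constant (N/m)
R, Z, V = 25e-9, 20e-9, 3.0  # tip radius (assumed), lift height, probe bias

def d2C_dz2(z, r):
    """Second derivative of C(z) = 2*pi*eps0*r*ln(1 + r/z)."""
    return 2 * np.pi * EPS0 * r**2 * (2 * z + r) / (z**2 * (z + r) ** 2)

# F = 0.5*C'(z)*V^2, so dF/dz = 0.5*C''(z)*V^2; near resonance the phase
# shift is approximately (Q/k)*dF/dz (small-shift limit, radians).
force_gradient = 0.5 * d2C_dz2(Z, R) * V**2
print(f"phase shift ~ {np.degrees(Q_FACTOR / K * force_gradient):.0f} degrees")
```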
In the next step, the EFM cantilever was biased at an injection voltage of V INJ = +10 V, the voltage typically used for visualization of charge delocalization in carbon nanotubes 4 . Charges were injected at a single point by gently touching the pili ( Fig. 1f ). Propagation of the injected charges was then visualized in the lift mode using another EFM scan carried out under the same scan conditions 4 , 5 , 7 ( Fig. 1g ; for full methodology see Methods ). With this approach, the propagation of the injected charges can be observed as a sharp change in the colour of the EFM phase shift. Upon charge injection, charges propagated along the pili ( Figs 2 and 3 ), similar to the behaviour reported previously for carbon nanotubes 4 . Charges injected at a single point on pili propagated along the entire filament in the scan area, even in regions in which there were no other proteins associated with the pili ( Figs 2 and 3 ). The injected charges propagated rapidly over micrometre distances along the pili, with nearly homogeneous distribution over the whole length, suggesting that the injected charges are relatively mobile over the full pili filament. This visualization of charge propagation along the pili demonstrated that the pili function as molecular wires. Figure 2: EFM imaging demonstrates charge propagation along pili filaments. a – d , AFM height images ( a , b ) and corresponding EFM phase images ( c , d ), before and after charge injection, respectively. Charges were injected at the point indicated by the red arrow. All scale bars, 100 nm. EFM parameters: V INJ = +10 V, V EFM = +3 V. Upon charge injection ( c , d ), charges spread along the filaments (shown as a black charge cloud due to the positive EFM probe polarity used) and also accumulated along the cytochrome-like globule attached to the pilus filament. Dotted yellow lines are drawn to guide the eye. e , EFM phase shift profile of pili before (black) and after (red) charge injection at the locations shown in c and d . Figure 3: Visualization of charge propagation along pili filaments. a , AFM image of pili filaments connected across two cells. The region in the yellow square is shown in b – d . Scale bar, 400 nm. b – d , EFM phase shift images of pili before ( b ) and after ( c , d ) charge injection. Charges were injected at the point indicated by the red arrow. EFM parameters: V INJ = +10 V, V EFM = −3 V. Scale bars in b – d , 100 nm. b , EFM image of pili before charge injection. c , EFM image taken ∼ 45 min after charge injection. The propagation of injected charges along the pili is visible as a white charge cloud due to the negative EFM probe polarity used. Charges could not propagate along the bottom left pilus, which was broken before charge injection in the region indicated by the red square. d , EFM image taken ∼ 80 min after charge injection showing the partial dissipation of injected charges under ambient conditions. The dotted yellow lines are drawn to guide the eye. e , EFM phase shift profile of pili before (black) and after (red) charge injection at the locations shown in b and c . Upon charge injection there was no change in EFM phase shift for the left, broken filament, but the right filament showed a large increase in phase shift. In contrast, if charges were injected on the insulating mica substrate at a position just outside the pili, there were no changes in EFM phase shifts ( Supplementary Fig. 1 ).
There was no charge propagation along flagella ( Supplementary Fig. 2a–d ), which are not expected to be conductive, nor along the pili of the Aro-5 strain of G. sulfurreducens , which was genetically altered to produce pili with low conductivity 8 ( Supplementary Fig. 2e–g ). The lack of charge propagation in flagella as well as in pili of the Aro-5 strain demonstrates that the charge propagation can be attributed to the internal structure of the pili filaments. The charges only propagated along pili when a positive voltage bias, V INJ = +10 V, was applied for charge injection. When a negative voltage bias of V INJ = −10 V was applied to the tip during injection experiments, the charges failed to propagate along the pili ( Supplementary Fig. 3 ). These experiments confirmed the previous finding that the charge carriers in pili are holes 19 . In some instances, globular structures were associated with the pili ( Fig. 2a ). These structures have been observed previously 17 and were probably the c -type cytochrome OmcS, which is known to be localized on the pili 16 . When charges were injected into such pili, the charges propagated along the pili and also accumulated in the cytochrome-like globules associated with the pili ( Fig. 2d ). This pattern of charge propagation is consistent with the model of extracellular electron transfer for G. sulfurreducens in which electrons move out of the cell and flow along pili to provide electrons to OmcS, which can serve as the terminal reductase for electron acceptors such as Fe( III ) oxide 16 , 17 , 26 . When multiple pili were in contact with each other, charges propagated from one filament to another ( Fig. 3 ). This observation is consistent with the previous finding that networks of pili can conduct electrons over distances that exceed the length of individual pili 19 . In contrast, charges did not propagate from one pilus to another when they were near one another, but not in direct contact ( Fig. 3 ). Charge propagation along two connected pili was interrupted when a pilus was intentionally broken using an AFM tip ( Fig. 3c , red square). Injected charges did not propagate through the broken filament; only continuous filament structures were capable of propagating charges. As expected, over time, injected charges partially dissipated under the ambient conditions used in these experiments ( Fig. 3d ) 27 . The differences in the distribution of charges around the filaments ( Fig. 3 ) and within the filaments ( Fig. 2 ) were previously studied in detail for carbon nanotubes (ref. 4 , and references therein). As a result of the ambient conditions used in our experiments, the pili filaments discharge some of the charges in a manner similar to carbon nanotubes, thus creating a halo around the filaments ( Fig. 3 ). The extent of discharge is determined by conditions such as the humidity of the environment. Charge distribution was quantified by measuring the EFM phase shift Δ Φ as a function of different tip biases V EFM . The charge density in the pilus before and after charge injection was extracted by fitting the equation Δ Φ = A V EFM + B V EFM ² (1) to the measured EFM phase shift as a function of EFM probe bias, to carefully separate the charge and capacitive coupling effects. The first term in equation (1) denotes the contribution of coulombic (Δ Φ Q ) effects, whereas the second term represents the contribution of induced polarization effects (Δ Φ C ), with A (° V −1 ) and B (° V −2 ) as fitting parameters 4 , 5 , 7 .
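The separation of the two terms in equation (1) amounts to a quadratic fit. A minimal sketch, using synthetic phase-shift values and hypothetical A and B, recovers the fitting parameters and the ratio R = (Δ Φ Q )/(Δ Φ C ) used in the quantification below.

```python
# Sketch of the equation (1) fit: dPhi = A*V + B*V^2; the data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def phase_model(v, a, b):
    return a * v + b * v**2  # a: coulombic term (deg/V), b: capacitive (deg/V^2)

v_bias = np.linspace(-3, 3, 13)  # EFM probe voltages, V
rng = np.random.default_rng(1)
dphi = phase_model(v_bias, 0.8, 0.35) + rng.normal(0, 0.02, v_bias.size)

(a_fit, b_fit), _ = curve_fit(phase_model, v_bias, dphi)

v_efm = 3.0                      # probe bias used in the scans
ratio = a_fit / (b_fit * v_efm)  # R = (A*V)/(B*V^2) = A/(B*V)
print(f"A = {a_fit:.3f} deg/V, B = {b_fit:.3f} deg/V^2, R = {ratio:.2f}")
```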
There was quantitative agreement between the experimental data and equation (1) ( Fig. 4a ), enabling the estimation of charge density in the pili. Plots for multiple pili had the same features. Upon charge injection there was a prominent change in the functional relationship between the EFM phase shift and the probe bias, demonstrating significant charge injection into the pili. The ratio R = (Δ Φ Q )/(Δ Φ C ) was used to compute the charge density in pili before and after charge injection 4 , 5 , 7 . Details of the quantification procedure are presented in the Methods. Figure 4: Quantitative measurements of charge propagation in pili filaments with EFM. a , Representative EFM phase shift for pili as a function of EFM tip voltage before and after charge injection into the pili. Fitting results were obtained using equation (1). b , Temperature dependence of charge density of pili. The charge density in pili before injection (red curve) was computed using the procedure described in the main text. The blue curve represents the total increase in charge density upon injection as a function of temperature. c , Increase in the charge density of pili filaments at lower pH. All values represent mean ± s.d. of three biological replicates. The charge density of the pili increased approximately tenfold upon charge injection ( P = 0.035) ( Fig. 4b ). After charge injection, the increase in the mobile surface charge density in pili was ∼ 6 × 10 5 charges per µm 2 . This is comparable to the charge density in metallic carbon nanotubes 4 as well as nanostructures composed of the crystalline organic semiconductor pentacene 7 and π-conjugated oligomers 5 , which have been reported to exhibit charge delocalization using a quantitative EFM procedure similar to that described here. The volume charge density of pili is on the order of 1 × 10 21 charges per cm 3 , which is comparable to that in organic metals, such as the benchmark conducting polymer poly(3-hexylthiophene), which, when electrochemically doped, has shown signatures of metallic-like transport 6 . Notably, the injection of charges into the pili was reversible. The charge density returned to its pristine value when the charged pili were touched by a grounded conductive tip to remove the injected charges ( Supplementary Fig. 4 ). This reversibility of charge injection into the pili further demonstrates that injected charges are causing the increase in the charge density of pili filaments. The pristine charge density before charge injection, as well as the increase in charge density of pili as a result of charge injection, were comparable over a range of temperatures ( Fig. 4b ). Despite greater variability in the measurements at temperatures above 290 K, the values were not significantly different ( P = 0.14). This temperature independence of the charge density is a hallmark of metallic-like conductivity 28 (charge density decreases significantly in semiconductors as temperature is lowered 28 ). Hopping transport is strongly temperature-dependent, and the hopping rate decreases upon cooling 6 , 20 , 28 . Remarkably, charge propagation in pili upon charge injection was not hindered at lower temperatures over the duration of the experiment. The lack of decrease in charge density in pili upon cooling further suggested that the transport mechanism is not hopping. There was an approximately 100-fold increase in the charge density of pili upon lowering the pH from pH 7 to pH 2 ( Fig.
4c ) ( P = 0.028), indicating that pili can be doped at lower pH values, similar to carbon nanotubes 22 and other organic metals such as disordered polyaniline 21 . These results also suggest that the previously reported pH-dependent increase in the conductivity of pili networks 19 can be attributed to the increase in charge density of pili due to the doping effect at lower pH. The direct observation reported here that charges injected into pili at a single point were able to propagate instantaneously and almost uniformly over the entire pili filament, in a manner similar to that previously reported for carbon nanotubes 4 as well as nanostructures of crystalline organic molecular films 7 and π-conjugated oligomers 5 , suggests that the charges are relatively mobile and delocalize, possibly due to the π-conjugation of the aromatic residues that are required for pili conductivity 8 . These results indicate that the pili of G. sulfurreducens act like molecular wires, which can explain their ability to effectively enable long-distance electron transport for the reduction of insoluble electron acceptors 13 and electrodes 19 as well as for cell-to-cell electron exchange 14 . The capacity for efficient long-range extracellular electron transfer via protein filaments with high conductivity is likely to benefit the function of a wide diversity of microorganisms. The EFM-based method described here provides a strategy for rapidly evaluating the conductive properties of microbial filaments under physiologically relevant conditions. This method may also enable the direct visualization of charge flow and distribution in a variety of other biomolecules as well as among complex biological systems, such as cell-to-cell electron transfer, to study electron transfer reactions under physiological conditions with nanoscopic resolution. Methods Bacterial strains and culture conditions G. sulfurreducens strains 8 KN400 and Aro-5 were obtained from our laboratory culture collection. The cultures were maintained under strictly anaerobic conditions in growth medium supplemented with acetate (10 mM) as the electron donor and with fumarate (40 mM) as the electron acceptor, as described previously 13 , 19 . The expression of pili and flagella was confirmed by TEM ( Fig. 1a ). Freshly cleaved mica surface (Muscovite, V-1 quality, Electron Microscopy Sciences) was used as a substrate for all charge propagation studies. The cells were adsorbed onto the mica surface for 10–15 min, and excess moisture was removed with a micropipette. For pH dependence experiments, buffer containing the cell culture was equilibrated with aqueous HCl 21 and pH-equilibrated samples were adsorbed on the mica for charge injection experiments. AFM The adsorbed samples were imaged with an AFM (Asylum Research, MFP-3D) with conductive tips (SCM-PIT, Bruker Nano) as described previously 17 , 24 , 25 ( Fig. 1b,c ). The conductive tips had an oscillation frequency of f 0 ≈ 75 kHz, spring constant of ∼ 1–5 N m −1 , nominally 2.8 N m −1 , and a quality factor of 186. The tips were coated with 20 nm Pt/Ir to provide an electrical connection from the cantilever to the tip apex. EFM EFM measurements were performed as described previously using the two-pass technique 24 , 25 , with the first scan performed to measure the topography by scanning the conductive tip near the surface in non-contact AFM mode, in the region where the van der Waals forces are dominant. 
In the second scan, the tip was lifted to a height where the electrostatic forces are dominant, so that the only source of signal change was electrostatic forces ( Fig. 1e ). To avoid any contribution from the sample topography to the measured EFM phase shift, lift height was calibrated in the following manner: (1) the oscillation amplitude of the tip was reduced to 20 mV in the EFM mode; (2) the unbiased tip was gradually lowered until the EFM phase shift showed the topography of the sample (this region corresponds to a purely topographic signal); (3) the unbiased tip was lifted 20 nm above the region of the topographic signal (at this lift height, the EFM phase shift due to topography completely diminishes, confirming a topography-free signal). For all EFM imaging, a lift height of 20 nm above the sample was used to maximize the EFM signal and avoid the topography contribution 24 , 25 . Note that the phase–force response of the Asylum Research MFP-3D system is opposite to that reported in some previous literature 24 , 25 . In the MFP-3D system, an attractive force results in an increase in phase, yielding a brighter colour, and a repulsive force leads to a decrease in phase, yielding a darker colour 24 , 25 ( Figs 2 and 3 ). Charge injection Charge injection experiments were performed in a manner similar to that previously used to observe charge delocalization in carbon nanotubes and other nanostructures 4 , 5 , 7 . First, individual filaments were localized from the topography acquired using AFM ( Fig. 1b,c ). The pristine charge distribution of pili was then mapped using EFM ( Fig. 1e ). Charge injection was performed using local approach–retract curves by biasing the EFM cantilever at an injection voltage of V INJ = +10 V with respect to the substrate ( Fig. 1f ). In this approach, the distance between the biased EFM tip and the substrate was swept, with an approach duration of 2 s and a retract duration of 5 s, for a total duration of 80 s. In the approach mode, the cantilever oscillation amplitude becomes zero, enabling gentle contact with the filament. The mechanical contact between the tip and the filament was monitored by the cantilever deflection during the charge injection process and corresponded to a force of 150 nN applied for 2 s. This force was determined by measuring the actual spring constants and sensitivities of individual probes using force–distance measurements and by performing thermal tuning. This tip–filament contact force was kept identical for all charge injection experiments. Topography images acquired after the charge injection confirmed that no damage or alteration was caused to the filaments during the charge injection process. Thus, the force applied to the filaments during charge injection was enough to ensure good electrical contact between the tip and the filament, but did not cause any measurable deformation of the filament. Visualization of the propagation of injected charges was achieved by additional EFM scans performed after charge injection ( Fig. 1g ). Note that the experimental set-up for these measurements differed significantly from that used in typical transport measurements (such as conducting-probe AFM) or STM studies. There were no electrical connections to the insulating mica substrate in any of the experiments ( Fig. 1f ), and no d.c. electrochemical currents were generated during charge injection, so measurements of voltage-dependent electrochemical currents are not applicable to these experiments.
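The quoted contact force follows from simple arithmetic once the spring constant and deflection sensitivity are calibrated; in this sketch the photodiode reading and sensitivity are hypothetical values chosen to reproduce the 150 nN target from the text.

```python
# Contact force = spring constant x deflection; deflection is obtained from
# the photodiode signal via the measured deflection sensitivity.
k = 2.8                       # spring constant, N/m (thermal tune, as in the text)
sensitivity_nm_per_v = 53.6   # hypothetical, from a force-distance curve
deflection_signal_v = 1.0     # hypothetical photodiode reading during contact

deflection_nm = sensitivity_nm_per_v * deflection_signal_v
force_nN = k * deflection_nm  # (N/m) x nm = nN
print(f"contact force ~ {force_nN:.0f} nN")  # ~150 nN, the protocol's target
```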
Quantitative charge density measurement procedure The ratio R = (Δ Φ Q )/(Δ Φ C ) was used to compute the charge density in pili before and after injection. This normalization procedure of using R to compute the charge density allows truly quantitative charge measurements, because the ratio R does not depend on the properties of the cantilever and EFM set-up, such as the quality factor, spring constant and lateral resolution, enabling direct comparison between charge density measurements on different pili. This method has been applied previously for metallic carbon nanotubes, crystalline organic semiconductors and π-conjugated polymers to estimate their charge density after the delocalization of injected charges 4 , 5 , 7 . Using numerical simulations, it has been demonstrated that the ratio R remains valid for arbitrary sample and tip shapes by introducing two correction factors. The sample shape factor g is 1 for cylindrical samples such as pili, and the tip shape factor α is 1.5 for a standard conical tip 4 , 5 , 7 . By extracting the parameters A and B with equation (1) and using the relation Q = e × σ × S, where e is the electronic charge (1.6 × 10 −19 C), σ is the surface charge density, and S is the area of capacitance between the tip and substrate, the charge density can then be computed from R together with the tip–substrate distance z (20 nm), the dielectric constant of the pili ɛ R ( ∼ 4 for proteins) 29 , 30 and the vacuum permittivity ɛ 0 . Note that the surface potential is negligible in our case 4 , 5 , 7 . Furthermore, with the ratio method, any additional corrections can be neglected because the same correction factor applies in the capacitive and charge gradient forces 4 , 5 , 7 . For all quantitative measurements on each pilus filament, the same tip was employed with its measured spring constant and quality factor. Temperature dependence experiments The Cooler Heater set-up from the Asylum MFP-3D, which utilizes a Peltier thermoelectric device, was used to vary the temperature of the sample stage in the available temperature regime of 300 K to 245 K. A flow-through liquid cooling system was supplied to allow the stage to reach temperatures below ambient conditions. All scans were initiated after reaching a steady-state temperature. The temperatures were continuously monitored and were stable throughout the experiments. All measurements were performed in a closed environment using a closed fluid cell design, which seals out the ambient environment with an FKM (Viton equivalent) rubber membrane and O-rings. Samples were mounted directly on the Peltier device using epoxy to minimize heat losses. All experimental results were confirmed by repeated measurements on multiple samples. Error bars represent standard deviations of at least three biological replicates. P values for all comparisons were obtained using two-tailed t -tests. | The claim by microbiologist Derek Lovley and colleagues at the University of Massachusetts Amherst that the microbe Geobacter produces tiny electrical wires, called microbial nanowires, has been mired in controversy for a decade, but the researchers say a new collaborative study provides stronger evidence than ever to support their claims.
UMass Amherst physicists working with Lovley and colleagues report in the current issue of Nature Nanotechnology that they've used a new imaging technique, electrostatic force microscopy (EFM), to resolve the biological debate with evidence from physics, showing that electric charges do indeed propagate along microbial nanowires just as they do in carbon nanotubes, a highly conductive man-made material. Physicists Nikhil Malvankar and Sibel Ebru Yalcin, with physics professor Mark Tuominen, confirmed the discovery using EFM, a technique that can show how electrons move through materials. "When we injected electrons at one spot in the microbial nanowires, the whole filament lit up as the electrons propagated through the nanowire," says Malvankar. Yalcin, now at Pacific Northwest National Lab, adds, "This is the same response that you would see in a carbon nanotube or other highly conductive synthetic nanofilaments. Even the charge densities are comparable. This is the first time that EFM has been applied to biological proteins. It offers many new opportunities in biology." Lovley says the ability of electric current to flow through microbial nanowires has important environmental and practical implications. "Microbial species electrically communicate through these wires, sharing energy in important processes such as the conversion of wastes to methane gas. The nanowires permit Geobacter to live on iron and other metals in the soil, significantly changing soil chemistry and playing an important role in environmental cleanup. Microbial nanowires are also key components in the ability of Geobacter to produce electricity, a novel capability that is being adapted to engineer microbial sensors and biological computing devices." He acknowledges that there has been substantial skepticism that Geobacter's nanowires, which are protein filaments, could conduct electrons like a wire, a phenomenon known as metallic-like conductivity. "Skepticism is good in science, it makes you work harder to evaluate whether what you are proposing is correct," Lovley points out. "It's always easier to understand something if you can see it. Drs. Malvankar and Yalcin came up with a way to visualize charge propagation along the nanowires that is so elegant even a biologist like me can easily grasp the mechanism." Biologists have known for years that in biological materials, electrons typically move by hopping along discrete biochemical stepping-stones that can hold the individual electrons. By contrast, electrons in microbial nanowires are delocalized, not associated with just one molecule. This is known as metallic-like conductivity because the electrons are conducted in a manner similar to a copper wire. Malvankar, who provided the first evidence for the metallic-like conductivity of the microbial nanowires in Lovley and Tuominen's labs in 2011, says, "Metallic-like conductivity of the microbial nanowires seemed clear from how it changed with different temperature or pH, but there were still many doubters, especially among biologists." To add more support to their hypothesis, Lovley's lab genetically altered the structure of the nanowires, removing the aromatic amino acids that provide the delocalized electrons necessary for metallic-like conductivity, winning over more skeptics. But EFM provides the final, key evidence, Malvankar says. "Our imaging shows that charges flow along the microbial nanowires even though they are proteins, still in their native state attached to the cells. Seeing is believing. 
To be able to visualize the charge propagation in the nanowires at a molecular level is very satisfying. I expect this technique to have an especially important future impact on the many areas where physics and biology intersect," he adds. Tuominen says, "This discovery not only puts forward an important new principle in biology but also in materials science. Natural amino acids, when arranged correctly, can propagate charges similar to molecular conductors such as carbon nanotubes. It opens exciting opportunities for protein-based nanoelectronics that was not possible before." Lovley and colleagues' microbial nanowires are a potential "green" electronics component, made from renewable, non-toxic materials. They also represent a new part in the growing field of synthetic biology, he says. "Now that we understand better how the nanowires work, and have demonstrated that they can be genetically manipulated, engineering 'electric microbes' for a diversity of applications seems possible." One application currently being developed is making Geobacter into electronic sensors to detect environmental contaminants. Another is Geobacter-based microbiological computers. This work was supported by the Office of Naval Research, the U.S. Department of Energy and the National Science Foundation. | 10.1038/nnano.2014.236
Medicine | Characterizing the mouse genome reveals new gene functions and their role in human disease | 'Disease model discovery from 3,328 gene knockouts by The International Mouse Phenotyping Consortium' by Meehan et al., Nature Genetics. DOI: 10.1038/ng.3901 Journal information: Nature Genetics | http://dx.doi.org/10.1038/ng.3901 | https://medicalxpress.com/news/2017-06-characterizing-mouse-genome-reveals-gene.html | Abstract Although next-generation sequencing has revolutionized the ability to associate variants with human diseases, diagnostic rates and development of new therapies are still limited by a lack of knowledge of the functions and pathobiological mechanisms of most genes. To address this challenge, the International Mouse Phenotyping Consortium is creating a genome- and phenome-wide catalog of gene function by characterizing new knockout-mouse strains across diverse biological systems through a broad set of standardized phenotyping tests. All mice will be readily available to the biomedical community. Analyzing the first 3,328 genes identified models for 360 diseases, including the first models, to our knowledge, for type C Bernard–Soulier, Bardet–Biedl-5 and Gordon Holmes syndromes. 90% of our phenotype annotations were novel, providing functional evidence for 1,092 genes and candidates in genetically uncharacterized diseases including arrhythmogenic right ventricular dysplasia 3. Finally, we describe our role in variant functional validation with The 100,000 Genomes Project and others. Main With its extensive toolkit for genome modification and its capacity for recapitulating human disease, the laboratory mouse is arguably the preferred model organism for studying and validating the effects of genetic variants in mendelian disease, as well as identifying previously unsuspected disease-associated genes. Null mouse mutations have been generated and described in the literature for approximately one-half of the genes in the genome 1 . However, hypothesis-driven phenotyping of these mutants has led to discoveries in areas that largely reflect the expertise and specific research questions of individual investigators. As a result, the extent of functional annotation, the potential to fully uncover pleiotropy and the opportunity to exploit mutant mouse models for disease-agnostic interrogation are limited. Furthermore, the lack of reproducibility in knockout experiments is a well-documented challenge in drug-target development and behavioral and other translational studies 2 , 3 . This lack of reproducibility is commonly due to using poorly defined statistical methods, performing studies in only one sex and practicing bias in animal selection 4 . The development of a comprehensive reference phenotype database by using fully validated, standardized and automated phenotyping procedures across all body systems in mutants of both sexes provides a robust dataset to corroborate disease-causing factors in humans. The International Mouse Phenotyping Consortium (IMPC) is creating just such a catalog of mammalian gene function that systematically associates mouse genotype-to-phenotype data and enables researchers to formulate hypotheses for biomedical and translational research as well as purpose-driven preclinical studies 5 , 6 .
The IMPC adult phenotyping pipeline analyzes cohorts of male and female knockouts on an isogenic C57BL/6N background from embryonic-stem-cell resources produced by the International Knockout Mouse Consortium comprising targeted null mutations with reporter-gene elements 7 , 8 , 9 . Homozygotes are characterized, except in those strains (approximately 30%) in which gene inactivation necessitates the use of heterozygotes to study mice that are embryonic/perinatal lethal or subviable 10 , 11 . The pipeline measures a total of 509 phenotyping parameters that encompass diverse biological and disease areas including neurological, behavioral, metabolic, cardiovascular, pulmonary, reproductive, respiratory, sensory, musculoskeletal and immunological parameters. Standardized and harmonized protocols developed by the IMPC are used to decrease phenotypic variance across the centers and build upon experience from the pilot EUMODIC project, in which a limited 7% discordant-phenotype rate has been observed for a large set of 22 common reference mutant lines 6 . Rigorous data quality control is applied to the captured data from the ten phenotyping centers, and an automated statistical analysis pipeline (PhenStat 12 ; Online Methods ) identifies mutants with statistically significant phenotypic abnormalities. In the current IMPC data release 5.0 (2 August 2016), 3,328 genes have been fully or partially phenotyped, thus generating over 20 million data points and 28,406 phenotype annotations. Complementing the physiological, behavioral and structural phenotype datasets, the IMPC also provides annotated expression of LacZ data across multiple organ and tissue systems for 1,413 genes 13 and extensive histopathological analysis of adult tissues for 333 genes 14 . The IMPC portal (URLs) provides a single point of access to phenotype data, embryonic-stem-cell and Cas9-RNA-guided nuclease resources and mutant mouse strains. Sophisticated query interfaces for both gene and phenotype data are provided, as well as tools to visualize phenotypes encompassing quantitative, categorical and image data 15 . Periodic data releases provide the most recent genotype–phenotype associations. In our current analysis, we (i) identified new mouse models for human mendelian disorders with a known genetic basis, (ii) uncovered candidate disease-associated genes for human mendelian disorders for which only a genomic location had been associated and (iii) identified new mouse disease models involving genes with little or no previous functional annotation. A summary of the results is presented in Figure 1 and is described in more detail in the following sections. Figure 1: IMPC mutant models of human disease and gene function. Human disease models were identified by measuring the degree of phenotypic similarity between IMPC null mutant mouse strains and their orthologous genetic loci associated with human diseases. Models of mendelian disease: of 889 potential disease models, 360 mutant strains had both phenotypic overlap and an orthologous null allele, as compared with diseases with known mutations described in OMIM and Orphanet. Novel mendelian disease candidates: 135 strains had phenotypic overlap and null alleles syntenic to linkage or cytogenetic regions associated with human diseases with unknown molecular mechanisms. New functional knowledge: of 2,564 genes with a nonlethal IMPC phenotype, IMPC data provided new functional experimental evidence for 1,092 of these genes, on the basis of GO annotation. 
Full size image Results Comparison of IMPC findings with previous knowledge We first investigated the concordance of our phenotype annotations with previously reported data for mouse lines involving the same genes. We found that 621 genes assessed by us have previous mouse-model annotations, on the basis of a literature review of knockout lines by the Mouse Genome Informatics (MGI) group 1 . An assessment of the corresponding 2,547 MGI gene–phenotype associations that could be assessed by an IMPC procedure showed that 958 (38%) were detected, and 62% (385 out of 621) of the genes had at least one phenotype reproduced ( Supplementary Table 1 ). This result is in line with previous reports describing the reproducibility of biomedical models 16 . A lack of reproducibility may be due to several factors including different genetic backgrounds and variations in experimental techniques and statistical methods. For example, evidence for the previously reported increase in circulating glucose in Gad2 mice was found in our data (URLs) but was not considered statistically valid by our robust methods. Despite our best efforts to be as broad-based as possible in the context of a high-throughput project, there were an additional 10,068 MGI phenotypes for these genes that we were unable to assess or that would have required a different type of allele to be introduced to observe the effect. However, as shown below, our pipeline covers all major disease areas. Additionally, use of our resources by the research community in over 1,300 new publications is generating numerous new annotations through MGI's curation of the literature, and upcoming changes to our pipeline, such as phenotyping a subset of genes in mice at later ages (12–18 months) and including new behavioral tests, will increase our coverage. Furthermore, we generated extensive new knowledge: 90% (8,984 of 9,942) of the gene–phenotype annotations described by IMPC have not previously been described in the literature. Models of human mendelian disease The high volume and complexity of data produced by the IMPC present challenges for identifying relevant models of human diseases. To facilitate discovery, we developed a translational pipeline to automatically detect phenotypic similarities between the IMPC strains and over 7,000 rare diseases described in the Online Mendelian Inheritance in Man (OMIM) 17 and Orphanet databases 18 . The pipeline utilizes the human phenotype ontology (HPO) 19 annotations for rare diseases, maintained by the Monarch Initiative 20 , our Mammalian Phenotype Ontology (MP) 21 annotations of phenotypic abnormalities and the PhenoDigm algorithm, which has also been developed by the Monarch Initiative 22 . The methodology is based on previous work demonstrating superior identification of disease models, as compared with defining mouse strains solely by orthology or using other methods of calculating phenotype similarity 22 , and the results provide a quantitative measure of how well an adult mouse model recapitulates the clinical features of a disease. From the ∼ 15% of mouse protein-coding genes phenotyped by IMPC to date, 889 known rare disease–gene associations represented within OMIM and Orphanet have an orthologous IMPC mouse strain and display at least one phenotype ( Supplementary Table 2 ).
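The scoring logic behind this matching, detailed further in the Methods, lends itself to a compact illustration. The sketch below is a minimal Python rendering, not the Monarch Initiative's implementation: it assumes precomputed Jaccard-similarity (simJ) and information-content (IC) values for each HPO–MP term pair, scores each pair by their geometric mean, and combines the best and mean pairwise scores relative to the self-match ceiling of the disease. The equal weighting of best and mean is an assumption, the numbers are toy values, and the single-match threshold of 1.35 follows the value reported in this paper.

```python
import math

def pairwise_score(sim_j: float, ic: float) -> float:
    """One HPO-MP term match: geometric mean of Jaccard similarity
    (simJ) and information content (IC), as described in the Methods."""
    return math.sqrt(sim_j * ic)

def phenodigm_percentage(matches, max_best, max_mean):
    """Overall score for one strain against one disease. `matches` is a
    list of (simJ, IC) tuples; `max_best`/`max_mean` come from matching
    the disease against itself (the perfect-model ceiling). The equal
    weighting of the best and mean terms is an assumption."""
    scores = [pairwise_score(s, ic) for s, ic in matches]
    best, mean = max(scores), sum(scores) / len(scores)
    return 100.0 * 0.5 * (best / max_best + mean / max_mean)

# Toy values for three phenotype matches of one candidate model.
matches = [(0.9, 4.2), (0.6, 2.8), (0.4, 1.9)]
print(round(phenodigm_percentage(matches, 2.5, 2.0), 1))  # overall %
# A strain was called a disease model only if at least one pairwise
# match exceeded the 1.35 threshold used in the paper.
print(any(pairwise_score(s, ic) > 1.35 for s, ic in matches))  # True
```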
By comparing human and mouse phenotypes, our automated pipeline identified 185 adult disease–gene associations for which the IMPC mutant mouse strain modeled the human disease; most associations (134) involved genes for which a mutant mouse strain either had not been generated or had not been reported as a model of that disease, on the basis of the MGI curation ( Table 1 and Supplementary Tables 2 and 3 ). Each of the 889 associations had a mean of 14.7 ± 27.8 (mean ± s.d.) candidate genes associated with the disease, on the basis of the algorithm, with a median rank of 3 for the true associated gene in the 185 sets of recalled associations. Table 1 Frequency of IMPC models corresponding to mendelian disease–gene associations in OMIM or Orphanet Full size table The range of human mendelian diseases with matching mouse phenotypes was broad and included multiple biological systems ( Table 2 ). Three examples of mouse models that, to our knowledge, have not previously been reported ( Fig. 2 ) are for Bernard–Soulier syndrome, type C (MIM 231200 ), Bardet–Biedl syndrome-5 (MIM 615983 ) and Gordon Holmes syndrome (MIM 212840 ). Bernard–Soulier syndromes are bleeding disorders that result from mutations in genes encoding protein products of the glycoprotein Ib complex, which serves as the platelet-membrane receptor for von Willebrand factor. Glycoprotein Ib is composed of four subunits encoded by four separate genes, GP1BA , GP1BB , GP9 and GP5 , and mutations in all these genes are associated with an autosomal recessive (AR) disorder characterized by prolonged bleeding times, enlarged platelets, an inability to clot and incomplete penetrance of thrombocytopenia 23 . Gp9 tm1.1(KOMP)Vlcg null homozygotes have a decreased number of platelets and larger cell volumes ( Fig. 2a,b ), thus recapitulating key features of the disease and adding evidence that the point mutations associated with disease in humans lead to a functionally null complex. Bardet–Biedl syndromes are heterogeneous AR ciliopathies characterized by retinitis pigmentosa, obesity, kidney dysfunction, polydactyly, behavioral dysfunction and hypogonadism. The disorder is associated with at least 19 genes whose products form the BBSome, a protein complex involved in signaling-receptor trafficking within cilia and that may also have functions not involving cilia 24 . Twenty mutations including splice-site, missense/nonsense, insertion, indel and deletion mutations within the BBS5 gene account for 4% of all BBS cases. Bbs5 tm1b(EUCOMM)Wtsi null mice exhibit abnormal retinal morphology resembling the retinal dystrophy observed in Bardet–Biedl syndrome. Other phenotypes were also observed in null mice recapitulating many hallmarks of BBS, including obesity ( Fig. 2c,e ) as well as other features such as impaired glucose homeostasis ( Fig. 2d ). Gordon Holmes syndrome is another AR disorder characterized by hypogonadism and cerebellar ataxia, and it has been associated with RNF216 (refs. 25 , 26 ). Male infertility was observed in Rnf216 tm1b(EUCOMM)Wtsi homozygous-null mice with histopathology identifying seminiferous-tubule degeneration and atrophy characterized by the diffuse absence of most or all germ cells and the presence of occasional multinucleated spermatids with pyknotic nuclei within tubules lined by vacuolated Sertoli cells. Seminiferous changes were accompanied by diffuse interstitial cell (Leydig cell) hyperplasia. The epididymis was devoid of spermatozoa (epididymal aspermia) ( Fig. 2f ). 
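To make the candidate-ranking statistic above concrete (a median rank of 3 for the true gene among a mean of 14.7 candidates per disease), the following minimal sketch shows how such ranks would be computed from per-disease PhenoDigm scores. All gene names and scores here are hypothetical, for illustration only.

```python
from statistics import median

# Hypothetical per-disease candidate scores (gene -> PhenoDigm score);
# the second element names the known disease-associated gene.
diseases = {
    "disease_A": ({"geneA": 78.0, "geneB": 55.0, "geneC": 41.0}, "geneA"),
    "disease_B": ({"geneD": 66.0, "geneE": 62.0, "geneF": 12.0}, "geneE"),
}

ranks = []
for scores, true_gene in diseases.values():
    ordered = sorted(scores, key=scores.get, reverse=True)
    ranks.append(ordered.index(true_gene) + 1)  # 1-based rank

print(ranks, median(ranks))  # e.g. [1, 2] 1.5
```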
Table 2 Examples of IMPC disease models across diverse biological systems Full size table Figure 2: Mouse models for mendelian disease. ( a , b ) Gp9 : Bernard–Soulier syndromes result from mutations in the glycoprotein Ib platelet membrane receptor complex. Gp9 tm1.1(KOMP)Vlcg homozygotes have increased platelet volume ( a ) and decreased platelet numbers ( b ). WT, wild type. In a , female control, n = 479 mice; female homozygous, n = 8; male control, n = 428; male homozygous, n = 8; linear mixed-effects model without weight, P = 0. In b , female control, n = 439; female homozygous, n = 8; male control, n = 428; male homozygous, n = 8; linear mixed-effects model without weight, P = 2.31 × 10 −6 . Asterisks, significant difference between mutant and same-sex controls; mixed-effects-model P < 0.0000. ( c – e ) Bbs5 : Bbs5 is associated with Bardet–Biedl syndrome (BBS). Bbs5 tm1b(EUCOMM)Wtsi homozygotes have increased body-fat percentage ( c ) and impaired glucose tolerance ( d ). In c , female control, n = 1,276; female homozygous, n = 8; male control, n = 1,296; male homozygous, n = 8; linear mixed-effects model without weight, P = 1.99 × 10 −11 . In d , blood glucose levels were determined after a 16-h fast followed by intraperitoneal glucose injection. Female control, n = 491; female homozygous, n = 8; male control, n = 509; male homozygous, n = 8; linear mixed-effects model without weight, P = 2.85 × 10 −7 . In a – d , box limits, first and third quartiles; line, median; whiskers, minimum and maximum values. ( e ) X-ray visualization of Bbs5 homozygotes and controls: the X-ray image of a 14-week-old Bbs5 homozygous-null female mouse shows the large body size of this strain compared with an age-matched, wild-type control. ( f ) Rnf216 : Gordon Holmes syndrome is associated with Rnf216 and is characterized by hypogonadism and cerebellar ataxia. Rnf216 tm1b(EUCOMM)Wtsi homozygous-null male mice are infertile. Histopathology images (20× magnification) show seminiferous-tubule degeneration and atrophy with Leydig-cell hyperplasia and epididymal aspermia in null mice, as compared with the unaffected seminiferous tubules and epididymis in control mice. Scale bars, 100 μm. Full size image Of the 704/889 known associations for which we did not detect an IMPC model, 48 have not yet been tested in mice for a phenotype recapitulating any of the clinical phenotypes, thus leaving 656 (74%) associations for which our automatic algorithm did not detect a potential disease phenotype from the IMPC pipeline. To evaluate the sensitivity of the automated human–mouse phenotype matching, we manually evaluated 100 randomly chosen examples of these missed associations ( Supplementary Table 4 ). This process led to 12 additional discoveries for which the phenotype matches fell below the similarity threshold used in our algorithm, e.g., the decreased startle-reflex match for deafness. PhenoDigm is optimized to maximize precision and recall ( Supplementary Fig. 1 ), and decreasing our threshold to detect such matches would have introduced many false positives. Manual assessment is not feasible in the long term, given the ever-increasing number of strains and new data for existing strains. However, future implementations will incorporate histopathological data to increase recall, as seen for the additional model detected by manual assessment. Human mendelian disease is caused by a variety of mutations resulting in complete loss or partial loss or gain of function under various modes of inheritance.
The IMPC phenotypes only the null allele in a homozygous state or, in the case of embryonic/perinatal lethality, in the heterozygous state. Thus, IMPC mouse strains are suitable for identification of putative disease-associated genes rather than for variant identification, that is, for determining which of the variants of unknown importance expressed in a gene are pathogenic. The 889 human diseases associated with genes orthologous to those of the IMPC mouse model strains were inherited with approximately equal frequency by autosomal dominant (AD) or AR genetics (379 AD compared with 423 AR and 87 unknown/X-linked diseases). The frequency of inheritance by AD and AR genetics was also equivalent for the 185 adult disease–gene associations for which the IMPC mutant mouse line modeled the human disease (82 AD compared with 94 AR and 7 unknown/X-linked diseases). These results indicated that the mouse models effectively model human disease independently of the mode of human inheritance. Human AR disease is likely to be a consequence of a mutation resulting in complete or partial loss of function, for which haploinsufficiency is not adequate to produce symptoms. As would be expected, AR human disease was more frequently modeled by homozygous-null mouse mutants: 65% (61/94) of the AR models were viable and phenotyped as homozygous mice, whereas 35% (33/94) were subviable/lethal as homozygotes, and therefore heterozygous mice were phenotyped. AD inheritance can be attributed to either haploinsufficiency or gain-of-function mutations, and we found that 46% of the dominant human mutations were modeled by heterozygous mouse mutants, in agreement with a haploinsufficiency mechanism for almost half of the diseases. Interestingly, 227 of the 423 (54%) tested AR associations were homozygous lethal/subviable in mice, thus leading us to consider whether early mortality occurs in people with these diseases, or would occur without extensive medical intervention. Lethality matches are not detectable by our automated algorithm, because human lethality is rarely recorded in the disease HPO annotations, and for 74 of the 889 associations (8%), homozygous lethality is the only mouse phenotype that we have detected to date. To address this potential lack of detection, we manually investigated whether the associations involving mouse homozygous lethal/subviable strains were associated in OMIM/Orphanet with human embryonic or early death (before 2 years of age) or with severe early-onset disorders in people not likely to survive through puberty without substantial medical support (for example, cleft palate is a lethal phenotype in mice but is easily treatable in humans). This procedure uncovered a further 97 new mouse–human disease associations ( Supplementary Table 2 , column J annotated with Y-L) for which human lethality was recorded and another 78 for which the disease would probably have been lethal without medical intervention ( Supplementary Table 2 , column J annotated with Y-PL). Most of these lethality matches were inherited with AR genetics (73%, 122 of the 166 diseases with reported inheritance from OMIM/Orphanet) and modeled by homozygous mouse mutants, in agreement with the conclusion that homozygous loss-of-function mutations in essential genes in humans produce either early death or severe congenital medical conditions requiring advanced medical support for survival.
Examples of mouse and human embryonic/early-onset lethality include ventriculomegaly with cystic kidney disease, which results in in utero or neonatal human fatality (MIM 219730 ; gene, CRB2 ) and Stuve–Wiedemann syndrome (MIM 601559 ; gene, LIFR ). Diseases that would probably have been lethal without medical support and that had a corresponding lethal/subviable mouse strain included COACH disease (MIM 216360 ; gene, RPGRIP1L ), Meier–Gorlin syndrome 1 (MIM 224690 ; gene, ORC1 ) and human phosphoserine phosphatase deficiency (MIM 614023 ; gene, PSPH ). For human phosphoserine phosphatase deficiency, microcomputed tomography indicated that a homozygous lethal mouse mutant in Psph tm1b(EUCOMM)Wtsi has structural abnormalities at embryonic day (E) 15.5 that closely resemble the developmental and structural defects in human phosphoserine phosphatase deficiency ( Fig. 3 ). Figure 3: Mouse model of phosphoserine phosphatase deficiency. Psph : phosphoserine phosphatase deficiency (MIM 614023 ) is an AR disorder characterized by prenatal and postnatal growth retardation, psychomotor retardation and facial dysmorphologies whose severity of symptoms requires medical support for survival. ( a ) Complete preweaning lethality was observed in Psph tm1.1(KOMP)Vlcg homozygous-null mice. Number of pups, genotypes and sex ratios of heterozygous intercrosses were set to generate cohorts for phenotyping. No homozygous pups were observed, whereas 66% (54/82) of the pups surviving to weaning were heterozygous, and 34% (28/82) were wild type. Asterisks indicate no surviving homozygotes. ( b ) LacZ reporter expression regulated by the Psph promoter in an asymptomatic heterozygous E12.5 embryo shows extensive gene expression (bar, 1 mm). ( c ) Gross images of E15.5 homozygous mutant embryos confirm growth retardation, hemorrhage and facial dysmorphology (bars, 2 mm). ( d ) Imaging of E15.5 embryos by microcomputed tomography shows substantial growth retardation as well as facial dysmorphology consistent with the human mendelian disorder. Full size image When we included these manually curated lethality matches, 40.5% (360) of the disease models had phenotypic overlap with the 889 disease-associated genes ( Table 1 ), and the majority (78%; 279 of 360) represented the first report, to our knowledge, of a candidate mouse model for these diseases. The discovery rate of disease models in our analysis was comparable to that in previous reports on smaller high-throughput mouse phenotyping studies that have found modeling of 46% of 59 and 33% of 42 associations through manual investigation of data 6 , 27 . For cases in which we did not detect a model despite testing for at least one equivalent phenotype (54%; 484), the explanations included differences between human and mouse biology, differences in genetic background, a null allele not being appropriate to model the disease or differing methodologies used for annotation. For example, rarely observed phenotypes for a disease are often recorded in the HPO annotations and would probably fall below the statistical significance threshold if they were similarly nonpenetrant in mice. Finally, there is a slight possibility that some previous descriptions of alleles may have influenced disease modeling in cases in which a hypomorph rather than a null mutant is possible (e.g., the 90 tm1a mutants) or in which a retained neomycin cassette may have altered expression of genes in cis (e.g., 90 tm1a and 10 KOMP1 mutants).
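The genotype ratios in Figure 3a are the raw input to the viability calls described later in the Methods (no homozygotes among 28 or more intercross offspring scores a strain as nonviable; under 13% homozygotes scores it as subviable). The following is a minimal sketch of that rule applied to the Psph counts above; the function name is illustrative and any unstated edge cases are assumptions.

```python
def viability_call(n_hom: int, n_total: int) -> str:
    """Score a strain from heterozygote-intercross offspring counts,
    following the thresholds given in the Methods."""
    if n_total >= 28 and n_hom == 0:
        return "nonviable"          # no homozygotes among >= 28 offspring
    if n_hom / n_total < 0.13:      # < 13% homozygous (Mendelian expectation: 25%)
        return "subviable"
    return "viable"

print(viability_call(0, 82))        # Psph example above -> 'nonviable'
```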
Functional knowledge and candidate genes associated with mendelian diseases The second major clinical use for the IMPC data is providing new data on the phenotypes and functions of genes. Thus, IMPC has prioritized genes with no known disease associations or minimal Gene Ontology (GO) annotation. On the basis of MGI's literature curation of mutant strains involving any allele type except conditional mutations, 1,830 of the 3,328 genes phenotyped in this IMPC release have not previously had a mouse mutant produced. No GO molecular-function or biological-process annotations were available for 189 genes, whereas another 903 genes had annotations inferred from computational analysis 28 ( Fig. 4a ). The phenotypes of these mutant strains provide insights into the functions of a large class of genes (sometimes described as the 'ignorome') 29 , for which there is little or no existing functional information ( Supplementary Table 5 ). Figure 4: Novel mouse models of disease. ( a ) Over 40% of IMPC strains are based on genes that lack experimental evidence for function according to the Gene Ontology Consortium (blue, red and green). ( b , c ) Fam53b : Fam53b tm1b(EUCOMM)Hmgu homozygous mutant mice had significantly decreased red blood cell counts ( b ) and enlarged erythrocytes ( c ). In b , female control, n = 597 mice; female homozygous, n = 8; male control, n = 635; male homozygous, n = 8; linear mixed-effects model without weight, P = 2.81 × 10 −11 . In c , female control, n = 598; female homozygous, n = 8; male control, n = 634; male homozygous, n = 9; linear mixed-effects model without weight, P = 0, consistent with Diamond–Blackfan anemia (MIM 105650 ). In b and c , box limits, first and third quartiles; line, median; whiskers, minimum and maximum values; asterisks, significant difference between mutant and same-sex controls, mixed-effects-model P < 0.0000. Full size image Examples of candidate genes for human mendelian disease with previously little functional information include family with sequence similarity 53 member B ( Fam53b ), for which no phenotypic variants had been reported in humans or mice. The gene is differentially expressed in adult definitive erythrocytes compared with primitive erythrocytes, with more than a sixfold log 2 change, as shown in the Expression Atlas (URLs) 30 , 31 . Homozygous Fam53b tm1b(EUCOMM)Hmgu knockout mice showed increased mean corpuscular hemoglobin and decreased erythrocyte cell numbers ( Fig. 4b,c ), thus suggesting that the gene is involved in hematopoiesis and is a candidate for macrocytic hyperchromic anemias. PhenoDigm identified this gene as having a phenotype mimicking that of Diamond–Blackfan anemia (MIM 105650 ), a group of 15 unique anemias generally attributable to defects in ribosome synthesis but for which known mutations account for only approximately 54% of all affected people 32 . A single functional study has suggested that Fam53b is required for Wnt signaling, which is key in determining cell fate, cell proliferation, stem cell maintenance and formation of the anterior–posterior axis 33 . The Fam53b knockouts thus suggest a new candidate pathway that may account for the 46% of people with Diamond–Blackfan anemia with unknown genetic causes. As well as providing fundamental insights into the functions of genes with few or no previous functional annotations, the phenotype analyses are also identifying numerous new candidate disease models that may provide a foundation for relating gene function to disease phenotype.
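The GO evidence tiers summarized in Figure 4a follow the evidence-code groupings enumerated later in the Methods. Below is a minimal sketch of that categorization; the tier-priority resolution (a gene carrying both experimental and electronic codes counts as experimental) is an assumption rather than a documented rule.

```python
GO_TIERS = {
    "experimental": {"EXP", "IDA", "IPI", "IMP", "IGI", "IEP"},
    "curated computational": {"ISS", "ISO", "ISA", "ISM", "IGC",
                              "IBA", "IBD", "IKR", "IRD", "RCA"},
    "automated electronic": {"IEA", "TAS", "NAS", "IC"},
}

def go_tier(evidence_codes) -> str:
    """Assign a gene to the strongest GO evidence tier it carries;
    genes with no codes fall into the 'no biological data' tier."""
    for tier, codes in GO_TIERS.items():  # dicts preserve insertion order
        if set(evidence_codes) & codes:
            return tier
    return "no biological data"

print(go_tier({"IEA", "ISS"}))   # -> 'curated computational'
print(go_tier(set()))            # -> 'no biological data'
```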
This new dataset and biological resource may be used to detect novel genotype-to-phenotype associations in diseases for which simply considering existing human data would lead to causative variants being overlooked among the overwhelmingly abundant associated variants of unknown importance, as occurs in many exome-sequencing studies. Over half of diagnosed rare diseases still have no known associated genes, and diagnostic rates in most high-throughput mendelian disease sequencing projects are 20–30%, largely because of a lack of functional information for most genes. The IMPC data can be used to remedy this deficiency and to start achieving better diagnostic rates. As a demonstration of the potential application of IMPC data to the discovery of novel disease-associated genes, we identified candidate genes potentially associated with mendelian diseases through unknown molecular mechanisms, for which broad genetic localization was available in OMIM from previous studies. Our disease-matching algorithm identified 135 associations in which our predicted disease-associated gene was found to be within these loci ( Supplementary Table 6 ). For example, adult mice heterozygous for the Klhdc2 tm1b(EUCOMM)Hmgu allele have a complex syndrome of abnormalities including altered electrocardiogram findings. The phenotypes match the clinical signs described for the AD disease arrhythmogenic right ventricular dysplasia 3 (ARVD3; MIM 602086 ) associated with cardiac arrhythmias caused by fibrofatty replacement of the right-ventricle myocardium. Klhdc2 is syntenic with the ARVD3 locus, thus suggesting that it may be a candidate gene. Usmg5 -null mice recapitulate the clinical symptoms of muscle weakness and abnormal gait seen in people with the dominant intermediate A form of Charcot–Marie–Tooth disease. The human ortholog USMG5 is located within a 9.8-Mb critical region identified in people with this disease. Although the implicated human loci are sometimes megabases in length and encompass hundreds of genes, these examples illustrate how IMPC phenotype data allow for the scoring of candidate genes with disease-causing variants and have important implications for current genetic projects investigating rare diseases through next-generation-sequencing technologies. Discussion By analyzing phenotypic similarities between IMPC's mouse strains and human diseases, we provided new disease models and novel functional knowledge for a substantial and growing proportion of protein-coding genes. These models are readily available and can be exploited to study disease mechanisms, to develop new gene therapies and pharmacological treatments, and to improve understanding of gene function. The novelty of the IMPC lies in both the scale of the vision to produce a comprehensive catalog of mammalian gene function across all genes and the non-hypothesis-driven, standardized approach to phenotyping. This resource facilitates novel discoveries about the functions of genes and their roles in diseases, as highlighted above. The potential of IMPC phenotypic comparisons for prioritizing candidates in human mendelian syndromes is accessible to clinical researchers performing next-generation sequencing-based diagnostics through inclusion within the Exomiser software package, which combines an assessment of variant pathogenicity with gene candidacy on the basis of the similarity of human disease phenotypes to known phenotypic knowledge from humans, mice and fish 34 .
Exomiser is being applied within the NIH Undiagnosed Disease Program and Network 35 as well as the 100,000 Genomes Project, which embeds genomics into a national healthcare system. The IMPC goes beyond modeling mendelian syndromes by leveraging its existing global infrastructure to address complex biological questions. IMPC mouse strains are widely used across every major biological system (URLs) and have been used in SNP validation studies of complex traits in both humans and mice 36 , 37 , 38 , 39 , 40 , 41 . Such studies will be supported by ongoing work using the IMPC phenotyping pipeline to characterize the eight founder inbred mouse strains for the Collaborative Cross resource used in the study of complex traits and in targeted noncoding mutant mouse strains (11 microRNAs and 4 long intervening noncoding RNAs in the current data release) to study regulatory activities. Other collaborative, multicenter efforts are using IMPC mouse strains to study gene function in hearing, vision, metabolism and pervasive sexual dimorphism. Starting in 2017, a substantial fraction ( ∼ 15%) of mutant strains will be rephenotyped in mice at 12–18 months of age to identify genes involved with late-onset disease. A major change in our strategy has been the adoption of CRISPR-based methods to increase the production rate and the opportunity to characterize strains containing the same single-nucleotide variants or indels as those in human patients. For example, the MRC Genome Editing Mice for Medicine (GEMM) service is characterizing patient variants identified through whole-genome sequencing by the 100,000 Genomes Project to functionally validate variants of unknown importance and/or facilitate mechanistic and therapeutic studies in collaboration with clinicians and researchers. Generation of these precision models is key to the development of new therapies for rare disease, a process lagging behind the discovery of new disease-associated genes. The approach will be expanded in the coming years to characterize candidate noncoding regulatory variants in undiagnosed 100,000 Genomes Project cases as well as other large-scale sequencing projects such as the Undiagnosed Disease Program and Network. We believe that many of the lessons that we have learned in establishing the IMPC will also be of value in recently launched precision-medicine efforts, whose goals are to improve treatment through the customization of healthcare according to the genomic information of individual patients and environmental factors. Harmonization of phenotypic traits captured in diverse formats across multiple centers will be critical to the stratification of disease populations for improved treatment as well as using model-organism data to better identify disease-causing gene variants. Whereas advances in technologies based on CRISPR and induced pluripotent stem cells have vastly expanded researchers' toolkits, the work of the IMPC highlights the continuing importance of mouse models in understanding disease mechanisms. Mice are vertebrate mammals with physiological characteristics that recapitulate those of all major human biological systems, thus allowing the study of processes that cannot be investigated in vitro , including the effects of behavioral, inflammatory, endocrine and sex-specific processes on disease. Whereas CRISPR-based methodologies now allow for genome engineering in nearly every species, mice have other characteristics that have made them a widely used model organism for over a century.
Inbred mouse strains such as the C57BL/6N strain used by the IMPC have standardized, uniform genetic backgrounds that decrease phenotypic variability, and most strains have a 2-year lifespan that allows for comprehensive studies in a timely manner. Reproducibility of results in translational studies is a major issue, and we found that the overlap of phenotypes between IMPC mouse strains and previously published mutant strains was in line with results from other studies investigating reproducibility. This finding highlights the importance of high-quality phenotype annotation of human clinical records and mouse phenotypes and demonstrates the importance of open sharing of data. Toward this end, the IMPC adheres to the ARRIVE guidelines for reproducibility of animal-model experiments, including making all data available and allowing for transparent statistical analysis via free distribution of our PhenStat software 12 . In conclusion, the IMPC has established an ever-expanding knowledge base of mammalian gene function, a large resource of novel disease models and the capacity for functional validation of variants identified in disease sequencing projects. This information should be of great value to the human disease research community. Methods Mouse production. Targeted ES-cell clones obtained from the International Knockout Mouse Consortium resource 7 , 42 were injected into mouse blastocysts for chimera generation. The resulting chimeras were mated to C57BL/6N mice, and the progeny were screened to confirm germline transmission. After the recovery of germline-transmitting progeny, most strains (82%) were crossed with a coisogenic C57BL/6N transgenic strain bearing a germline-expressing Cre recombinase to excise the floxed neomycin selection cassette (neo) and the critical exon (for EUCOMM alleles) and to generate a true deletion. For the rest, because of the requirement early on for establishment and testing of the pipeline without additional breeding, lines were characterized that contained either tm1a alleles (16% of which relied on a stop codon that could potentially be spliced around and consequently retain the neo cassette, which can alter the transcriptional activity of other genes in cis ) or the KOMP1 allele (2% of which retained the neo cassette). The resulting C57BL/6N heterozygotes were intercrossed to determine viability and generate homozygous mutants. All strains are accessible from the IMPC portal. Mouse phenotyping and experimental design. On the basis of previous analysis on appropriate sample sizes to detect significant effects with our statistical framework (described below), a minimum of seven male and seven female mice were characterized from 9 weeks of age until 16 weeks of age, by using a broad-based phenotyping pipeline that assessed every major biological system. IMPC centers used a common control strategy in which cohorts of age-matched, wild-type C57BL/6N mice were phenotyped in a continuous manner alongside mutant C57BL/6N strains. These cohorts were used in quality control (e.g., baseline drift) and in statistical analysis of the data. A centralized database of consensus IMPC standard operating procedures, IMPReSS ( URLs ), ensured that all phenotyping data and metadata were collected in a reproducible and standardized format. Cohorts of at least seven homozygous mice of each sex per line were generated. If no homozygotes were obtained from 28 or more offspring of heterozygote intercrosses during production, the strain was scored as nonviable.
Similarly, if intercrossing resulted in less than 13% of the pups being homozygous, the strain was scored as subviable. For nonviable and subviable strains, heterozygous mice were committed to the phenotyping pipelines. Individual mice were considered the experimental units in the studies. Data quality control (QC). Defined criteria were established for QC failures (e.g., insufficient sample or incorrect instrument calibration) and are detailed within IMPReSS to provide valid reasons for discarding data. A second QC cycle occurred when data were uploaded from the phenotyping center to the IMPC Data Coordination Centre (DCC) through an internal QC web interface. Data were considered to fail QC and were removed from the dataset if we identified clear technical reasons for a measurement being an outlier, and QC failure was tracked within the database. Wild-type–knockout comparisons. Data sets comparing wild-type versus null were restricted to data collected at a single center and were assembled by selecting data from knockout and wild-type mice that had data collected through protocols of the same version with the same metadata parameters (e.g., instrument). Because wild-type mice were measured every week, data for a null strain were generally compared with data from hundreds of wild-type control mice. In the case in which all members of a null mouse strain were measured on the same day with an equal number of control mice, the comparison was restricted to this smaller set of data to eliminate batch effects. A data set consisted of the collection of data values (mutant and control) for a single measured variable (parameter) with the same allele, zygosity, center and experimental metadata. With IMPC data release version 5.0 (2 August 2016), the IMPC has analyzed 352,729 continuous data sets and 944,270 categorical data sets produced from ten phenotyping centers. These raw data are available through the IMPC web portal with a page detailing the various methods by which data can be extracted ( URLs ). Statistical analysis. Statistical analysis was performed with the PhenStat R package developed for IMPC. PhenStat is a suite of statistical analysis tools that account for known variations in experimental workflow and design of phenotyping pipelines 12 . Briefly, categorical data analysis was completed with two-tailed Fisher's exact tests. Continuous data analysis was performed with the PhenStat linear mixed-model framework ( URLs ), which uses linear mixed models that treat a batch as a random effect. Through high-throughput phenotyping programs, such as EUMODIC, in which data were systematically collected on one genetic background, the significant sources of variation could be identified, and it became obvious that differences among batches (defined here as those readings collected on a particular day) can lead to large variations among phenotyping variables 43 . Linear mixed models are a class of statistical models that are suited to modeling multiple sources of variability, such as batch effects, on a phenotype. Details of the implementation, including the decision-tree model, descriptions and the lower false discovery rates associated with multibatch data, are available in the PhenStat package user's guide ( URLs ) and have been described in the literature 43 . For this analysis, results from one-batch, low-batch (mutants measured in two to four batches) and multibatch (mutants measured in five or more batches) experiments were used.
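PhenStat itself is an R package; the sketch below reproduces the shape of the two core tests in Python, with statsmodels and scipy as stand-ins. The column names, file name and counts are hypothetical, and this is an outline of the approach rather than the PhenStat decision tree.

```python
import pandas as pd
import scipy.stats as st
import statsmodels.formula.api as smf

# Hypothetical table: one row per mouse, with the measured parameter
# ('value'), 'genotype' (WT/KO), 'sex' and 'batch' (the test day).
df = pd.read_csv("parameter_measurements.csv")

# Continuous parameters: linear mixed model with batch as a random
# effect, mirroring the PhenStat framework; the genotype coefficient
# is the effect of interest.
fit = smf.mixedlm("value ~ genotype + sex", df, groups=df["batch"]).fit()
print(fit.summary())

# Categorical parameters: two-tailed Fisher's exact test on a 2x2
# table of abnormal/normal calls in mutant versus control mice
# (hypothetical counts).
odds_ratio, p_value = st.fisher_exact([[3, 5], [2, 398]],
                                      alternative="two-sided")
print(p_value)
```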
For viability and fertility data, the center conducting the experiment used a statistical method appropriate for the breeding scheme used at that center (exact details are available in the IMPC data portal) and supplied the analysis results to the IMPC DCC. All available wild-type and mutant mice were used in the analysis, with center-specific blinding strategies used during group allocation, no specific inclusion or exclusion criteria, and no randomization approach beyond relying on mendelian inheritance to randomize, as detailed in our ARRIVE guideline document ( URLs ). All analysis presented in this publication was based on the binary assignment of a significant deviation (or lack thereof) from wild type and the associated phenotype term. Detailed output of our statistical analysis for each test is presented in our portal pages ( URLs ), including all raw data, summaries, visualizations, variances and calculated P values for genotype–phenotype associations. Matching mouse phenotypes to OMIM and Orphanet disease descriptions. Automated PhenoDigm. We used the Human Phenotype Ontology (HPO) annotations available from the Monarch Initiative (accessed 2 September 2016) describing the clinical phenotypic features of over 7,000 diseases reported in OMIM 17 and Orphanet 18 . These HPO terms were semantically compared with the phenotypic features (MP annotations) of IMPC mouse strains by using the PhenoDigm algorithm 22 , developed by us and fellow members of the Monarch Initiative, as reflected in authorship, to generate an overall score indicating the similarity of the phenotype of a given mouse strain to that of a particular disease. PhenoDigm calculates the individual score for each HPO–MP phenotypic match, on the basis of the proximity of the two terms in the overall cross-species ontology (Jaccard index; simJ) and the observed frequency of the phenotype in common from the overall disease and mouse annotations (Information Content; IC); i.e., exact clinical and mouse phenotype matches involving rarely observed phenotypes scored highest. The geometric mean of the IC and simJ was used to generate the HPO–MP pairwise score. The overall PhenoDigm percentage score was a comparison of the best and mean scores for all the pairwise HPO–MP comparisons relative to the maximum possible scores for a mouse model perfectly matching the disease. The disease models described in this paper were selected by applying a threshold of at least one HPO–MP match with a score greater than 1.35, thus maximizing precision and recall relative to those of other similarity thresholds of 1.0, 1.25, 1.5 or 1.75 ( Supplementary Fig. 1 ). Known human genes and regions associated with diseases were extracted from OMIM and Orphanet, and matching mouse orthologs were identified from HomoloGene 44 . Comparisons to previous mouse mutants from the MGI resource 1 were achieved by downloading and processing a file named MGI_GenePheno.rpt, which contains literature curation of mouse lines associated with all allele types except those involving conditional mutations, and ALL_OMIM.rpt, which curates any literature assertions of a particular mouse line being a mouse model of a particular OMIM disease ( URLs ; downloaded 2 September 2016). Lethality matching. Screening for lethal or potentially lethal genes from data within the OMIM database could not be automated. 
For the set of mouse genes that were homozygous preweaning subviable or lethal and that also had OMIM records, we manually inspected the OMIM records to identify those with reported in utero or early deaths (before two years of age) and coded these in Supplementary Table 1 as yes–lethal (Y-L), thus indicating that for some human cases with mutations for these genes, the phenotype of human lethality matched the phenotype of mouse subviability. We also screened for OMIM records with severe congenital defects and/or rapid progression of early-onset severe disease in human patients requiring substantial medical support for survival. Mice with similar phenotypes would not be likely to survive through weaning in the absence of medical support and therefore were scored as yes–probable lethal (Y-PL), thus indicating a probable match of the human phenotype to the mouse subviable phenotype. Matching candidate-gene phenotypes to human traits from OMIM linkage and cytogenetic findings. Diseases with no known molecular mechanism but a narrowed-down cytogenetic region containing the likely causative gene were extracted from OMIM (accessed 2 September 2016). Ensembl was used to identify the human genes within these regions and their mouse orthologs retrieved from HomoloGene. The overlaps between these genes and candidates from the PhenoDigm analysis of the same disease were then flagged within our database and are highlighted in both our portal and Supplementary Tables 1 , 2 , 3 , 4 , 5 , 6 . Identifying novel gene–phenotype relationships from the IMPC database. An online tool in the IMPC portal ( URLs ) imports GO annotations daily from the Quick GO resource 45 and categorizes them on the basis of the evidence codes assigned by GO curators. Annotations were analyzed on 24 March 2017. We started with 2,668 genes that had IMPC nonlethal phenotypes. Categories incorporated the following evidence codes: Experimental: inferred from experiment (EXP), inferred from direct assay (IDA), inferred from physical interaction (IPI), inferred from mutant phenotype (IMP), inferred from genetic interaction (IGI) and inferred from expression pattern (IEP). Curated computational: inferred from sequence or structural similarity (ISS), inferred from sequence orthology (ISO), inferred from sequence alignment (ISA), inferred from sequence model (ISM), inferred from genomic context (IGC), inferred from biological aspect of ancestor (IBA), inferred from biological aspect of descendant (IBD), inferred from key residues (IKR), inferred from rapid divergence (IRD) and inferred from reviewed computational analysis (RCA). Automated electronic: inferred from electronic annotation (IEA), traceable author statement (TAS), nontraceable author statement (NAS) and inferred by curator (IC). No biological data available: no biological data available (ND) and not listed as a gene in GO (no evidence code). Ethical approval. Mouse production, breeding and phenotyping at each center was done in compliance with each center's ethical animal care and use guidelines in addition to the applicable licensing and accrediting bodies, in accordance with the national legislation under which each center operates. Details of each center's ethical review organization, processes and licenses are provided in Supplementary Table 7 . All efforts were made to minimize suffering by considerate housing and husbandry. All phenotyping procedures were examined for potential refinements disseminated throughout the IMPC. 
Animal welfare was assessed routinely for all mice. Code availability. The automated phenotype comparisons were performed with the open-source OWLtools package provided by the Monarch Initiative. Data availability. All data presented here are openly available from the IMPC portal via our FTP site ( ftp://ftp.ebi.ac.uk/pub/databases/impc/latest/ ). We also provide regular data exports to the MGI group, which provides public access to all available mouse data, and the Monarch Initiative, which integrates genotype–phenotype data from humans and numerous other species. URLs. IMPC portal, ; glucose results for Gad2 , MGI:5548938&parameter_stable_id=IMPC_IPG_010_001&metadata_group=297b1cf545aee8eea0113b14aca71ef1&zygosity=homozygote&phenotyping_center=HMGU/ ; IMPC FTP site, ftp://ftp.ebi.ac.uk/pub/databases/impc/latest/ ; IMPC publications, ; International Mouse Phenotyping Resource of Standardised Screens (IMPReSS), ; IMPC data access, ; IMPC ARRIVE guidelines, ; IMPC GO annotations, ; ExpressionAtlas result for Fam53b , ; GEMM, ; PhenStat, ; MGI, ; MGI downloads, ftp://ftp.informatics.jax.org/pub/reports/ ; Monarch Initiative, ; OWLtools, . Additional information Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | The first results from a functional genetic catalogue of the laboratory mouse have been shared with the biomedical research community, revealing new insights into a range of rare diseases and the possibility of accelerating development of new treatments and precision medicine. The research, which generated over 20 million pieces of data, has found 360 new disease models and provides 28,406 new descriptions of the genes' effects on mouse biology and disease. The new disease models are being made available to the biomedical community to aid their research. The International Mouse Phenotyping Consortium (IMPC) is aiming to produce a complete catalogue of mammalian gene function across all genes. Their initial results, now published in Nature Genetics, are based on an analysis of the first 3,328 genes (15 per cent of the mouse genome coding for proteins). Lead author Dr Damian Smedley, from Queen Mary University of London (QMUL) and a Monarch Initiative Principal Investigator, said: "Although next generation sequencing has revolutionised the identification of new disease genes, there is still a lack of understanding of how these genes actually cause disease. "These 360 new disease models that we've identified in mice represent the first steps of a hugely important international project. We hope researchers will be able to use this knowledge to develop new therapies for patients, which is ultimately what we're all striving to achieve." With its similarity to human biology and ease of genetic modification, the laboratory mouse is arguably the preferred model organism for studying human genetic disease. However, the vast majority of the mouse genome remains poorly understood, as scientists tend to focus their research on a few specific areas of the genome linked to the most common inherited diseases. Development of therapies for rare disease lags far behind, with over half of diagnosed rare diseases still having no known causative gene. This is why the IMPC is aiming to build a complete database that systematically details the functions of all areas of the mouse genome, including neurological, metabolic, cardiovascular, respiratory and immunological systems.
Terry Meehan, IMPC Project Coordinator at the European Bioinformatics Institute (EMBL-EBI), said: "Mouse models allow us to speed up patient diagnosis and develop new therapies. But before that can work, we need to understand exactly what each gene does, and what diseases it is associated with. This is a significant effort in data collection and curation that goes well beyond the capabilities of individual labs. IMPC is creating a data resource that will benefit the entire biomedical community." The project involves going through the mouse genome systematically and knocking out a particular gene, one by one, in different mice. By looking at the mouse's resulting characteristics in a variety of standardised tests, the team then see if and how the gene knockout manifests itself as a disease, and link their findings to what is already known about the human version of the disease. The 'one by one' knockout approach lends itself to rare-disease gene discovery, as these diseases are often caused by variants of a single gene. More than half of the 3,328 genes characterised have never been investigated in a mouse before, and for 1,092 genes, no molecular function or biological process was previously known from direct experimental evidence. These include genes that have now been found to be involved in the formation of blood components (potentially involved in a type of anaemia), cell proliferation and stem cell maintenance. For the first time, human disease traits were seen in mouse models for forms of Bernard-Soulier syndrome (a blood clotting disorder), Bardet-Biedl syndrome (causing vision loss, obesity and extra fingers or toes) and Gordon Holmes syndrome (a neurodegenerative disorder with delayed puberty and lack of secondary sex characteristics). The team also identified new candidate genes for diseases with an unknown molecular mechanism, including an inherited heart disease called 'Arrhythmogenic Right Ventricular Dysplasia' that affects the heart muscle, and Charcot-Marie-Tooth disease, which is characterised by nerve damage leading to muscle weakness and an awkward way of walking. Dr Smedley added: "In addition to a better understanding of the disease mechanism and new treatments for rare disease patients, many of the lessons we learn here will also be of value to precision medicine, where the goal is to improve treatment through the customisation of healthcare based on a patient's genomic information." | 10.1038/ng.3901 |
Physics | First observation of de Broglie-Mackinnon wave packets achieved by exploiting loophole in 1980s theorem | Layton A. Hall et al, Observation of optical de Broglie–Mackinnon wave packets, Nature Physics (2023). DOI: 10.1038/s41567-022-01876-6 Journal information: Nature Physics | https://dx.doi.org/10.1038/s41567-022-01876-6 | https://phys.org/news/2023-01-de-broglie-mackinnon-packets-exploiting-loophole.html | Abstract de Broglie wave packets accompanying moving particles are dispersive and lack an intrinsic length scale solely dictated by the particle mass and velocity. Mackinnon proposed a localized non-dispersive wave packet constructed out of dispersive de Broglie phase waves that possess an intrinsic length scale via an inversion of the roles of particle and observer. So far, the de Broglie–Mackinnon wave packet has remained a theoretical proposal. Here we report the observation of optical de Broglie–Mackinnon wave packets using paraxial space–time-coupled pulsed laser fields in the presence of anomalous group-velocity dispersion. Crucially, the bandwidth of de Broglie–Mackinnon wave packets has an upper limit that is compatible with the wave-packet group velocity and equivalent mass. In contrast to previously observed linear-propagation-invariant wave packets whose spatio-temporal profiles at any axial plane are X-shaped, those for de Broglie–Mackinnon wave packets are uniquely O-shaped (circularly symmetric with respect to space and time). By sculpting their spatio-temporal spectral structure, we produce dispersion-free de Broglie–Mackinnon wave packets in the dispersive medium, observe their circularly symmetric spatio-temporal intensity profiles and closed-trajectory spectra, and tune the field parameters that uniquely determine the wave-packet length scale. Main It is well known that there are no dispersion-free wave-packet solutions to the (1 + 1)D potential-free Schrödinger equation—with the sole exception of the Airy wave packet identified by Berry and Balazs in 1979 (ref. 1 ), which accelerates in the absence of an external force instead of travelling at a fixed group velocity 2 . The Airy wave packet has impacted all areas of wave physics (for example, optics 3 , acoustics 4 , water waves 5 , electron beams 6 and as a model for Dirac particles 7 ). Less known is that in the year preceding the discovery of the Airy wave packet, Mackinnon identified a non-dispersive (1 + 1)D wave packet that travels at a constant group velocity 8 , but is constructed out of dispersive de Broglie phase waves that accompany the motion of a massive particle and are solutions to the Klein–Gordon equation. Originally, de Broglie demonstrated that the group velocity \(\widetilde{v}\) of a wave packet constructed of phase waves is equal to the particle velocity v (ref. 9 ). However, localized de Broglie wave packets are dispersive, as are Schrödinger wave packets 10 . Moreover, because localized de Broglie wave packets necessitate introducing an ad hoc uncertainty in the particle velocity 9 and there is no upper limit on the exploitable bandwidth, such wave packets lack an intrinsic length scale (that is, a scale uniquely determined by the particle mass and velocity). Through a Copernican inversion of the roles of particle and observer, Mackinnon constructed out of dispersive de Broglie phase waves a non-dispersive wave packet 8 , which we refer to henceforth as the de Broglie–Mackinnon (dBM) wave packet. 
Instead of introducing an ad hoc uncertainty into the particle velocity from the perspective of a privileged reference frame, Mackinnon suggested accounting for all the possible observers, who cooperatively report observations made in their reference frames to a single agreed-on frame in which Lorentz contraction and time dilation are corrected 8 . Besides retaining the salutary features of conventional de Broglie wave packets, Mackinnon’s construction unveiled an intrinsic length scale for the dBM wave packet solely determined by the particle mass and velocity. However, despite the clear algorithmic process for constructing the dBM wave packet, it is not a solution to the Klein–Gordon equation 8 , and is instead constructed only epistemologically in the selected reference frame. As such, dBM wave packets have yet to be realized in any physical wave. Nevertheless, it has been recognized that the (1 + 1)D dBM wave packet can be mapped to physical solutions of the optical wave equation by first embedding the field in a (2 + 1)D space, which allows introducing angular dispersion 11 , 12 . This procedure enables realizing the dBM dispersion relationship for axial propagation in the reduced-dimensionality (1 + 1)D space 13 . However, observing optical dBM wave packets in free space faces insurmountable practical difficulties 14 , 15 . For example, such wave packets are produced by optical dipoles travelling at relativistic speeds when the field is recorded by stationary, coherent detectors that fully encircle the moving dipole 16 . We investigate here a different strategy that makes use of the unique characteristics of optical wave propagation in the anomalous group-velocity dispersion (GVD) regime 17 to produce paraxial dBM wave packets. In this concept, an optical dBM wave packet is a particular realization of so-called space–time (ST) wave packets 15 , 18 , 19 , 20 , 21 , 22 in dispersive media 23 , 24 , 25 , 26 , 27 , 28 , 29 . In general, ST wave packets are pulsed optical beams whose unique characteristics (for example, tunable group velocity 30 and anomalous refraction 31 ) stem from the structure of their spatio-temporal spectrum rather than their particular spatial or temporal profiles. Recent advances in the synthesis of ST wave packets make them a convenient platform for producing a wide variety of structured pulsed fields 15 , including dBM wave packets. Here we provide unambiguous observations of optical dBM wave packets in the presence of anomalous GVD. Starting with generic femtosecond pulses, we make use of a universal angular dispersion synthesizer 32 to construct spatio-temporally structured optical fields in which the spatial and temporal degrees of freedom are no longer separable. Critically, the association between the propagation angle and wavelength is two-to-one rather than one-to-one as in conventional tilted pulse fronts 11 , 12 . This feature allows folding the spatio-temporal spectrum back on itself, thereby guaranteeing the paraxiality of the synthesized dBM wave packets. Consequently, these wave packets, in the medium, retain all the characteristic features of their free-space counterparts as the abovementioned difficulties are circumvented. Such ST-coupled wave packets are dispersive in free space, but become propagation invariant once coupled to a medium in the anomalous GVD regime, where they travel at a tunable group velocity \(\widetilde{v}\) . 
Although all the previously observed linear, propagation-invariant wave packets have been either X-shaped at a fixed axial plane 15 , 33 , 34 , 35 or separable 36 with respect to the transverse coordinate and time, the spatio-temporal profiles of dBM wave packets are—in contrast—circularly symmetric (O-shaped). In addition to the verification of this long-theorized O-shaped spatio-temporal structure 25 , 26 , 27 , we confirm the impact of the two identifying parameters (which are equivalent to particle mass and velocity) on the bandwidth and length scale of the non-dispersive dBM wave packets. Indeed, maintaining propagation invariance in the dispersive medium constrains the maximum bandwidth (minimum wave-packet length) according to the values of these two selected parameters (whereas there was no upper limit on the bandwidth of previously synthesized ST wave packets). Finally, in contrast to Airy wave packets that are the unique non-dispersive solution to the Schrödinger equation, the axial profile of dBM wave packets can be varied almost arbitrarily, which we confirm by modulating their spatio-temporal spectral phase distribution. These results may pave the way to optical tests of the solutions of the Klein–Gordon equation for massive particles, and may even lead to the synthesis of non-dispersive wave packets using matter waves. Results Theory of de Broglie wave packets de Broglie posited two distinct entities accompanying massive particles: an internal clock and an external phase wave 37 , 38 . For a particle of rest mass m o whose energy is expressed as E o = m o c 2 = ℏ ω o , the internal clock and infinite-wavelength phase wave coincide at the same de Broglie frequency ω o in the particle’s rest frame (Fig. 1a ); here c is the speed of light in vacuum and ℏ is the modified Planck constant. When the particle moves at velocity v , the values of these two frequencies diverge as observed in the rest frame: the internal frequency drops to \(\omega \,=\,{\omega }_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\) , whereas the phase-wave frequency increases to \(\omega \,=\,{\omega }_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{v}^{2}}\) and takes on a finite wavelength λ , where \({\beta }_{v}\,=\,\frac{v}{c}\) (Fig. 1b ). The wave number \(k\,=\,\frac{2\uppi }{\lambda }\) for the phase wave is determined by the de Broglie dispersion relationship \({\omega }^{2}\,=\,{\omega }_{{{{\rm{o}}}}}^{2}+{c}^{2}{k}^{2}\) (Fig. 1c ); therefore, a phase wave is a solution to the Klein–Gordon equation. Because de Broglie phase waves are extended, a particle with a well-defined velocity cannot be spatially localized. Instead, localizing the particle requires introducing an ad hoc uncertainty in the particle velocity (a spread from v to v + Δ v ) to induce a bandwidth Δ ω (from ω c to ω c + Δ ω ), and in turn, Δ k (from k c to k c + Δ k ) (refs. 8 , 9 ), thus resulting in a finite-width wave packet that is also a solution to the Klein–Gordon equation (Fig. 1c ). The wave-packet group velocity \(\widetilde{v}\,=\,1/\frac{\mathrm{d}k}{\mathrm{d}\omega }{\left\vert \right.}_{{\omega }_{{{{\rm{c}}}}}}\,=\,v\) is equal to the particle velocity, whereas its phase velocity is \({v}_{{{{\rm{ph}}}}}\,=\,\frac{\omega }{k}\,=\,\frac{{c}^{2}}{v}\) ( \({v}_{{{{\rm{ph}}}}}\widetilde{v}\,=\,{c}^{2}\) ; Methods). However, de Broglie wave packets are dispersive, that is, \(\frac{\mathrm{d}\widetilde{v}}{\mathrm{d}\omega }\,\ne \,0\) . Moreover, because there is no upper limit on the exploitable bandwidth (Fig. 
1c ), de Broglie wave packets lack an intrinsic length scale. In other words, there is no minimum wave-packet length that is uniquely identified by the particle parameters (mass m o and velocity v ). Fig. 1: de Broglie phase waves and wave packets, and dBM wave packets. a , In the rest frame of a particle, the internal clock and external phase wave theorized by de Broglie have the same frequency ω o . b , When the particle moves at velocity v along z , the frequency of the internal clock in the rest frame decreases to \({\omega }_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\) , whereas that of the phase wave increases to \({\omega }_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{v}^{2}}\) , where \({\beta }_{v}\,=\,\frac{v}{c}\) . c , Dispersion relationship for de Broglie phase waves \({\omega }^{2}\,=\,{\omega }_{{{{\rm{o}}}}}^{2}+{c}^{2}{k}^{2}\) plotted in \((k,\frac{\omega }{c})\) space. The group velocity evaluated at ω = ω c is \(\widetilde{v}\,=\,v\) . Constructing a localized de Broglie wave packet, therefore, necessitates introducing an ad hoc uncertainty in the particle velocity. d , Physical setting for a dBM wave packet. The particle travels at v and the observer at u , both along the z axis and with respect to a common rest frame. e , Dispersion relationship for a dBM wave packet in \((k,\frac{\omega }{c})\) space (bottom) corresponding to a stationary particle v = 0, delimited by the light lines \(| k| \,=\,\frac{\omega }{c}\) . The observer velocity u (top) is an internal parameter swept from − c to c to produce the dBM dispersion relationship. f , Same data as in e , but for v ≠ 0; here \({k}_{+}\,=\,{k}_{{{{\rm{o}}}}}\sqrt{\frac{1+{\beta }_{v}}{1-{\beta }_{v}}}\) , \({k}_{-}\,=\,{k}_{{{{\rm{o}}}}}\sqrt{\frac{1-{\beta }_{v}}{1+{\beta }_{v}}}\) , \({k}_{1}\,=\,\frac{{k}_{{{{\rm{o}}}}}}{\sqrt{1-{\beta }_{v}^{2}}}\) and \({k}_{2}\,=\,\frac{{k}_{{{{\rm{o}}}}}}{{\beta }_{v}}(1-\frac{{k}_{{{{\rm{o}}}}}}{{k}_{1}})\) (Methods). Full size image Non-dispersive dBM wave packets Mackinnon proposed an altogether different concept for constructing localized non-dispersive wave packets out of de Broglie phase waves that jettisons the need for introducing an ad hoc uncertainty in particle velocity to spatially localize it. Key to this proposal is a Copernican inversion of the roles of particle and observer. To elucidate this Copernican inversion, we re-emphasize that localizing a particle in space via a de Broglie wave packet requires introducing an ad hoc spread in the particle velocity as observed in the rest frame (indeed, a stationary particle in the rest frame cannot, in principle, be localized). Therefore, de Broglie’s approach privileges a single observer associated with the rest frame (Fig. 1c ). In other words, particle localization is driven by an uncertainty in the velocity inherent in the particle itself rather than being related to the observer. Similar to how Copernicus inverted—in his heliocentric system—the traditional roles of the Earth and the Sun, Mackinnon inverted the roles of the particle and the observer with respect to the origin of velocity uncertainty. Rather than the single privileged observer in the rest frame, Mackinnon considered a continuum of all potential observers travelling at physically accessible velocities (from − c to c ). 
Each one of these observers records a Lorentz-transformed particle velocity that differs from v (as recorded in the rest frame), thereby resulting in an uncertainty with regard to the particle velocity and consequently in the particle frequency ω and wave number k . As a result, a wave-packet bandwidth Δ k can be established in a particular reference frame and the particle can be localized, and a unique wave-packet length scale can be identified, even when its velocity is well defined in each observer's frame. The physical setting envisioned by Mackinnon is depicted in Fig. 1d , where the particle moves at a velocity v and an observer moves at u , both with respect to a common rest frame. Each potential observer records a different phase-wave frequency and wavelength. The crucial step is that all the potential observers travelling at velocities u ranging from − c to c report their observations to a selected reference frame, taken here to be the common rest frame. These phase waves are superposed in this frame—after accounting for Lorentz contraction and time dilation (Methods)—to construct the dBM wave packet, which is uniquely identified by the particle rest mass m o and velocity v (ref. 8 ). Consider first the simple scenario where the particle is at rest with respect to the selected frame ( v = 0). To the common rest frame, each observer reports a frequency \(\omega {\prime} \,=\,{\omega }_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{u}^{2}}\) and wave number \(k{\prime} \,=\,-{k}_{{{{\rm{o}}}}}{\beta }_{u}/\sqrt{1-{\beta }_{u}^{2}}\) , where \({\beta }_{u}\,=\,\frac{u}{c}\) . Accounting for time dilation results in ω ′→ ω = ω o , and accounting for Lorentz contraction yields k ′→ k = – k o β u . Therefore, the frequency in the rest frame based on the recordings of all the observers is ω = ω o , just as in the case of a de Broglie phase wave, but the wave number now extends over the range from − k o to k o to accompany the range of observer velocities u from c to − c (Fig. 1e ). In other words, the observer velocity u serves as an internal parameter that is swept to establish a new dispersion relationship whose slope is zero, thus indicating a particle at rest, namely, \(\widetilde{v}\,=\,v\,=\,0\) (refs. 8 , 13 ). The spectral representation of the support domain for this wave packet is a horizontal line ω = ω o in \((k,\frac{\omega }{c})\) space delimited by the two light lines \(k\,=\,\pm \frac{\omega }{c}\) (Fig. 1e ). In contradistinction to conventional de Broglie wave packets, a physically motivated length scale emerges for the dBM wave packet—even in the rest frame of the particle. The maximum spatial bandwidth is Δ k = 2 k o , which corresponds to a minimum wave-packet length scale of \({L}_{\min }\, \approx \,\frac{{\lambda }_{{{{\rm{o}}}}}}{2}\) , where \({\lambda }_{{{{\rm{o}}}}}\,=\,\frac{2\uppi }{{k}_{{{{\rm{o}}}}}}\) . This can be viewed as an optical theorem, where the dBM wave packet for a stationary massive particle cannot be spatially localized below the associated de Broglie wavelength λ o . Taking an equal-weight superposition across all the wave numbers, the dBM wave packet is \(\psi (z;t)\propto {\mathrm{e}}^{-\mathrm{i}{\omega }_{{{{\rm{o}}}}}t}{{{\rm{sinc}}}}(\frac{{{\Delta }}k}{2\uppi }z)\) , where \({{{\rm{sinc}}}}(x)\,=\,\frac{\sin \uppi x}{\uppi x}\) (ref. 8 ).
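Mackinnon's rest-frame construction lends itself to a direct numerical check. The short script below is our own illustration (not part of the original analysis), written in Python with k o = 1 in arbitrary units: it sweeps the observer velocity β u across (−1, 1), superposes the corrected phase waves with equal weights, and confirms that the envelope reproduces the sinc function quoted above.

```python
import numpy as np

# Mackinnon's rest-frame construction (v = 0), sketched numerically.
# After correcting for time dilation and Lorentz contraction, the phase wave
# reported by an observer moving at u has omega = omega_o and k = -k_o * beta_u.
k_o = 1.0                             # de Broglie wave number (arbitrary units)
beta_u = np.linspace(-1, 1, 2001)     # observer velocities u/c swept from -c to c
z = np.linspace(-20.0, 20.0, 1001)

# Equal-weight superposition of the reported phase waves, evaluated at t = 0.
psi = np.exp(-1j * k_o * np.outer(beta_u, z)).mean(axis=0)

# Envelope quoted in the text: sinc(Delta_k z / (2 pi)) with Delta_k = 2 k_o;
# numpy's sinc(x) = sin(pi x)/(pi x) matches the paper's convention exactly.
envelope = np.sinc(2 * k_o * z / (2 * np.pi))
assert np.allclose(psi.real, envelope, atol=1e-3)
assert np.abs(psi.imag).max() < 1e-12
print("sweeping the observer velocity reproduces the sinc envelope")
```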
A similar procedure can be followed when v ≠ 0, where the frequency and wave number in the selected reference frame are \(\omega \,=\,{\omega }_{{{{\rm{o}}}}}(1-{\beta }_{v}{\beta }_{u})/\sqrt{1-{\beta }_{v}^{2}}\) and \(k\,=\,{k}_{{{{\rm{o}}}}}({\beta }_{v}-{\beta }_{u})/\sqrt{1-{\beta }_{v}^{2}}\) , respectively (Methods). Because v is fixed whereas u extends from − c to c , a linear dispersion relationship between ω and k is established, namely, \(k\,=\,\frac{1}{{\beta }_{v}}(\frac{\omega }{c}-\frac{{k}_{{{{\rm{o}}}}}^{2}}{{k}_{1}})\) , where \({k}_{1}\,=\,{k}_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{v}^{2}}\) . The slope of the dBM dispersion relationship indicates that \(\widetilde{v}\,=\,v\) as in conventional de Broglie wave packets, but the dBM wave packet is now non-dispersive, that is, \(\frac{\mathrm{d}\widetilde{v}}{\mathrm{d}\omega }\,=\,0\) (Fig. 1f ). The limits on the spatial and temporal bandwidths for the dBM wave packet are Δ k = 2 k 1 and \(\frac{{{\Delta }}\omega }{c}\,=\,{\beta }_{v}{{\Delta }}k\) , respectively, leading to a reduced characteristic length scale of \({L}_{\min }\, \approx \,\frac{{\lambda }_{{{{\rm{o}}}}}}{2}\sqrt{1-{\beta }_{v}^{2}}\) as a manifestation of Lorentz contraction; a faster particle is more tightly localized. By assigning equal complex amplitudes to all the phase waves associated with this moving particle, the propagation-invariant dBM wave packet is \(\psi (z;t)\propto {\mathrm{e}}^{\mathrm{i}\frac{{{\Delta }}k}{2}(\beta z-ct)}{{{\rm{sinc}}}}(\frac{{{\Delta }}k}{2\uppi }(z-vt))\) . Crucially, unlike conventional de Broglie wave packets, the dBM wave packet is not a solution to the Klein–Gordon equation, although a modified wave equation can perhaps be constructed for it 8 . Indeed, the dBM wave packet proposed by Mackinnon cannot be produced in physical space, and is only epistemologically synthesized, as emphasized elsewhere 8 . Optical dBM wave packets in free space Despite their intrinsic interest from a fundamental point of view, as the only propagation-invariant fixed- \(\widetilde{v}\) wave packet based on de Broglie phase waves, dBM wave packets have remained, to date, theoretical entities. It has, nevertheless, been recognized that optical waves may provide a platform for their construction in physical space 13 , 14 . Because (1 + 1)D optical waves in free space are dispersion free ( \(k\,=\,\frac{\omega }{c}\) and \({v}_{{{{\rm{ph}}}}}\,=\,\widetilde{v}\,=\,c\) ), producing optical dBM wave packets requires first adding a transverse coordinate x to enlarge the field dimensionality to (2 + 1)D. The dispersion relationship, thus, becomes \({k}_{x}^{2}+{k}_{z}^{2}\,=\,{(\frac{\omega }{c})}^{2}\) , which represents the surface of a light cone 15 , 39 ; here k x and k z are the transverse and longitudinal components of the wave vector along x and z , respectively. The spectral support of any optical field corresponds to some region on the light-cone surface (Fig. 2a ). For a fixed value of \({k}_{x}\,=\,\pm \frac{{\omega }_{{{{\rm{o}}}}}}{c}\) , we retrieve the axial dispersion relationship for de Broglie phase waves as \({\omega }^{2}\,=\,{\omega }_{{{{\rm{o}}}}}^{2}+{c}^{2}{k}_{z}^{2}\) . A convenient parametrization of the field makes use of the propagation angle φ ( ω ) with respect to the z axis for the plane wave at frequency ω , where \({k}_{x}(\omega )\,=\,\frac{\omega }{c}\sin \varphi (\omega )\) and \({k}_{z}(\omega )\,=\,\frac{\omega }{c}\cos \varphi (\omega )\) . 
Angular dispersion is, thus, introduced into the (2 + 1)D field 11 , 12 , and its spectral support on the light-cone surface is a one-dimensional trajectory. We take optical dBM wave packets to be those whose axial dispersion relationship ω ( k z ) conforms to that of a dBM wave packet. This requires that the projection of the spectral support onto the \(({k}_{z},\frac{\omega }{c})\) plane be linear and delimited by the light lines \({k}_{z}\,=\,\pm \frac{\omega }{c}\) . Indeed, the spectral projections onto the \(({k}_{z},\frac{\omega }{c})\) plane (Fig. 2a,b ) coincide with those in Fig. 1e,f . Fig. 2: Optical dBM wave packets. From left to right: the light cone in \(({k}_{x},{k}_{z},\frac{\omega }{c})\) space intersecting with a spectral constraint in the form of a plane; the spectral projection onto the \(({k}_{z},\frac{\omega }{c})\) plane; the spectral projection onto the \(({k}_{x},\frac{\omega }{c})\) plane; the propagation angle φ ( ω ); and the real part of the spatio-temporal field profile ψ ( x , z ; t ) at a fixed axial plane z . a , Stationary monochromatic planar dipole in free space resulting from the constraint ω = ω o . b , Moving planar dipole in free space corresponding to the constraint \({k}_{z}\,=\,\frac{1}{{\beta }_{v}}(\frac{\omega }{c}-\frac{{k}_{{{{\rm{o}}}}}^{2}}{{k}_{1}})\) ; k − and k + are the same as Fig. 1f . c , Field in a dispersive medium in the anomalous regime after imposing the constraint \({k}_{z}\,=\,\frac{1}{{\beta }_{v}}\{\frac{\omega }{c}-{k}_{{{{\rm{o}}}}}(1-{n}_{{{{\rm{m}}}}}{\beta }_{v})\}\) . Full size image Consider first a monochromatic field ω = ω o whose spectral support is the circle at the intersection of the light cone with a horizontal iso-frequency plane (Fig. 2a ). This monochromatic field comprises plane waves of the same frequency ω o that travel at angles φ extending from 0 to 2π, whose axial wave numbers are \({k}_{z}(\varphi )\,=\,\pm \sqrt{{k}_{{{{\rm{o}}}}}^{2}-{k}_{x}^{2}}\,=\,{k}_{{{{\rm{o}}}}}\cos \varphi\) and extend from − k o to k o . This optical wave packet (Fig. 2a ) corresponds to the dBM wave packet for a particle in its rest frame (Fig. 1e ), and φ serves as the new internal parameter to be swept to produce the targeted dBM dispersion relationship, corresponding to the observer velocity u (Fig. 1e ). By setting the spectral amplitudes equal for all the plane-wave components, we obtain \(\psi (x,z;t)\propto {\mathrm{e}}^{-\mathrm{i}{\omega }_{{{{\rm{o}}}}}t}\,{{{\rm{sinc}}}}(\frac{{{\Delta }}{k}_{z}}{2\uppi }\sqrt{{x}^{2}+{z}^{2}})\) , where Δ k z = 2 k o (Fig. 2a ). Such a wave packet can be produced by a stationary, monochromatic planar dipole placed at the origin of the ( x , z ) plane. Observing this optical field requires coherent field detectors arranged around the 2π angle subtended by the dipole and then communicating the recorded measurements to a central station. This procedure is, therefore, not dissimilar in principle from that envisioned by Mackinnon for the dBM wave packet associated with a stationary particle, in which the measurements recorded by observers travelling at different velocities are communicated to the common rest frame (Fig. 1d ). When the dipole moves at velocity v along the z axis with respect to the stationary detectors encircling it, each constituent plane wave undergoes a different Doppler shift in the detectors’ rest frame. The field still comprises plane waves travelling at angles φ extending from 0 to 2π, but each plane wave now has a different frequency ω ( φ ). 
Nevertheless, the new spectral support for the dBM wave packet on the light cone is related to that for the stationary monochromatic dipole. Indeed, the Lorentz transformation associated with the relative motion between the source and detectors tilts the horizontal iso-frequency spectral plane (Fig. 2a ) by angle θ with respect to the k z axis (Fig. 2b ), where tan θ = β v (refs. 13 , 40 , 41 , 42 ), thus yielding a tilted ellipse whose projection onto the \(({k}_{x},\frac{\omega }{c})\) space is $$\frac{{k}_{x}^{2}}{{k}_{{{{\rm{o}}}}}^{2}}+\frac{{(\omega -c{k}_{1})}^{2}}{{({{\Delta }}\omega /2)}^{2}}=1.$$ (1) The spectral projection onto the \(({k}_{z},\frac{\omega }{c})\) plane is now the \({k}_{z}\,=\,{k}_{+}+\frac{\omega -{\omega }_{+}}{\widetilde{v}}\,=\,\frac{1}{{\beta }_{v}}(\frac{\omega }{c}-\frac{{k}_{{{{\rm{o}}}}}^{2}}{{k}_{1}})\) line, where \(\widetilde{v}\,=\,c\tan \theta \,=\,v\) is the wave-packet group velocity along z , \({k}_{+}\,=\,\frac{{\omega }_{+}}{c}\,=\,{k}_{{{{\rm{o}}}}}\sqrt{\frac{1+{\beta }_{v}}{1-{\beta }_{v}}}\) and \({k}_{1}\,=\,{k}_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{v}^{2}}\) . The spatial and temporal bandwidths are related via \(\frac{{{\Delta }}\omega }{c}\,=\,{\beta }_{v}{{\Delta }}{k}_{z}\) , where Δ k z = 2 k 1 . Each plane wave travels along a different direction in the ( x , z ) plane in such a way that their axial wave numbers k z reproduce the dBM dispersion relationship (compare Fig. 1f with Fig. 2b ). By setting the complex spectral amplitudes as constant for all the frequencies, we obtain the dBM wave packet (where \(\widetilde{v} < c\) ) as $$\psi (x,z;t)\propto {\mathrm{e}}^{\mathrm{i}\frac{{{\Delta }}{k}_{z}}{2}({\beta }_{v}z-ct)}\ {{{\rm{sinc}}}}\left(\frac{{{\Delta }}{k}_{z}}{2\uppi }\sqrt{(1-{\beta }_{v}^{2}){x}^{2}+{(z-vt)}^{2}}\right).$$ (2) The dBM wave packet, thus, propagates rigidly in free space at a group velocity \(\widetilde{v}\,=\,v\) . Two parameters uniquely identify the optical dBM wave packet: group velocity \(\widetilde{v}\) (corresponding to particle velocity) and wave number k o (corresponding to particle mass). Furthermore, the signature of the dBM wave packet (equation ( 2 )) is its circularly symmetric spatio-temporal profile in ( x , t ) space in any axial plane z . In contrast, all the other propagation-invariant wave packets that have been observed in free space are, to date, X-shaped 15 , 24 , 33 , 34 , 35 and are not circularly symmetric. Indeed, truncating the spectrum of the optical dBM wave packet obstructs the formation of the circularly symmetric profile and gives rise instead to the more familiar X-shaped counterpart 14 , 15 . The O-shaped spatio-temporal profile, as indicated by equation ( 2 ), can be observed only when the full bandwidth—delimited only by the light lines—is included. The field in the ( x , z ) plane recorded by stationary detectors encircling the moving dipole takes the form shown in Fig. 2b , as recently pointed out in a thought experiment 16 . Despite the conceptual simplicity of this optical scheme for producing dBM wave packets, it nevertheless faces obvious experimental challenges. Encircling an optical dipole moving at a relativistic speed with stationary detectors is far from practical realizability. The more realistic configuration in which the detectors are restricted to a small angular range within the paraxial regime centred on the z axis truncates the recorded field and precludes the observation of the O-shaped spatio-temporal profile 14 , 15 .
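Equation (2) itself is easy to probe numerically. The following sketch is our own illustration (c = 1, k o = 1, parameters chosen arbitrarily); it verifies the two features claimed for the free-space dBM wave packet, namely rigid translation at ṽ = v and an intensity that depends on x and z − vt only through a scaled radius, so that iso-intensity contours are circles (the O-shape).

```python
import numpy as np

# Equation (2) evaluated directly (c = 1, k_o = 1; our illustrative parameters).
beta_v = 0.5
v = beta_v                                       # group velocity in c = 1 units
k1 = 1.0 / np.sqrt(1 - beta_v**2)
dkz = 2 * k1                                     # Delta k_z = 2 k_1

def psi(x, z, t):
    r = np.sqrt((1 - beta_v**2) * x**2 + (z - v * t)**2)
    return np.exp(1j * 0.5 * dkz * (beta_v * z - t)) * np.sinc(dkz * r / (2 * np.pi))

x = np.linspace(-30, 30, 301)
z = np.linspace(-30, 30, 301)
X, Z = np.meshgrid(x, z)

# Rigid propagation: the t = 20 profile is the t = 0 profile displaced by v * t.
assert np.allclose(np.abs(psi(X, Z, 20.0)), np.abs(psi(X, Z - v * 20.0, 0.0)))
# O-shape: |psi| is a function of the radius sqrt((1 - beta_v^2) x^2 + (z - v t)^2)
# alone, so its iso-intensity contours are circles in the scaled coordinates.
print("equation (2): rigid, circularly symmetric (O-shaped) propagation")
```

The practical obstacles of encircling a relativistic dipole with coherent detectors, however, are untouched by such numerics.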
For these reasons, it is not expected that the O-shaped dBM wave packet can be observed using spatio-temporally structured optical fields in free space. Optical dBM wave packets in a dispersive medium The necessity of including the entire bandwidth delimited by the intersection of the dBM dispersion relationship with the free-space light cone (Fig. 2a,b ) presents insurmountable experimental obstacles. Producing paraxial dBM wave packets necessitates confining the spectrum to a narrow range of values of k z centred at a value of k z ≈ k o . Crucially, the linear spatio-temporal spectrum projected onto the \(({k}_{z},\frac{\omega }{c})\) plane must remain delimited at both ends by the light line, to produce the circularly symmetric spatio-temporal wave-packet profile. Clearly, these requirements cannot be met in free space. Nevertheless, this challenge can be tackled by exploiting the unique features of optical wave propagation in dispersive media. Specifically, the light-cone structure is modified in the presence of anomalous GVD; therefore, the curvature of the light line has the same sign as the de Broglie dispersion relationship (Fig. 2c ). In this case, imposing the characteristically linear dBM dispersion relationship produces a spectral support domain on the dispersive light-cone surface that satisfies all the above-listed requirements: (1) k z > 0 is maintained in the entire span of propagation angles φ ( ω ); (2) the field remains within the paraxial regime; and (3) the spectrum is delimited at both ends by the light line (Fig. 2c ). The spectral support is in the form of an ellipse at the intersection of the dispersive light cone with a tilted spectral plane. The centre of this ellipse is displaced to a large value of k z , and the spectral projection onto the \(({k}_{z},\frac{\omega }{c})\) plane is a line making an angle θ with the k z axis. The resulting wave packet is propagation invariant in the dispersive medium, has a circularly symmetric spatio-temporal profile and travels at a velocity \(\widetilde{v}\,=\,c\tan \theta\) independently of the physical parameters of the medium. In the anomalous GVD regime, the wave number is given by \(k(\omega )\,=\,n(\omega )\omega /c=k({\omega }_{{{{\rm{o}}}}}+{{\varOmega }})\,\approx \,{n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}+\frac{{{\varOmega }}}{{\widetilde{v}}_{{{{\rm{m}}}}}}-\frac{1}{2}| {k}_{2{{{\rm{m}}}}}| {{{\varOmega }}}^{2}+\cdots \,\) , where n ( ω ) is the refractive index. All the following quantities are evaluated at ω = ω o : n m = n ( ω o ) is the refractive index, \({\widetilde{v}}_{{{{\rm{m}}}}}\,=\,1/\frac{\mathrm{d}k}{\mathrm{d}\omega }{\left\vert \right.}_{{\omega }_{{{{\rm{o}}}}}}\) is the group velocity for a plane-wave pulse in the medium and \({k}_{2{{{\rm{m}}}}}\,=\,\frac{{\mathrm{d}}^{2}k}{\mathrm{d}{\omega }^{2}}{\left\vert \right.}_{{\omega }_{{{{\rm{o}}}}}}\,=\,-| {k}_{2{{{\rm{m}}}}}|\) is the negative-valued anomalous GVD coefficient 17 . The dispersion relationship in the medium ( \({k}_{x}^{2}+{k}_{z}^{2}\,=\,{k}^{2}\) ) geometrically corresponds to the surface of the modified dispersive light cone (Fig. 2c ). Similar to the free-space scenario, we impose a spectral constraint of the form \({k}_{z}\,=\,{n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}+\frac{{{\varOmega }}}{\widetilde{v}}\,=\,\frac{1}{{\beta }_{v}}\left\{\frac{\omega }{c}-{k}_{{{{\rm{o}}}}}(1-{n}_{{{{\rm{m}}}}}{\beta }_{v})\right\}\) in the medium, where Ω = ω − ω o and \(\widetilde{v}\,=\,c\tan \theta\) is the group velocity of the wave packet (Fig. 2c ). 
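To make this geometry concrete, the following back-of-envelope script (our sketch; the parameter values are merely illustrative and only loosely inspired by the experiment reported below) intersects a linear spectral constraint with the curved light line at k x = 0 and estimates the resulting maximum bandwidth.

```python
import numpy as np

# Where a linear dBM constraint meets the curved (anomalous-GVD) light line.
# Illustrative parameters; the experimental values below differ in detail.
c = 3e8                          # m/s
lam_o = 1.054e-6                 # fixed intersection wavelength (m)
k2m = 500e-30 / 1e-3             # |k_2m| = 500 fs^2/mm, converted to s^2/m
v_m = c                          # plane-wave group velocity in the medium (n_m ~ 1)
v = 0.9975 * c                   # chosen dBM group velocity, v < v_m

# Light line: k = n_m k_o + W/v_m - |k2m| W^2 / 2 ; constraint: k = n_m k_o + W/v.
# They are equal at W = 0 and at the finite edge below (n_m k_o cancels):
W_edge = -2.0 * (1.0 / v - 1.0 / v_m) / k2m      # negative: spectrum lies below w_o
d_lambda = lam_o**2 / (2 * np.pi * c) * abs(W_edge)
print(f"maximum bandwidth ~ {d_lambda * 1e9:.0f} nm")  # ~20 nm here, the same order
                                                       # as the ~16 nm reported below
```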
The wave-packet spectrum, as defined by this constraint, is delimited by the light line at its two ends, both located, however, in the range of k z > 0, in contrast to the previous scenarios depicted in Figs. 1e,f and 2a,b (Methods). The spectral projections onto the \(({k}_{x},\frac{\omega }{c})\) and ( k x , k z ) planes of the spectral support on the dispersive light cone are ellipses (Methods): $$\frac{{k}_{x}^{2}}{{k}_{x,\max }^{2}}+\frac{{(\omega -{\omega }_{{{{\rm{c}}}}})}^{2}}{{({{\Delta }}\omega /2)}^{2}}=1,\quad \frac{{k}_{x}^{2}}{{k}_{x,\max }^{2}}+\frac{{({k}_{z}-{k}_{{{{\rm{c}}}}})}^{2}}{{({{\Delta }}{k}_{z}/2)}^{2}}=1,$$ (3) where the temporal bandwidth is \(\frac{{{\Delta }}\omega }{c}\,=\,2\frac{{k}_{{{{\rm{o}}}}}}{{\sigma }_{{{{\rm{m}}}}}}\frac{1-{\beta }_{v}^{{\prime} }}{{\beta }_{v}^{2}}\,=\,{\beta }_{v}{{\Delta }}{k}_{z}\) , σ m = c ω o ∣ k 2m ∣ is a dimensionless dispersion coefficient, \({\beta }_{v}^{{\prime} }\,=\,\frac{\widetilde{v}}{{\widetilde{v}}_{{{{\rm{m}}}}}}\) , \({k}_{x,\max }\,=\,\frac{1}{2}\frac{{{\Delta }}\omega }{c}\sqrt{{n}_{{{{\rm{m}}}}}{\sigma }_{{{{\rm{m}}}}}}\) , ω c = ω o − Δ ω /2, \({k}_{{{{\rm{c}}}}}\,=\,{n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}-\frac{{{\Delta }}{k}_{z}}{2}\) , k x ,max ≪ n m k o , Δ k z ≪ n m k o and Δ ω ≪ ω o . It is crucial to recognize that the ellipse projected onto the ( k x , k z ) plane does not enclose the origin at ( k x , k z ) = (0, 0), but is rather displaced to a central value of k c ≫ Δ k z . Therefore, the optical field comprises plane-wave components that propagate only in the forward direction within a small angular range centred on the z axis, and the field, thus, remains within the paraxial domain. Nevertheless, because the spectrum is delineated at both ends by the curved dispersive light line, the resulting spatio-temporal profile is circularly symmetric in any axial plane z . This wave packet in the dispersive medium, thus, satisfies all the above-listed desiderata for an optical dBM wave packet, but can be readily synthesized and observed in contrast to its free-space counterparts. One difficulty, however, arises from the form of φ ( ω ) in the dispersive medium, which fundamentally differs from that in free space. Each frequency ω in a free-space optical dBM wave packet is associated with two propagation angles ± φ ( ω ). However, each propagation angle φ is associated with a single frequency; therefore, ∣ φ ( ω ) ∣ is one to one. In the optical dBM wave packet in the dispersive medium, each ω is still associated with two propagation angles ± φ ( ω ); but φ ( ω ) is now two to one such that φ ( ω ) is folded back on itself (Fig. 2c ). To synthesize such a field configuration, a synthesizer capable of sculpting φ ( ω ) almost arbitrarily is required. Experimental confirmation Setup To construct the optical dBM wave packet in free space from a generic pulsed beam in which the spatial and temporal degrees of freedom are uncoupled, we introduce angular dispersion by assigning a particular pair of angles ± φ ( λ ) to each wavelength λ , thereby coupling the spatial and temporal degrees of freedom. We carry out this task using a universal angular dispersion synthesizer 32 , in which a spatial light modulator (SLM) deflects each wavelength from a spectrally resolved laser pulse at prescribed angles (Fig. 3 ; Methods provides the experimental details). Because each wavelength λ is deflected at φ ( λ ) independently of all the other wavelengths, φ ( λ ) need not be one to one.
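The two-to-one character of the angle-wavelength mapping can be read directly off the first ellipse in equation (3). The sketch below is ours (paraxial, n m ≈ 1 so that k x ≈ (ω/c)φ, with an assumed bandwidth and a k x,max of the order measured below); it tabulates the angle pairs ±φ(ω) across the band.

```python
import numpy as np

# Angle pairs along the Eq. (3) ellipse (our sketch; assumed parameter values).
c = 3e8
w_o = 2 * np.pi * c / 1.054e-6            # carrier frequency
d_w = 0.02 * w_o                          # assumed temporal bandwidth
w_c = w_o - d_w / 2                       # ellipse centre, per Eq. (3)
kx_max = 0.03e6                           # rad/m, order of the measured value

w = np.linspace(w_c - d_w / 2, w_c + d_w / 2, 9)
kx = kx_max * np.sqrt(np.clip(1 - ((w - w_c) / (d_w / 2))**2, 0.0, None))
phi = kx / (w / c)                        # small-angle propagation angle (rad)
for wi, p in zip(w, phi):
    print(f"omega/omega_o = {wi / w_o:.4f} -> phi = +/-{np.degrees(p):.4f} deg")
```

The computed ∣φ∣ rises from zero at both band edges to a maximum in between, so the angle-wavelength map folds back on itself.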
Indeed, it can readily be a two-to-one mapping as required for paraxial optical dBM wave packets. The dBM wave packet is formed once all the wavelengths are recombined by a grating to reconstitute the pulsed field. The spatio-temporal spectrum of the synthesized wave packet is acquired by operating on the spectrally resolved field with a spatial Fourier transform and recording the intensity with a charge-coupled device (CCD) camera. This measurement yields the spatio-temporal spectrum projected onto the ( k x , λ ) plane, from which we can obtain the spectral projection onto the ( k z , λ ) plane. The spatio-temporal intensity profile I ( x ; τ ) at a fixed axial plane z is reconstructed in the frame travelling at \(\widetilde{v}\) ( \(\tau \,=\,t-z/\widetilde{v}\) ) via linear interferometry exploiting the procedure developed elsewhere 30 , 31 , 43 (Methods). The dispersive medium exploited in our measurements comprises a pair of chirped Bragg mirrors providing an anomalous GVD coefficient of k 2m ≈ −500 fs 2 mm –1 and \({\widetilde{v}}_{{{{\rm{m}}}}}\,\approx \,c\) (Methods). Fig. 3: Synthesizing and characterizing optical dBM wave packets. Starting with femtosecond laser pulses (spectrum plotted in the inset), the ST synthesis arrangement associates each wavelength λ with the prescribed propagation angles ± φ ( λ ) before traversing the dispersive medium 32 . Spectral analysis is performed at this stage to verify that the target angular profile ± φ ( λ ) is indeed produced by the phase distribution imparted by the SLM to the spectrally resolved wavefront. Interfering the synthesized optical dBM wave packets with reference plane-wave pulses from the initial laser helps reconstruct the spatio-temporal intensity profile of the dBM wave packets. Full size image Measurement results We first verify the unique signature of dBM wave packets in the presence of anomalous GVD, namely, the O-shaped spatio-temporal intensity profile at any axial plane after inculcating the dBM dispersion relationship into the field. In Fig. 4 , we verify three sought-after features: (1) the closed elliptical spatio-temporal spectrum projected onto the ( k x , λ ) plane using the spectral analysis system (Fig. 3 ); (2) the linear spectral projection onto the ( k z , λ ) plane, indicating non-dispersive propagation in the dispersive medium, which we obtain from the measured ( k x , λ ) spectral projection; and (3) the circularly symmetric spatio-temporal intensity profile I ( x ; τ ) reconstructed from the interference of the dBM wave packet with a reference pulse at a fixed axial plane ( z = 30 mm). In Fig. 4a , we plot the measurements for an optical dBM wave packet having a group velocity of \(\widetilde{v}\,=\,0.9975c\) . The temporal bandwidth is constrained to a maximum value of Δ λ ≈ 16 nm, and the associated spatial bandwidth Δ k x ≈ 0.03 rad μm –1 , thus resulting in a pulse width of Δ T ≈ 200 fs at x = 0, and a spatial profile width of Δ x ≈ 38 μm at τ = 0. The spectral projection onto the ( k z , λ ) plane is delimited at both ends by the curved light line of the dispersive medium. In other words, a larger bandwidth is incompatible with propagation invariance at this group velocity in the dispersive medium. Indeed, a further increase in the bandwidth extends the spectral projection below the dispersive light line, which only contributes to evanescent field components. 
The measured spatio-temporal profile I ( x ; τ ), therefore, has the smallest dimensions in space and time for a circularly symmetric dBM wave packet compatible with the selected group velocity in the medium. Fig. 4: Observation of optical dBM wave packets in the presence of anomalous GVD and tuning their group velocity. In the first column, we plot the measured spatio-temporal spectrum projected onto the ( k x , λ ) plane; the dotted curve indicates the theoretical expectation based on equation ( 3 ). In the second column, we plot the spectral projection onto the ( k z , λ ) plane using the data from the first column; the dotted curve indicates the dispersive light line. In the third column, we plot the spatio-temporal intensity profile I ( x ; τ ) acquired in a moving reference frame travelling at the group velocity of the wave packets; the dotted circles are guides for the eye. a – c , Measurements for an optical dBM wave packet having a group velocity of \(\widetilde{v}\,=\,0.9975c\) ( a ), \(\widetilde{v}\,=\,0.9985c\) ( b ) and \(\widetilde{v}\,=\,0.999c\) ( c ). The dimensionless dispersion coefficient is σ m = c ω o ∣ k 2m ∣ = 0.3, and the measurements are carried out at z = 30 mm. Full size image To the best of our knowledge, this is the first observation of an O-shaped spatio-temporal intensity profile for a dispersion-free wave packet in a linear dispersive medium. Previous realizations of dispersion-free ST wave packets in dispersive media (whether in the normal or anomalous GVD regimes) revealed X-shaped spatio-temporal profiles 29 similar to those observed in free space 30 , 33 , 44 or in non-dispersive dielectrics 31 . In these experiments, however, the wave packets were not spectrally delimited by the dispersive-medium light line, which is a prerequisite for the realization of O-shaped optical dBM wave packets. As mentioned earlier, two parameters characterize a dBM wave packet: velocity v and rest mass m o . The corresponding variables associated with the optical dBM wave packet are \(\widetilde{v}\) and λ o , which can both be readily tuned in our experimental arrangement by changing the functional dependence of propagation angle φ on λ . In this way, we can vary the first parameter, namely, the group velocity \(\widetilde{v}\) . Increasing the group velocity from \(\widetilde{v}\,=\,0.9975c\) (Fig. 4a ) to \(\widetilde{v}\,=\,0.9985c\) (Fig. 4b ) and then to \(\widetilde{v}\,=\,0.999c\) (Fig. 4c ) reduces the maximum exploitable temporal bandwidth from Δ λ ≈ 16 nm to Δ λ ≈ 8 nm and Δ λ ≈ 6 nm, respectively, but the corresponding closed elliptic spectral projection onto the ( k x , λ ) plane, linear spectral projection onto the ( k z , λ ) plane and associated O-shaped spatio-temporal profile I ( x ; τ ) are retained. The spatial bandwidths drop from Δ k x ≈ 0.030 rad μm –1 to Δ k x ≈ 0.023 rad μm –1 and Δ k x ≈ 0.017 rad μm –1 , respectively. In all the three dBM wave packets (Fig. 4 ), we retain a fixed intersection with the dispersive light line at λ o ≈ 1,054 nm (corresponding to a fixed particle mass), such that reducing \(\widetilde{v}\) increases the wavelength of the second intersection point with the curved light line. The second parameter—wavelength λ o corresponding to the particle rest mass m o for de Broglie phase waves—can also be readily tuned (Fig. 5 ). Here the maximum exploitable bandwidth changes as a result of shifting the value of λ o from λ o = 1,054 nm (Fig. 5a ), where Δ λ = 16 nm, to λ o = 1,055 nm (Fig.
5b ), where Δ λ = 14 nm, and then to λ o = 1,056 nm (Fig. 5c ), where Δ λ = 12 nm. Once again, both spatial and temporal widths of the circularly symmetric O-shaped profile in the ( x , t ) domain change accordingly. Fig. 5: Tuning the equivalent rest mass of an optical dBM wave packet. The columns correspond to those in Fig. 4 . The dimensionless dispersion coefficient is σ m = c ω o ∣ k 2m ∣ = 0.3; all the measurements are acquired at z = 30 mm, and the group velocity is held fixed at \(\widetilde{v}=0.9975c\) . a – c , Short-wavelength intersection with the dispersive light line is λ o = 1,054 nm ( a ), λ o = 1,055 nm ( b ) and λ o = 1,056 nm ( c ). Full size image The Airy wave packet, as mentioned earlier, is the unique non-dispersive solution to Schrödinger's equation—no other waveform will do 45 . Although Mackinnon obtained a particular sinc-function-shaped wave packet 8 , this waveform is not unique. Indeed, the sinc function results from combining all the de Broglie phase waves with equal weights. However, dBM wave packets can take on, in principle, arbitrary waveforms by associating different magnitudes or phases with the plane-wave components constituting them. We confirm (Fig. 6 ) that the spatio-temporal profile I ( x ; τ ) of optical dBM wave packets can be modified while it remains propagation invariant in the dispersive medium. First, setting the complex spectral amplitudes to be equal along the elliptical spectral support, we obtain propagation-invariant circularly symmetric wave packets in the dispersive medium (Fig. 6a ). Truncating the ellipse and eliminating the plane-wave components in the vicinity of k x = 0 disrupts the formation of the full circular profile, but the wave packet nevertheless propagates invariantly (Fig. 6b ). By introducing a π step in the spectral phase along k x , a spatial null is formed along x = 0 in the dBM wave-packet profile (Fig. 6c ), whereas introducing the π-phase step along λ produces a temporal null along τ = 0 (Fig. 6d ). Finally, alternating the phases between 0 and π in the four quadrants of the spatio-temporal spectral plane ( k x , λ ) produces spatial and temporal nulls along both x = 0 and τ = 0 (Fig. 6e ). Despite such variations in their spatio-temporal profiles, all these optical dBM wave packets propagate invariantly in the dispersive medium. Fig. 6: Changing the spatio-temporal structure of optical dBM wave packets. From left to right: the measured spatio-temporal spectrum projected onto the ( k x , λ ) plane, along with the spectral phase; the measured spatio-temporal intensity profile I ( x , z ; τ ) at z = 15 mm; the measured spatio-temporal intensity profile I ( x , z ; τ ) at z = 30 mm; the calculated spatio-temporal intensity profile I ( x , z ; τ ) at fixed z ; I ( x ; τ = 0) at z = 30 mm; and I ( x = 0; τ ) at z = 30 mm. The dimensionless dispersion coefficient is σ m = c ω o ∣ k 2m ∣ = 0.3, and the group velocity is held fixed at \(\widetilde{v}\,=\,0.9975c\) , corresponding to Fig. 4a . a , Spectrum with uniform phase. b , Spectrum with uniform phase but its amplitude is truncated along λ . c , A π-phase step is introduced along k x . d , A π-phase step is introduced along λ . e , Spectral phase is alternated between 0 and π in the four quadrants of the ( k x , λ ) plane. Full size image Finally, the ideal spectral constraint underlying the optical dBM wave packets implies an exact association between the spatial and temporal frequencies.
Such idealized wave packets, consequently, have infinite energy 46 . In any realistic system, however, a spectral uncertainty is inevitably introduced into this association, resulting in a finite-energy wave packet travelling for a finite distance over which it is approximately invariant 47 . In our experiments, this spectral uncertainty arises from the finite spectral resolution of the diffraction grating employed (Fig. 3 ), which is estimated to be ~16 pm, corresponding to a propagation distance of ~32 m at a spectral tilt angle θ = 44.99° (ref. 43 ). Discussion We have experimentally realized the theoretical proposal made by Mackinnon almost 45 years ago for constructing a non-dispersive wave packet from dispersive de Broglie phase waves 8 . This is all the more surprising because Mackinnon’s proposed wave packet is neither a solution of the Klein–Gordon equation (although the underlying de Broglie phase waves are) nor is a solution of any other wave equation. Instead, as originally proposed, the dBM wave packet is to be constructed epistemologically, that is, measurements of individual phase waves obtained by a multiplicity of observers that are reported to a single reference frame are to be utilized to constitute the dBM wave packet (after correcting for Lorentz contraction and time dilation). Such a construction is, thus, expected to remain a theoretical object. Nevertheless, by utilizing recent developments in structuring optical fields 15 , 48 , dBM wave packets have transitioned from theoretical objects to ones that can be realized in a physical field. The importance and uniqueness of the dBM wave packet can be best appreciated compared with Airy wave packets. Because Airy wave packets are the only non-dispersive solutions to the Schrödinger equation (and thus also to the monochromatic paraxial wave equation in free space as well as the wave equation for a plane-wave pulse in a dispersive medium), it has had tremendous impact across all the domains of physical waves. Unlike Airy wave packets that exhibit self-acceleration, dBM wave packets have a constant velocity and are thus truly propagation invariant. Most crucially, although the specific complex spectral amplitudes that produce the Airy profile must be implemented, there are no spectral amplitudes that are emblematic of dBM wave packets, and their spatio-temporal profiles can be varied (Fig. 6 ) whereas they retain their characteristic closed-trajectory spatio-temporal spectrum. Moreover, the experimental procedure implemented here points to a synthesis strategy for generalized dispersion relationships that extends beyond the particular scenario of dBM wave packets. The overarching theme is that novel dispersion relationships for the axial propagation of a wave packet can be imposed by first adding another dimension to space and then exploiting the new dimension to tailor the dispersion relationship before spectral projection back onto the original reduced-dimensionality space. In the scenario studied here, starting with a (1 + 1)D physical wave in which an axial dispersion relationship ω ( k z ) is enforced by the dynamics of the wave equation, increasing the dimensionality of the space to (2 + 1)D by including a transverse coordinate x yields a new dispersion relationship ω ( k x , k z ). In free space, optical waves are subject to the constraint ω = c k z in (1 + 1)D and \(\omega ({k}_{x},{k}_{z})\,=\,c\sqrt{{k}_{x}^{2}+{k}_{z}^{2}}\) in (2 + 1)D. 
By judiciously associating each transverse wave number k x with a particular axial wave number k z , a reduced-dimension axial dispersion relationship ω red. ( k z ) is obtained: ω ( k x , k z ) = ω ( k x ( k z ), k z ) ↦ ω red. ( k z ), which can be engineered almost arbitrarily. In the experiment reported here, we employed this strategy to produce a linear dispersion relationship \(\omega ({k}_{x},{k}_{z})\,\mapsto \,{\omega }_{{{{\rm{red.}}}}}({k}_{z})\,=\,({k}_{z}-{k}_{{{{\rm{o}}}}})\widetilde{v}\) when projected onto the \(({k}_{z},\frac{\omega }{c})\) plane. In the presence of anomalous GVD, this dispersion relationship is delimited at both ends by the curved light line, thereby yielding the circular symmetric spatio-temporal profile characteristic of dBM wave packets. One may envision a variety of other scenarios that can be facilitated by this general strategy. For example, one may produce accelerating wave packets 49 , 50 , 51 whose group velocity changes with propagation in linear or nonlinear media, which is predicted to produce a host of new phenomena related to two-photon emission 52 and relativistic optics 53 , 54 . Here we emphasize the main distinctions between the optical dBM wave packets realized in this work and all the ST wave packets reported to date. First, although all previously synthesized ST wave packets are X-shaped, dBM wave packets, on the other hand, are the sole example of O-shaped (that is, circularly symmetric with respect to space and time coordinates) propagation-invariant wave packets. Second, the spatio-temporal spectra for dBM wave packets comprise closed trajectories with a two-to-one association between wavelength and spatial frequency. In contrast, all the ST wave packets reported to date have had open-trajectory spatio-temporal spectra, thus resulting in a one-to-one association between the spatial and temporal frequencies. Third, dBM wave packets have an upper limit on the bandwidth, which is consistent with propagation invariance at the selected group velocity in the medium, whereas all the reported ST wave packets to date have had no intrinsic upper limit on the bandwidth within the paraxial domain. The dimensionality of the (2 + 1)D dBM wave packets described here can be further extended to the full-dimensional (3 + 1)D space of ( x , y , z ; t ) by including the second transverse coordinate y . This can now be achieved in light of very recent progress in producing the so-called three-dimensional ST wave packets that are localized in all the dimensions of (3 + 1)D space 55 , 56 , 57 , 58 . Combining this new synthesis methodology with the procedure outlined here for producing dBM wave packets in the anomalous GVD regime will yield spherically symmetric propagation-invariant pulsed field structures that are compatible with coupling to optical fibres and waveguides, thus enabling new opportunities in optical communications, optical signal processing, and nonlinear and quantum optics. Indeed, the GVD in optical fibres in the telecommunications window is anomalous, thus yielding the possibility of producing in-fibre dBM wave packets. Moreover, such field configurations provide a platform for exploring proposed topological structures associated with polarization (spin texture) 55 without resorting to stereo-projection onto a 2D plane. 
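Returning to the reduced-dimensionality strategy just outlined, a minimal sketch (ours, with c = 1 and k o = 1, using the free-space dBM line of equation (6) as the target law) shows how each axial wave number k z is assigned the transverse wave number k x that places the corresponding plane wave on the light cone.

```python
import numpy as np

# Dimensionality-expansion strategy, sketched for the free-space dBM line of
# Eq. (6): prescribe the target axial law omega_red(k_z), then let the light
# cone fix the required transverse wave number k_x(k_z).  (c = 1, k_o = 1.)
beta = 0.5
k_o = 1.0
g = np.sqrt(1 - beta**2)
k_plus = k_o * np.sqrt((1 + beta) / (1 - beta))    # intersection with k_z = +omega/c
k_minus = k_o * np.sqrt((1 - beta) / (1 + beta))   # intersection with k_z = -omega/c

kz = np.linspace(-k_minus, k_plus, 7)
w = beta * kz + k_o * g                            # target omega_red(k_z)/c, Eq. (6)
kx = np.sqrt(np.clip(w**2 - kz**2, 0.0, None))     # light-cone condition
for a, b, d in zip(kz, w, kx):
    print(f"k_z = {a:+.3f}   omega/c = {b:.3f}   k_x = +/-{d:.3f}")
# The swept (k_x, k_z) pairs realize the engineered axial dispersion with group
# velocity d(omega)/d(k_z) = beta * c, and k_x vanishes at the two light-line ends.
```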
Of course, the strategy employed here is not confined to optical waves, and is indeed agnostic with respect to the physical substrate; therefore, it can be implemented, in principle, with acoustic waves, microwaves, surface plasmon polaritons 59 , electron beams, neutron beams or other massive particles. In all the cases, an added spatial dimension can be exploited to override the intrinsic dispersion relationship of the particular wave phenomenon, thus producing novel propagation dynamics. Intriguingly, the results here allow us to return to de Broglie’s starting point almost 100 years ago and complete his program. Mackinnon’s work in proposing dispersion-free wave packets took a major step towards vindicating de Broglie’s pioneering efforts by constructing wave packets that can serve as stable representations of massive particles, although these wave packets were destined to remain a theoretical concept in the form that he originally proposed 8 . Inspired by Mackinnon’s proposal, our approach suggests that a decisive step can now be taken towards physically realizing propagation-invariant particle wave packets—using de Broglie phase-wave solutions of the Klein–Gordon equation—whose size is uniquely determined solely by the intrinsic attributes of the particle itself (its rest mass and velocity). Methods de Broglie phase waves At rest, the frequency of the internal clock and that of the infinite-wavelength phase wave are the de Broglie frequency ω o . When the particle moves at velocity v , the observed frequency of the internal clock in the rest frame is reduced to \(\omega \,=\,{\omega }_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\) , whereas that of the phase wave is increased to \(\omega \,=\,{\omega }_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{v}^{2}}\) , where β v = v / c (ref. 9 ). To obtain the phase velocity v ph of the phase wave, de Broglie proposed a theory of phase harmony, which requires that the internal clock and phase wave remain in phase for all t and at any v (ref. 9 ). The phase of the moving clock in the rest frame is \(\phi \,=\,{\omega }_{{{{\rm{o}}}}}t\sqrt{1-{\beta }_{v}^{2}}\) , and that of the phase wave is \(\phi \,=\,{\omega }_{{{{\rm{o}}}}}(t-\frac{z}{{v}_{{{{\rm{ph}}}}}})/\sqrt{1-{\beta }_{v}^{2}}\) . At time t , the particle has covered a distance z = v t , and equating the phases yields \({v}_{{{{\rm{ph}}}}}\,=\,\frac{{c}^{2}}{v}\, > \,c\) . Alternatively, the Lorentz transformation of the proper time is \(t{\prime} \,=\,(t-\frac{v}{{c}^{2}}z)/\sqrt{1-{\beta }_{v}^{2}}\) and \({\omega }_{{{{\rm{o}}}}}t{\prime} \,=\,\omega t-kz\,=\,\omega t-\frac{\omega }{{v}_{{{{\rm{ph}}}}}}z\) , from which we again have \({v}_{{{{\rm{ph}}}}}\,=\,\frac{{c}^{2}}{v}\) . Conventional de Broglie wave packets The equations \(\omega \,=\,{\omega }_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{v}^{2}}\) and \(k\,=\,\frac{\omega }{{v}_{{{{\rm{ph}}}}}}\,=\,\frac{\omega }{c}\beta\) for the phase wave yield the dispersion relationship \(k\,=\,\frac{1}{c}\sqrt{{\omega }^{2}-{\omega }_{{{{\rm{o}}}}}^{2}}\) . A de Broglie wave packet of finite bandwidth Δ ω centred at ω = ω c requires including an uncertainty Δ v in the particle velocity centred on speed v (which corresponds to ω c ). The group velocity of the de Broglie wave packet is thus \(\widetilde{v}\,=\,1/\frac{\mathrm{d}k}{\mathrm{d}\omega }{\left\vert \right.}_{{\omega }_{{{{\rm{c}}}}}}\,=\,c\sqrt{1-{(\frac{{\omega }_{{{{\rm{o}}}}}}{{\omega }_{{{{\rm{c}}}}}})}^{2}}\,=\,c{\beta }_{v}\,=\,v\) . 
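These phase-wave relations are simple enough to verify symbolically; a short sympy sketch (ours):

```python
import sympy as sp

# Symbolic check of the phase-wave relations above: from k = sqrt(w^2 - w_o^2)/c,
# the group velocity is c*sqrt(1 - (w_o/w)^2) and v_ph * v_g = c^2.
w, w_o, c = sp.symbols('omega omega_o c', positive=True)
k = sp.sqrt(w**2 - w_o**2) / c

v_g = 1 / sp.diff(k, w)                 # group velocity, 1/(dk/dw)
v_ph = w / k                            # phase velocity, w/k
assert sp.simplify(v_g**2 - c**2 * (1 - (w_o / w)**2)) == 0
assert sp.simplify(v_ph * v_g - c**2) == 0
print("v_g = c*sqrt(1 - (w_o/w)^2) and v_ph * v_g = c^2 confirmed")
```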
Formulation of dBM wave packets Consider the configuration depicted in Fig. 1d , where the particle moves at velocity v and an observer moves at velocity u , both with respect to a selected rest frame. Here the relative velocity of the particle with respect to the observer is ξ , where \({\beta }_{\xi }\,=\,\frac{\xi }{c}\,=\,\frac{{\beta }_{v}-{\beta }_{u}}{1-{\beta }_{v}{\beta }_{u}}\) , \({\beta }_{v}\,=\,\frac{v}{c}\) and \({\beta }_{u}\,=\,\frac{u}{c}\) . According to this observer, the frequency and wave number are \(\omega {\prime} \,=\,{\omega }_{{{{\rm{o}}}}}/\sqrt{1-{\beta }_{\xi }^{2}}\) and \(k{\prime} \,=\,{k}_{{{{\rm{o}}}}}{\beta }_{\xi }/\sqrt{1-{\beta }_{\xi }^{2}}\) , respectively. The crucial step proposed by Mackinnon is that all the potential observers with velocities u ranging from − c to c report their observations of ω ′ and k ′ to the common rest frame, where the wave packet is constructed epistemologically after accounting for Lorentz contraction and time dilation. Following this prescription, it is straightforward to show that $$\omega ={\omega }_{{{{\rm{o}}}}}\frac{1-{\beta }_{v}{\beta }_{u}}{\sqrt{1-{\beta }_{v}^{2}}},\quad k={k}_{{{{\rm{o}}}}}\frac{{\beta }_{v}-{\beta }_{u}}{\sqrt{1-{\beta }_{v}^{2}}}.$$ (4) Because v is a fixed velocity whereas u extends from − c to c , a linear dispersion relationship between ω and k is established (Fig. 1f ): $$k=\frac{1}{{\beta }_{v}}\left(\frac{\omega }{c}-{k}_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\right).$$ (5) This line intersects with the light line \(k\,=\,\frac{\omega }{c}\) at \(k\,=\,{k}_{+}\,=\,{k}_{{{{\rm{o}}}}}\sqrt{\frac{1+{\beta }_{v}}{1-{\beta }_{v}}}\) (when u = − c ), and with the light line \(k\,=\,-\frac{\omega }{c}\) at \(k\,=\,-{k}_{-}\,=\,-{k}_{{{{\rm{o}}}}}\sqrt{\frac{1-{\beta }_{v}}{1+{\beta }_{v}}}\) (when u = c ), where \({k}_{+}\,=\,\frac{{\omega }_{+}}{c}\, > \,{k}_{{{{\rm{o}}}}}\) and \({k}_{-}\,=\,\frac{{\omega }_{-}}{c}\, < \,{k}_{{{{\rm{o}}}}}\) (Fig. 1f ). When u = 0, the associated de Broglie phase wave has \(\frac{\omega }{c}\,=\,{k}_{1}\) and k = β v k 1 , where \({k}_{1}\,=\,\frac{{k}_{{{{\rm{o}}}}}}{\sqrt{1-{\beta }_{v}^{2}}}\, > \,{k}_{{{{\rm{o}}}}}\) . Another phase wave of interest is the one that retains the stationary frequency ω = ω o , which occurs when \({\beta }_{u}\,=\,\frac{1}{{\beta }_{v}}(1-\sqrt{1-{\beta }_{v}^{2}})\,=\,\frac{1}{{\beta }_{v}}(1-\frac{{k}_{{{{\rm{o}}}}}}{{k}_{1}})\) , and is associated with k = k 2 = k o β u . Finally, the linear dispersion relationship has k = 0 when u = v and thus \(\frac{\omega }{c}\,=\,{k}_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\,=\,\frac{{k}_{{{{\rm{o}}}}}^{2}}{{k}_{1}}\, < \,{k}_{{{{\rm{o}}}}}\) . Throughout, setting v = 0 ( β v = 0) in these relationships yields the result shown in Fig. 1e for a dBM wave packet associated with a particle in its rest frame. Optical dBM wave packets in free space For a monochromatic optical field at ω = ω o , we have the dispersion relationship \({k}_{x}^{2}+{k}_{z}^{2}\,=\,{k}_{{{{\rm{o}}}}}^{2}\) , which is the circle at the intersection of the free-space light cone \({k}_{x}^{2}+{k}_{z}^{2}\,=\,{(\frac{\omega }{c})}^{2}\) with the horizontal iso-frequency plane ω = ω o . As described in the main text, the full circle is the spectral support for the optical field produced by a stationary planar dipole. 
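The equal-amplitude superposition over this circle, weighted uniformly in k z, can be checked numerically at an arbitrary point; the sketch below (ours, with k o = 1) confirms the radially symmetric sinc profile quoted in the main text.

```python
import numpy as np

# Stationary planar dipole: superpose monochromatic plane waves with equal
# weights per axial wave number k_z, with k_x = +/- sqrt(k_o^2 - k_z^2).
k_o = 1.0
kz = np.linspace(-k_o, k_o, 4001)
kx = np.sqrt(k_o**2 - kz**2)

x, z = 7.3, -4.1                                        # arbitrary test point
field = np.mean(np.cos(kx * x) * np.exp(1j * kz * z))   # +/- k_x branches combined
r = np.hypot(x, z)

# Expected profile: sin(k_o r)/(k_o r), i.e. radially symmetric in (x, z).
assert np.isclose(field.real, np.sin(k_o * r) / (k_o * r), atol=1e-3)
assert abs(field.imag) < 1e-3
print("equal-weight circular superposition = sin(k_o r)/(k_o r)")
```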
When the source and detector are in relative motion along the z axis at velocity v , the initially horizontal iso-frequency plane in \(({k}_{x},{k}_{z},\frac{\omega }{c})\) space is tilted by angle θ with respect to the k z axis, where tan θ = β v (ref. 42 ). The resulting spectral constraint at the intersection with the light cone (that conforms to a dBM wave packet) is $${k}_{z}={k}_{+}+\frac{\omega -{\omega }_{+}}{\widetilde{v}}=\frac{1}{{\beta }_{v}}\left(\frac{\omega }{c}-{k}_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\right)=\frac{1}{{\beta }_{v}}\left(\frac{\omega }{c}-\frac{{k}_{{{{\rm{o}}}}}^{2}}{{k}_{1}}\right),$$ (6) where \({k}_{+}\,=\,\frac{{\omega }_{+}}{c}\) is the point on the light line \({k}_{z}\,=\,\frac{\omega }{c}\) intersecting with the tilted spectral plane, and we make use of the same definitions of k o , k + and k 1 as above for dBM wave packets. Eliminating k z from these two relationships (the light cone and tilted spectral plane) yields the spectral projection onto the \(({k}_{x},\frac{\omega }{c})\) plane in the form of an ellipse (equation ( 1 ) and Fig. 2b ). We can also obtain the spectral projection onto the ( k x , k z ) plane by eliminating ω : substituting \(\frac{\omega }{c}\,=\,{\beta }_{v}{k}_{z}+{k}_{{{{\rm{o}}}}}\sqrt{1-{\beta }_{v}^{2}}\) from the spectral constraint into \({k}_{x}^{2}+{k}_{z}^{2}\,=\,{(\frac{\omega }{c})}^{2}\) yields the ellipse \(\frac{{k}_{x}^{2}}{{k}_{{{{\rm{o}}}}}^{2}}+\frac{1}{{({{\Delta }}{k}_{z}/2)}^{2}}{({k}_{z}-{\beta }_{v}{k}_{1})}^{2}\,=\,1\) , where Δ k z = 2 k 1 and β v k 1 is the central axial wave number. Substituting \({k}_{x}\,=\,\frac{\omega }{c}\sin \varphi\) and \({k}_{z}\,=\,\frac{\omega }{c}\cos \varphi\) , the propagation angle is \(\cos \varphi (\omega )\,=\,\{1-\frac{{\omega }_{{{{\rm{o}}}}}}{\omega }\sqrt{1-{\beta }_{v}^{2}}\}/{\beta }_{v}\) , ω ≠ 0, which is not differentiable at φ = 0 or φ = π—corresponding to the maximum and minimum points on the ellipses in the \(({k}_{x},\frac{\omega }{c})\) or ( k x , k z ) planes. Optical dBM wave packets in the presence of anomalous GVD In the presence of anomalous GVD, the wave number in the medium expanded at around ω o is \(k\,=\,n(\omega )\omega /c\,=\,{n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}+{{\varOmega }}/{\widetilde{v}}_{{{{\rm{m}}}}}-\frac{1}{2}| {k}_{2{{{\rm{m}}}}}| {{{\varOmega }}}^{2}\) ; here Ω = ω − ω o and n ( ω ) is the frequency-dependent refractive index. The quantities n m , \({\widetilde{v}}_{{{{\rm{m}}}}}\) and k 2m are evaluated in the medium at ω = ω o : n m = n ( ω o ) is the refractive index, \({\widetilde{v}}_{{{{\rm{m}}}}}\,=\,1/\frac{\mathrm{d}k}{\mathrm{d}\omega }{\left\vert \right.}_{{\omega }_{{{{\rm{o}}}}}}\) is the group velocity and \({k}_{2{{{\rm{m}}}}}\,=\,\frac{{\mathrm{d}}^{2}k}{\mathrm{d}{\omega }^{2}}{\left\vert \right.}_{{\omega }_{{{{\rm{o}}}}}}\,=\,-| {k}_{2{{{\rm{m}}}}}|\) is the negative-valued GVD coefficient in the anomalous dispersion regime. In the small-angle (paraxial) approximation $${k}_{z}=\sqrt{{k}^{2}-{k}_{x}^{2}}\approx {n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}+\frac{{{\varOmega }}}{{\widetilde{v}}_{{{{\rm{m}}}}}}-\frac{1}{2}| {k}_{2{{{\rm{m}}}}}| {{{\varOmega }}}^{2}-\frac{{k}_{x}^{2}}{2{n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}},$$ (7) and the light line ( k x = 0) is now curved (Fig. 2c ).
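A one-line symbolic check (ours) confirms the sign of this curvature, which is what allows the spectrum to be delimited at both ends in the anomalous regime:

```python
import sympy as sp

# Curvature of the k_x = 0 light line in Eq. (7); k2 stands for |k_2m| > 0.
Om = sp.symbols('Omega')
n_m, k_o, v_m, k2 = sp.symbols('n_m k_o v_m k2', positive=True)
kz_lightline = n_m * k_o + Om / v_m - k2 / 2 * Om**2
assert sp.diff(kz_lightline, Om, 2) == -k2      # negative: concave light line
print("d^2 k_z / d Omega^2 = -|k_2m| < 0, as sketched in Fig. 2c")
```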
To produce a dBM wave packet, we impose the spectral constraint $${k}_{z}={n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}}+\frac{{{\varOmega }}}{\widetilde{v}},$$ (8) which intersects with the light line at two points: Ω = 0 ( ω = ω o ) and \(\frac{{{\varOmega }}}{c}\,=\,-2{k}_{{{{\rm{a}}}}}\frac{1-{\beta }_{v}^{{\prime} }}{{\beta }_{v}^{{\prime} }}\) ; here \({\beta }_{v}^{{\prime} }\,=\,\frac{{\beta }_{v}}{{\beta }_{{{{\rm{m}}}}}}\,=\,\frac{\widetilde{v}}{{\widetilde{v}}_{{{{\rm{m}}}}}}\) , \(\widetilde{v}\, < \,{\widetilde{v}}_{{{{\rm{m}}}}}\) and \({k}_{{{{\rm{a}}}}}\,=\,{({c}^{2}| {k}_{2{{{\rm{m}}}}}| {\beta }_{{{{\rm{m}}}}})}^{-1}\,=\,\frac{{\omega }_{{{{\rm{a}}}}}}{c}\) . The maximum temporal bandwidth compatible with dispersion-free propagation in this dispersive medium at group velocity \(\widetilde{v}\) is thus \(\frac{{{\Delta }}\omega }{c}\,=\,2{k}_{{{{\rm{a}}}}}\frac{1-{\beta }_{v}^{{\prime} }}{{\beta }_{v}^{{\prime} }}\) , and the corresponding maximum bandwidth of the axial wave number is \({{\Delta }}{k}_{z}\,=\,2\frac{{k}_{{{{\rm{a}}}}}}{{\beta }_{{{{\rm{m}}}}}}\frac{1-{\beta }_{v}^{{\prime} }}{{\beta }_{v}^{{\prime} 2}}\) ; therefore, \(\frac{{{\Delta }}\omega /c}{{{\Delta }}{k}_{z}}\,=\,{\beta }_{{{{\rm{m}}}}}{\beta }_{v}^{{\prime} }\,=\,{\beta }_{v}\) . We can now obtain the spectral projections onto the \(({k}_{x},\frac{\omega }{c})\) and \(({k}_{z},\frac{\omega }{c})\) planes for the optical dBM wave packet in the presence of anomalous GVD, just as we did for their counterparts in free space. By equating equations ( 7 ) and ( 8 ), we eliminate k z and obtain, in the \(({k}_{x},\frac{\omega }{c})\) plane, the ellipse \(\frac{{k}_{x}^{2}}{{k}_{x,\max }^{2}}+\frac{{(\omega -{\omega }_{{{{\rm{c}}}}})}^{2}}{{({{\Delta }}\omega /2)}^{2}}\,=\,1\) , where the central frequency is \({\omega }_{{{{\rm{c}}}}}\,=\,{\omega }_{{{{\rm{o}}}}}-\frac{{{\Delta }}\omega }{2}\) , \({k}_{x,\max }^{2}\,=\,{\sigma }_{{{{\rm{m}}}}}{(\frac{{{\Delta }}\omega /2}{c})}^{2}\) and σ m = n m c ω o ∣ k 2m ∣ is a dimensionless GVD parameter. Similarly, we can obtain the spectral projection onto the ( k x , k z ) plane by substituting \({{\varOmega }}\,=\,\widetilde{v}({k}_{z}-{n}_{{{{\rm{m}}}}}{k}_{{{{\rm{o}}}}})\) from the spectral constraint in equation ( 8 ) into the light cone in equation ( 7 ) to obtain the ellipse: \(\frac{{k}_{x}^{2}}{{k}_{x,\max }^{2}}+\frac{{({k}_{z}-{k}_{{{{\rm{c}}}}})}^{2}}{{({{\Delta }}{k}_{z}/2)}^{2}}\,=\,1\) , where k c is the centre of the k z span. Details of experimental setup and spectral measurements The field configuration (Fig. 2c ) is produced via the angular dispersion synthesizer (Fig. 3) 32 . Starting with femtosecond laser pulses (central wavelength, λ = 1,064 nm; bandwidth, Δ λ = 20 nm; and pulse width, Δ T ≈ 100 fs; Spark Lasers, ALCOR), a diffraction grating (1,200 lines mm –1 ) spatially resolves the spectrum, and a cylindrical lens (focal length, f = 500 mm) collimates the wavefront before incidence on a reflective, phase-only SLM (Meadowlark, E19X12). The SLM deflects each wavelength λ at angles ± φ ( λ ) according to the elliptical spatio-temporal spectrum given in equation ( 3 ). The wavefront retro-reflected from the SLM returns to the grating, and the optical dBM wave packet is formed. The spatio-temporal spectrum is acquired by directing a portion of the spectrally resolved wavefront reflecting from the SLM to a spatial Fourier transform (Fig. 3 ), consequently yielding the spatio-temporal spectral projection onto the ( k x , λ ) plane.
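As an aside to the setup description, the two intersection points quoted after equation (8) follow from equating equations (7) (at k x = 0) and (8); a short symbolic check (ours):

```python
import sympy as sp

# Intersections of the constraint (Eq. (8)) with the curved light line (Eq. (7)
# at k_x = 0); the n_m k_o terms cancel, and k2 stands for |k_2m| > 0.
Om = sp.symbols('Omega')
k2, v, v_m = sp.symbols('k2 v v_m', positive=True)     # |k_2m|, v~, v~_m

roots = sp.solve(Om / v_m - k2 / 2 * Om**2 - Om / v, Om)

# Expected: Om = 0 and Om = -2 (1/v - 1/v_m)/k2, equivalent to the k_a form of
# the maximum bandwidth quoted above (Delta omega/c = 2 k_a (1 - b')/b').
target = -2 * (1 / v - 1 / v_m) / k2
assert any(sp.simplify(r - target) == 0 for r in roots)
print("bandwidth-limiting intersection confirmed")
```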
We then obtain the spectral projection onto the (k_z, λ) plane in the medium via the relationship \(k_z(\omega) = \sqrt{\left(n(\omega)\frac{\omega}{c}\right)^2 - k_x^2(\omega)}\). Dispersive medium The dispersive sample that we exploit comprises a pair of chirped Bragg mirrors (Edmund 12-335) that provide anomalous group-delay dispersion. By adjusting the separation between the two mirrors and the incident angle of the wave packet, we can control the number of bounces off the mirrors, thereby producing an anomalous GVD coefficient \(k_{2\mathrm{m}} \approx -500\) fs² mm⁻¹. Because the thickness of the mirrors is negligible with respect to the free-space gap separating them, we can take \(\widetilde{v}_{\mathrm{m}} \approx c\) (\(\widetilde{n}_{\mathrm{m}} \approx 1\)). Reconstruction of spatio-temporal profiles of dBM wave packets To reconstruct the spatio-temporal intensity profile of a dBM wave packet I(x, z; τ) at a fixed axial plane z, we make use of the interferometric arrangement (schematic shown in Fig. 3). We bring together two wave packets: the synthesized optical dBM wave packet and a reference plane-wave pulse taken from the initial laser pulse 30. An optical delay τ is placed in the path of the reference pulse. When the dBM wave packet and reference pulse overlap in space and time, we observe high-visibility spatially resolved interference fringes at the CCD camera placed in their common path. As we sweep the optical delay τ (thus reducing the overlap between the dBM wave packet and the reference pulse), the interference visibility drops. We make use of the recorded visibility along x and τ to reconstruct the wave-packet intensity profile I(x; τ). Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. | University of Central Florida College of Optics and Photonics researchers achieved the first observation of de Broglie-Mackinnon wave packets by exploiting a loophole in a 1980s-era laser physics theorem. The research paper by CREOL and Florida Photonics Center of Excellence professor Ayman Abouraddy and research assistant Layton Hall has been published in the journal Nature Physics. Observation of optical de Broglie–Mackinnon wave packets highlights the team's research using a class of pulsed laser beams they call space-time wave packets. In an interview with Dr. Abouraddy, he provides more insight into his team's research and what it may hold for the future. You accomplished several 'firsts' during this phase of your research. Will you provide some history of the theoretical ideas that brought you here? In the early days of the development of quantum mechanics almost 100 years ago, Louis de Broglie made the crucial conceptual breakthrough of identifying waves with particles, sometimes called wave-particle duality. However, a crucial dilemma was not resolved. Particles are spatially stable: their size does not change as they travel; waves, by contrast, change, spreading in space and time. How can one construct a model out of the waves suggested by de Broglie that nevertheless corresponds accurately to a particle? In the 1970s, L. Mackinnon proposed a solution by combining Einstein's special theory of relativity with de Broglie's waves to construct a stable 'wave packet' that does not spread and can thus accompany a traveling particle.
This proposal went unnoticed because there was no methodology for producing such a wave packet. In recent years, my group has been working on a new class of pulsed laser beams that we have called 'space-time wave packets,' which travel rigidly in free space. In our recent research, Layton extended this behavior to propagation in dispersive media, which normally stretch optical pulses, except for space-time wave packets, which resist this stretching. He recognized that the propagation of space-time wave packets in a medium endowed with a special kind of dispersion (so-called 'anomalous' dispersion) corresponds to Mackinnon's proposal. In other words, space-time wave packets hold the key to finally achieving de Broglie's dream. By carrying out laser experiments along these lines, we observed for the first time what we have called de Broglie-Mackinnon wave packets and verified their predicted properties. What is unique about your results? There are several unique aspects of this paper. This is the first example of a pulse propagating invariantly in a medium with anomalous dispersion. In fact, a well-known theorem in laser physics from the 1980s purports to prove that such a feat is impossible. We found a loophole in that theorem that we exploited in designing our optical fields. Also, all previous pulsed fields that propagate without change have been X-shaped. It has long been theorized that O-shaped propagation-invariant wave packets should exist, but they have never been observed. Our results reveal the first observed O-shaped propagation-invariant wave packets. The U.S. Office of Naval Research is supporting your research. How are your findings useful to them and others? We don't know yet exactly. However, these findings have practical consequences in terms of the propagation of optical pulses in dispersive media without suffering the deleterious impact of dispersion. These results may pave the way to optical tests of the solutions of the Klein-Gordon equation for massive particles, and may even lead to the synthesis of non-dispersive wave packets using matter waves. This would also enable new sensing and microscopy techniques. What are the next steps? This work is a part of a larger study of the propagation characteristics of space-time wave packets. This includes long-distance propagation of space-time wave packets that we are testing at UCF's Townes Institute Science and Technology Experimentation Facility (TISTEF) on Florida's space coast. From a fundamental perspective, the optical spectrum that we have used in our experiments lies on a closed trajectory. This has never been achieved before, and it opens the path to studying topological structures of light on closed surfaces. | 10.1038/s41567-022-01876-6
Biology | Land-use change disrupts wild plant pollination on a global scale | Joanne M. Bennett et al, Land use and pollinator dependency drives global patterns of pollen limitation in the Anthropocene, Nature Communications (2020). DOI: 10.1038/s41467-020-17751-y Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-17751-y | https://phys.org/news/2020-08-land-use-disrupts-wild-pollination-global.html | Abstract Land use change, by disrupting the co-evolved interactions between plants and their pollinators, could be causing plant reproduction to be limited by pollen supply. Using a phylogenetically controlled meta-analysis on over 2200 experimental studies and more than 1200 wild plants, we ask if land use intensification is causing plant reproduction to be pollen limited at global scales. Here we report that plants reliant on pollinators in urban settings are more pollen limited than similarly pollinator-reliant plants in other landscapes. Plants functionally specialized on bee pollinators are more pollen limited in natural than managed vegetation, but the reverse is true for plants pollinated exclusively by a non-bee functional group or those pollinated by multiple functional groups. Plants ecologically specialized on a single pollinator taxon were extremely pollen limited across land use types. These results suggest that while urbanization intensifies pollen limitation, ecologically and functionally specialized plants are at risk of pollen limitation across land use categories. Introduction Nearly 90% of flowering plants rely on animal pollinators for reproduction 1, and as a consequence, angiosperm biodiversity relies on stable mutualisms between plants and pollinators 2, 3. As the world's human population has grown, native vegetation has been converted to intensively human-managed and urbanized landscapes 4 that, along with increased use of pesticides, have demonstrably reduced pollinator abundance and diversity even in natural areas 4, 5, 6, 7, 8. Although insect declines are now recognized broadly, wild bee species may be particularly vulnerable to land-use change 9, 10, and these represent the most important pollinators of flowering plants globally 5, 11. Moreover, how plant reproduction responds to land use via any declines in pollinators has important implications for much of the world's flora 12, yet the effects of land use changes on pollen limitation of wild plant reproduction have not been evaluated on a global scale 13. The consequences of anthropogenic disturbances for pollen limitation of plant reproduction (hereafter PL) are likely to vary with degree of plant dependence on pollinators, as well as level of ecological or functional specialization 14, in addition to plant traits that reflect the evolutionary history of their interactions with their pollinators, such as floral symmetry 15, 16. For example, plant species that have evolved traits that buffer them from pollinator uncertainty, such as autofertility (i.e., self pollination in the absence of flower visitors) and functional generalization (e.g., pollination by a range of taxa or functional groups), are expected to be less prone to PL in response to anthropogenic change. While land use changes have been posited to erode ecosystem services provided by pollination, the effects of land use change on plants are likely heavily mediated by pollinator dependence.
Thus, the consequences of land use change for PL, and for how it may reshape phenotypic and genetic diversity as well as the distributions of plant species across the globe, require a more nuanced examination. The degree to which pollen receipt limits plant reproduction has been studied in thousands of independent experiments that compare fruit or seed production of flowers exposed to natural pollination with those receiving supplemental pollination. This standardized experimental approach provides important insight to assess global drivers of PL via meta-analysis while controlling for plant phylogenetic history 17, 18. Early theoretical research based on sexual selection and optimality predicts that plants should not increase seed production in response to experimental pollen addition unless they have been displaced from their evolutionary optimum 16, 19, 20, 21, possibly by anthropogenic factors. While later models have suggested that PL may represent an evolutionary equilibrium in a stochastic pollination environment where pollen quantity or quality may vary 19, 22, 23, anthropogenic changes that disrupt plant–pollinator interactions beyond historical means and variances are still expected to increase PL. Yet we do not know the extent of anthropogenic impact nor the spatial scale at which it occurs. In this study, we use phylogenetically controlled meta-analysis of 2247 studies of 1247 wild plant species across the globe (Fig. 1a) in conjunction with data on landscape conversion to determine whether there is a signature of contemporary land use on PL, and if so, whether it is dependent on the extent to which plant species rely on pollinators for reproductive success. Do high pollinator dependency and high ecological or functional pollinator specialization place plants at higher risk of PL, while autofertility or pollinator generalization buffer plant reproduction from PL, in the face of land use modification? Fig. 1: The global distribution of data from the GloPL database (a) and an interaction plot showing the interaction between land use and pollinator dependence with respect to the effect size of pollen limitation (PL) (b). The point colour indicates the dominant land use category: urban (orange), managed (purple) or natural (green) in (a, b). In the interaction plot, pollinator-dependent plants are indicated by the solid line and autofertile plants by the dashed line. The area of the plot shaded orange indicates an effect size above zero (i.e., plants are PL) and the area of the plot shaded purple indicates an effect size below zero (i.e., plants are not PL). The interaction plot illustrates the average modelled result and 95% confidence intervals (shown as error bars) from 500 bootstrapped phylogenetic meta-analyses with the response variable PL and the interaction between land use and pollinator dependence as the predictor variables. Source data are provided as a Source Data file. Full size image We show that pollinator-dependent plants in urban settings have higher PL than those in managed and natural landscapes, and that plant traits play a strong role in determining PL across different land use categories. Our results show that high-intensity land use increases PL, and that ecologically and functionally specialized plants are particularly at risk.
This work reveals that human-mediated disruptions may be a turning point for natural systems, and that conservation should focus not just on pollinators but also on the diverse wild plant communities that support them, especially in urban and natural habitats. Results Global patterns in PL PL was evident at a global scale: on average, the PL effect size in GloPL 17 is 0.49 (CI: 0.45–0.52), which equates to a 63% increase in reproduction following supplementation (Fig. 1b). We did not find significant phylogenetic signal in PL in our geographically and taxonomically diverse dataset (K = 0.31, P = 0.097). However, as a variety of plant traits related to pollination have been shown to be phylogenetically conserved 24, 25, we control for phylogenetic structure in the meta-analysis and focus on the influence of land use categories and pollinator dependency on PL. Land use categories, pollinator dependency, ecological specialization and functional specialization in our data set were well distributed across the globe (Fig. 1a) and across our plant phylogeny (Fig. 2a). Fig. 2: Phylogenetic distribution of data extracted from the GloPL database 17 (a) and interaction plots of the interaction between land use and ecological specialization (b) and land use and functional specialization (c) with respect to the effect size of pollen limitation (PL). The phylogeny is modified from the angiosperm supertree 42, and for each species the PL effect size and category of pollinator dependence, ecological specialization, and functional specialization are shown. Pollen limitation effect size in (a) is given by a bar plot, where orange bars indicate a positive effect size and dark purple bars indicate an effect size of zero or below (i.e., no PL). Pollinator dependence of plants in (a) is classified as autofertile (purple) or pollinator dependent (light green). Ecological specialization of plants in (a, b) is classified as reliant on either one (dark green), few (green) or many (light green) pollinator species. Functional specialization of plants in (a, c) is classified as exclusively bee pollinated (dark blue), exclusively pollinated by another functional group (blue) or pollinated by multiple functional groups (light blue). Interaction plots represent the average modelled result and 95% confidence intervals (shown as error bars) from 500 bootstrapped phylogenetic meta-analyses with pollen limitation as the response variable and the interaction between land use and ecological specialization or functional specialization as the predictor variables. Source data are provided as a Source Data file. Full size image Land-use intensity The effects of land use on PL were influenced by pollinator dependency (Supplementary Tables 1 and 2; Fig. 1b; Q_M = 13,294, df = 6, P < 0.001). Autofertile plants were not PL under any land use category (none significantly different from zero, Fig. 1b, Supplementary Table 1). However, for pollinator-dependent plants, the extent of PL depended on land use, with PL greatest in urban locations, followed by natural and managed vegetation (Fig. 1b; Supplementary Tables 1 and 2). Although the frequency of studies in urban landscapes is low, the result is robust and is derived from 93 studies conducted in 24 urban centers across the globe (Fig. 1a). Ecological and functional specialization Plants pollinated by only one pollinator taxon have higher PL than those pollinated by few or many pollinator taxa (Supplementary Table 3; Fig. 2a).
Functional specialization significantly modified responses of PL to land use (Supplementary Tables 4 and 5; Q_M = 4518, df = 6, P < 0.001). Specifically, exclusively bee-pollinated plants were significantly more PL in natural landscapes than in managed landscapes (Fig. 2c, Supplementary Table 5), but the opposite was the case for plants exclusively pollinated by another functional group or those serviced by multiple functional groups. For these, managed habitats led to higher PL than natural ones (Fig. 2c, Supplementary Table 5). Discussion Our finding of higher PL in urban settings suggests that urbanization (e.g., fragmentation, impervious surfaces, and pollution and traffic) is highly disruptive to plant–pollinator interactions 26. This result reflects recent reports suggesting that although pollinator richness can be high in urban areas, pollinators tend to service a lower proportion of the available plant species than in managed and natural sites 27. Plants in managed and natural habitats are similarly pollen limited (Supplementary Table 1; Fig. 1b). Variation in intensity of management and/or in degree of degradation of natural habitats could be obscuring potential differences in these land use categories, or it is possible that differences in PL depend on ecological and functional specialization on pollinators. For example, although many stressors associated with managed landscapes are known to lead to higher PL 14, heterogeneously managed landscapes can also increase pollinator diversity and therefore could lower PL 10. Furthermore, the asymmetric nature of plant–pollinator interactions, in which specialist plant species are often pollinated by generalist pollinators, may make them resilient to some disturbance 28. In both managed and natural landscape types, we found that the most ecologically specialized plants (those pollinated by only one pollinator taxon) were generally more pollen limited than those pollinated by few or many pollinator taxa (Supplementary Table 3; Fig. 2a). These results indicate that, regardless of contemporary land use, reproduction by highly specialized plant species, such as orchids, and endangered endemic species, such as Daphne rodriguezii (Thymelaeaceae) and Oxypetalum mexiae (Apocynaceae), is vulnerable to pollinator declines at a global scale. While insects are declining globally 5, losses are not uniform across taxa and habitat types 29, and the composition and efficiencies of pollinator fauna can differ among habitat types 30. For example, in the UK, rare bee species have strongly declined in natural habitats, while widespread generalist bees (that are dominant crop pollinators) have increased in managed habitats 29. In contrast to trends for native pollinators, global trends suggest that numbers of managed honey bee hives are increasing 31. In many managed habitats, pollination is supplemented by domesticated honey bees, and this could lower PL not only for the crop species but also for the wild plants in these settings 32. However, the addition of honey bees can have detrimental effects on other pollinating taxa, negatively impacting the plant species that rely on them 33. We expected that plants exclusively pollinated by bees might benefit from managed habitats while those specialized on other functional groups (e.g., dipterans, lepidopterans, and mammals) might not. We further expected that plants pollinated by multiple functional groups including bees (e.g., species visited by two or more orders of insects) would have low levels of PL across both land use types.
We find that exclusively bee-pollinated plants were significantly more PL in natural habitats than managed ones (Fig. 2c; Supplementary Table 5), but the opposite was the case for plants exclusively pollinated by another functional group or those serviced by multiple functional groups. For these, managed habitats led to higher PL than natural ones (Fig. 2c; Supplementary Table 5). The result of enhanced reproductive output of bee-pollinated plant species in managed areas is consistent with the finding that bee abundance is also higher in managed areas 34, thereby highlighting how understanding the pollinator crisis requires more research effort on non-bee pollinators and non-bee-pollinated plant species. Taken together, these results highlight the complex ways in which land use intensification, along with other anthropogenic forces, puts various wild plant species at risk of reproductive failure. On a global scale, we found that PL was related to the intensity of human land use and that the magnitude of the effect was modulated by plant traits that reflect their dependence and specialization on pollinators. Our results link anthropogenic disturbance and changes in pollinator services to plant reproduction and, by doing so, fill a major gap in our knowledge highlighted in the recent Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services Pollinators, Pollination and Food Production assessment 11. The magnitude of PL in pollinator-dependent plants in natural sites highlights that, to maintain healthy plant communities under widespread pollinator declines, new management approaches that incorporate natural landscapes are needed. This is particularly urgent because pollinator losses may set in motion negative feedback loops in which loss of pollinators limits plant reproduction, leading to plant population declines and, in turn, even greater pollinator declines. This may occur even for pollinators that are more resistant to anthropogenic change, e.g., generalist crop pollinators, as even these need diverse plant communities for temporal stability and diversity in floral resources, as well as diverse nesting habitat 5, 6. In the longer term, evolution toward autofertility and/or pollination generalization 35 could buffer many plant species from pollinator losses. However, evolution towards increasing reliance on generalist pollinators could result in a dead end if pollinator losses continue. On the other hand, evolution toward selfing can decrease overall genetic diversity, leaving plants vulnerable to extinction under further environmental perturbation 35. Species that self pollinate also allocate less to pollen and nectar than outcrossing species do, further reducing resource availability to pollinators 36. Recognizing that human-mediated disruptions may represent a turning point for these natural systems, conservation should focus not just on pollinators but also on the diverse wild plant communities that support them, especially in urban and natural habitats. Methods Experimental design We used data from 2247 study populations of 1247 plant species across the globe from the GloPL database 17. Each experiment compared the mean reproductive output of plants receiving supplemental pollination applied by hand with that of plants receiving natural pollination. A pollen limitation effect size was calculated as the log response ratio of reproduction following natural or supplemental pollination 2, 3: PL effect size = ln [(supplement)/(natural)].
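The effect-size calculation just described is straightforward to reproduce. Below is a minimal sketch (Python here purely for illustration; the authors' pipeline is in R) of the log response ratio together with the conventional delta-method sampling variance for such a ratio, assuming per-treatment means, standard deviations and sample sizes are available; the numbers in the example are invented.

```python
import math

def pl_effect_size(mean_sup, mean_nat, eps=0.0):
    """Pollen limitation effect size: ln(supplement / natural).

    eps mimics the small value added so that zero outcomes can be
    retained (set eps=0 to exclude zeros, as in the main analysis)."""
    return math.log((mean_sup + eps) / (mean_nat + eps))

def lnrr_variance(sd_sup, n_sup, mean_sup, sd_nat, n_nat, mean_nat):
    # Standard delta-method variance of a log response ratio.
    return (sd_sup**2 / (n_sup * mean_sup**2)
            + sd_nat**2 / (n_nat * mean_nat**2))

# Invented example: supplemented flowers set 8.2 seeds on average,
# open-pollinated flowers set 5.0 seeds.
es = pl_effect_size(8.2, 5.0)
var = lnrr_variance(2.1, 30, 8.2, 1.8, 30, 5.0)
print(f"PL effect size = {es:.3f}, sampling variance = {var:.4f}")
# Note: exp(0.49) - 1 ~ 0.63, matching the reported global mean of a
# 63% increase in reproduction following supplementation.
```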
The response variables (i.e., reproductive output in natural or supplemental flowers) were based on one of fruit set, seed set, seeds per fruit, seeds per flower, or seeds per plant. We computed a single estimate of the magnitude of PL and its variance for each unmanipulated experiment (i.e., species, population, and year of study). In simple cases, i.e., when a row related to a single species, population and year, a pooled variance was calculated following ref. 37, page 64. For cases in GloPL where data for a single species were presented across multiple rows because there were multiple time periods (e.g., season) or multiple morphs (e.g., flower colour and gender), variance was calculated following ref. 38, formulae 11.2–11.4, pages 65–66. A small value was added to all cases so that zero cases could be included in the calculations of variance. We compared results with this PL effect size to those where 0.5 was added to both of the response variables before log transformation, in cases where one or both of the response variables was zero. This leads to a slightly larger sample size (~2% increase), because points with zero values (e.g., no seed set under natural conditions) can be included. Analyses using both response variables gave the same results and the interpretations were unaffected; therefore, we only present model results from the more conservative PL effect size with zero values excluded. Land use variables We used the spatial coordinates supplied in the GloPL dataset 17 to determine land use. Land use percent cover in 12 categories (urban; agricultural crops (5 categories: C3 nitrogen-fixing, annual and perennial, and C4 annual and perennial); rangeland; pasture; primary forest; primary non-forest; secondary forest; and secondary non-forest) was extracted using the GPS location and the year of study from the Land-Use Harmonization 2 (LUH2) dataset 39, which contains annual land use states for the years 850–2100 at 0.25° × 0.25° spatial resolution. The dominant land use category surrounding each PL experiment was consolidated into three main category types: 'urban', 'managed' (agricultural crops, rangeland, and pasture) and 'natural' vegetation (primary and secondary forest or non-forest). In the LUH2 dataset 39, the rangeland classification is based on the aridity index and the human population density and could range from semi-natural vegetation grazed by livestock to intensively managed pastures, e.g., where broadleaf herbicides are applied to reduce non-grass species. For this reason, we performed analyses both with and without rangelands included in the 'managed' category but found no difference in the quantitative results; thus, we retained rangeland in the managed category presented here. We acknowledge that the broad categories of land use used here are unlikely to capture the full range of intensity of urban, managed or natural environments. However, there are clear advantages to using such broad categories of land use. First, the data are available at a global scale; second, these broad categories are relevant to all biogeographic regions. Given the large numbers of species and the vast geographic area of coverage, this leads to the expectation that general patterns should still emerge, if present. Pollinator dependency traits Plants were scored as pollinator dependent when evidence of pollinator dependence existed, that is, they were reported to be pollinator dependent, self-incompatible, or self-compatible but not autofertile, following ref. 24.
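Returning briefly to the land-use consolidation step described above: it reduces to a lookup from the LUH2 state fractions of each grid cell to the three coarse categories. A sketch using the LUH2 state names (the cover fractions in the example are invented for illustration):

```python
# LUH2 land-use states -> the three consolidated categories used here.
LUH2_TO_CATEGORY = {
    "urban": "urban",
    "c3ann": "managed", "c3per": "managed", "c4ann": "managed",
    "c4per": "managed", "c3nfx": "managed",   # the five crop types
    "pastr": "managed", "range": "managed",   # pasture and rangeland
    "primf": "natural", "primn": "natural",   # primary forest / non-forest
    "secdf": "natural", "secdn": "natural",   # secondary forest / non-forest
}

def dominant_category(fractions):
    """Collapse per-state cover fractions for one 0.25 degree grid cell
    (one study location and year) into the dominant coarse category."""
    totals = {"urban": 0.0, "managed": 0.0, "natural": 0.0}
    for state, frac in fractions.items():
        totals[LUH2_TO_CATEGORY[state]] += frac
    return max(totals, key=totals.get)

# Invented example cell: mostly cropland with some secondary forest.
cell = {"urban": 0.05, "c3ann": 0.40, "pastr": 0.15,
        "secdf": 0.30, "primf": 0.10}
print(dominant_category(cell))  # -> "managed"
```

Dropping "pastr" and "range" from the managed group reproduces the rangeland sensitivity analysis mentioned above.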
When quantitative data were not available, we scored the trait based on the authors' statements first and then considered information from additional published literature and web sources. Dioecious, distylous and tristylous species were categorized as pollinator dependent. Information on pollinator dependency status was missing for 60 records; these, along with wind-pollinated plants, were excluded from analysis. Levels of pollination specialization were scored based on the authors' determination in the original studies. The degree of ecological specialization was based on the total number of known pollinators for the plant or the number of recorded flower visitors to the plant. Plants were scored as 'one' when pollinated by one pollinator species, 'few' when pollinated by a few (2–4) species or 'many' when pollinated by many (5 or more) pollinator species, following ref. 25. The degree of functional specialization was characterized as 'exclusively bee' when pollinated by this functional group (the largest and most efficient pollinating class 10 and the majority of functionally specialized plants in our dataset), as 'exclusively other' when pollinated by a single other functional group (i.e., either flies, beetles, moths, butterflies, wasps, mammals, or birds), or as 'multiple groups' when pollinated by multiple functional groups, including bees and others. As with all meta-analyses, there will be sampling differences between studies, and these may affect our measures of ecological and functional specialization. However, the authors of each study are assumed to be the authority on their study species and we do not expect bias to occur in any particular direction. Thus, given the large sample size of our dataset, broad patterns should still emerge if present. Statistical analysis All analyses were performed in R version 3.6.3 40. We conducted phylogenetic mixed-effects meta-analyses as per the methods in refs. 24, 41, with PL as the response variable and the interaction between land use and three plant traits that relate to the level of dependence on pollinators (pollinator dependency, and ecological and functional specialization on pollinators) as predictors. We used a phylogenetic meta-analysis because, in addition to weighting effect sizes by the inverse of their variances, it incorporates a variance-covariance structure based on phylogenetic relationships to take the non-independence among species into account 18. The species-level phylogeny used in our analysis is available online as part of the GloPL database 17. To create the phylogeny, we started with the dated supertree created by Zanne et al. 42. Species that were not included in the supertree were bound to the tree when their genus was present by creating a polytomy with congeners that were present in the tree, using the congeneric.merge function from the 'pez' package in R 43. When no congener was present, as was the case for 60 of the GloPL species, we searched the literature for published phylogenies indicating closely related genera and manually grafted these species to the branch leading to the genus clade. We then pruned the supertree to only include our focal species using the drop.tip function from the 'ape' package 44. Phylogeny was modeled as a variance-covariance matrix, which assumes Brownian-motion-like evolution, using the vcv function in the ape package 44, and was included as a random effect in all models.
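The phylogenetic variance-covariance matrix just referred to (computed in R with ape's vcv) is simple to construct by hand for a toy example: under Brownian motion, the covariance of two tips equals the depth of their most recent common ancestor. A minimal Python sketch with a hypothetical three-species tree, included only to make the structure of that random effect concrete:

```python
import numpy as np

# Toy ultrametric tree: ((A:1, B:1):1, C:2); total depth = 2.
# Under Brownian motion, cov(i, j) = root-to-MRCA shared branch length.
tips = ["A", "B", "C"]
mrca_depth = {("A", "A"): 2.0, ("B", "B"): 2.0, ("C", "C"): 2.0,
              ("A", "B"): 1.0, ("A", "C"): 0.0, ("B", "C"): 0.0}

def vcv(tips, mrca_depth):
    n = len(tips)
    V = np.zeros((n, n))
    for i, a in enumerate(tips):
        for j, b in enumerate(tips):
            key = (a, b) if (a, b) in mrca_depth else (b, a)
            V[i, j] = mrca_depth[key]
    return V

print(vcv(tips, mrca_depth))
# [[2. 1. 0.]
#  [1. 2. 0.]
#  [0. 0. 2.]]
# This matrix enters the meta-analysis as the correlation structure of the
# species-level random effect, downweighting closely related species.
```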
Because differences in experimental design affect the estimated magnitude of PL (for a review of their effects, see ref. 45), we included in each model a random effect to control for differences in the response variables measured (fruit set, seed set, seeds per plant, seeds per flower, and seeds per fruit), the level at which the treatment was applied (whole plant, partial plant, and flower) and whether or not bags were applied to the plants. AIC model selection confirmed our strong a priori reasons for including all random and fixed effects used in each model. Overdispersion is common in meta-analysis, and it is often necessary to include a random effect for each effect size (τ²) as a correction. To test whether overdispersion is present and whether it affects our results, we re-ran our models with the addition of a random effect for τ². We found that our main result is robust to its inclusion and that none of our observed patterns changed (see Supplementary Tables 6–11). The rma.mv function in the metafor package version 2.4-0 was used to fit all models 46. All models presented here were fit using ML, and no quantitative differences were detected when compared with models fit using REML. To test for significant interactions between predictors, we used the Holm adjustment for multiple comparisons 47 to conduct planned comparisons among means when appropriate. Profile plots of the variance component of each model were examined to ensure there was a clear peak in likelihood at the ML estimate, indicating that the model had converged. Residuals were checked for normality and model fit. For each figure presented in the text, we derived 95% confidence intervals around the model coefficients. We used a nonparametric bootstrap approach in which each of our models was bootstrapped 500 times, sampling with replacement records from each interaction (each group/combination formed by the two fixed effects, i.e., land use and the three levels of dependence/specialization on pollinators). Marginal means for each group presented in Fig. 1 were extracted by running bootstrapped models fit with ML without the intercept. Averaged bootstrapped model results are shown in the text. All natural populations in GloPL with geographic coordinates, data on all random effects and known pollinator dependency were included in the modelled analysis. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The GloPL dataset is published in Scientific Data and publicly available in the Dryad repository. The Land-Use Harmonization 2 (LUH2) dataset 39 is also publicly available. Source data are provided with this paper. Code availability The associated analysis code and complementary functional and ecological data are archived on GitHub. | Human changes to the environment have been linked to widespread pollinator declines. New research published in Nature Communications shows that intensive land use will further decrease pollination and reproductive success of wild plants, especially of those plants that are highly specialized in their pollination. An international team of scientists led by researchers from the German Center for Integrative Biodiversity Research (iDiv), Martin Luther University Halle-Wittenberg (MLU) and the Helmholtz Center for Environmental Research (UFZ) performed a global data analysis that provided conclusive evidence of the links between human land use and the pollination of plants.
Plants provide resources, including food and shelter, to all other living organisms on Earth. Most plants need pollinators to reproduce, which is why mounting research showing widespread pollinator declines is concerning. Despite concerns that we are facing a pollination crisis, we do not know which types of plants will be most affected by pollinator declines and under which conditions declines in plant reproductive success are to be expected. Changes in land use are the leading threat to plants and pollinators. However, different groups of pollinators may have different responses to changes in land use. For example, some farming practices may increase honeybee abundance on the one hand but reduce the abundance of other pollinators, such as wild bees and butterflies, on the other. Dr. Joanne Bennett, who led the research as a postdoctoral researcher at iDiv and MLU and is now working at the University of Canberra, said: "Plants and their pollinators have evolved relationships over millions of years. Humans are now changing these relationships in just a few years." A global data set on land use and pollen limitation To determine if land use affects pollen limitation, an international team of researchers set out to compile a global dataset that quantified the degree to which pollination limits plant reproductive success. For this, they analyzed thousands of published pollen supplementation experiments: experiments that estimate the magnitude of pollen limitation by comparing the number of seeds produced by naturally pollinated flowers with that produced by flowers receiving hand-supplemented pollen. Joanne Bennett said: "If naturally pollinated plants produce fewer fruits or seeds than plants that have received additional pollen by hand, then the reproduction of that plant population is limited; this is called pollen limitation. In this way, pollen limitation experiments provide an unparalleled opportunity to link plant reproductive function to the health of pollination services." It was almost 20 years ago when Prof Dr. Tiffany Knight, Alexander von Humboldt professor at MLU and head of the Spatial Interaction Ecology research group at iDiv and UFZ, started to compile the first data sets. Supported by iDiv's synthesis center sDiv, Knight and Bennett took the project to a new level by forming a group of 16 experts from all over the world to expand the dataset and generate new ideas. The researchers started with 1,000 experiments on 306 plant species from Europe and North America. To date, the dataset includes data from over 2,000 experiments and more than 1,200 plants and has a more global distribution. Tiffany Knight said: "One of the most rewarding components of this research has been the collaboration with the international team, and the inclusion of studies published in languages other than English." Specialists and plants in intensely used landscapes highly pollen limited Ultimately, these data allowed for a global meta-analysis, which showed that wild plants in intensely used landscapes, such as urban areas, are highly pollen limited. The researchers found that plants that are specialized in their pollination are particularly at risk of pollen limitation, but this varies across the different land-use types and is based on which pollinator taxa they are specialized on. For example, plants specialized on bees were less pollen limited in agriculturally managed lands than those specialized on other pollinator types. This could be because domesticated honey bees support the pollination of wild plants in these lands.
The results show conclusively that intensive land use is linked to lower plant reproductive success due to lower pollination success. This suggests that future land-use change will decrease the pollination and reproductive success of plants, and can cause plant communities to become more dominated by species that are generalized in their pollination. | 10.1038/s41467-020-17751-y |
Nano | Quantum dots illuminate transport within the cell | Eugene A. Katrukha, et al. Probing cytoskeletal modulation of passive and active intracellular dynamics using nanobody-functionalized quantum dots. Nature Communications, 21 March 2017, DOI: 10.1038/NCOMMS14772 Journal information: Nature Communications | http://dx.doi.org/10.1038/NCOMMS14772 | https://phys.org/news/2017-03-quantum-dots-illuminate-cell.html | Abstract The cytoplasm is a highly complex and heterogeneous medium that is structured by the cytoskeleton. How local transport depends on the heterogeneous organization and dynamics of F-actin and microtubules is poorly understood. Here we use a novel delivery and functionalization strategy to utilize quantum dots (QDs) as probes for active and passive intracellular transport. Rapid imaging of non-functionalized QDs reveals two populations with a 100-fold difference in diffusion constant, with the faster fraction increasing upon actin depolymerization. When nanobody-functionalized QDs are targeted to different kinesin motor proteins, their trajectories do not display strong actin-induced transverse displacements, as suggested previously. Only kinesin-1 displays subtle directional fluctuations, because the subset of microtubules used by this motor undergoes prominent undulations. Using actin-targeting agents reveals that F-actin suppresses most microtubule shape remodelling, rather than promoting it. These results demonstrate how the spatial heterogeneity of the cytoskeleton imposes large variations in non-equilibrium intracellular dynamics. Introduction Cells have a highly structured internal organization that depends on the cytoskeleton, a network of biopolymers, including F-actin and microtubules, that shapes the cell and enables active intracellular transport driven by a variety of motor proteins that can walk along F-actin (that is, myosins) or microtubules (that is, kinesins and dyneins). Intracellular transport processes can be categorized as either active or passive, depending on whether they are driven by chemical energy or by thermal excitation, respectively. While linear motion driven by molecular motors is clearly active, recent studies have argued that the apparent passive dynamics of intracellular particles is largely driven by the active contractile dynamics of the actomyosin network, rather than by thermal excitation 1 , 2 , 3 . Although one study suggested that even the apparent diffusion of individual proteins depends on this non-thermal mixing 2 , it has remained unclear for which particle sizes fluctuations of the viscoelastic actomyosin network would dominate over thermal diffusion. In addition, given that actin is enriched near the cortex, non-thermal contributions are likely to depend on the position within the cell, but how the highly heterogeneous intracellular organization of F-actin affects passive particle dynamics has remained largely unexplored. The heterogeneous composition of the cytoplasm should also affect active point-to-point transport inside cells 4 , 5 , 6 . Recently, the intracellular behaviour of custom-synthesized single-walled carbon nanotubes (SWNT) was probed over several orders of magnitude in time and space 3 . It was reported that their dynamics was strongly affected by the presence of an active viscoelastic cytoskeleton and that an active stirring mechanism induced large sideways fluctuations of SWNTs that were transported over microtubules by kinesin-1. 
SWNTs are non-isotropic rods and have a polydisperse length distribution that spans at least one order of magnitude (100–1,000 nm), which will result in disperse hydrodynamic properties that could hamper proper analysis and interpretation. Although they are only 1 nm in diameter, their stiffness is comparable to that of actin filaments 3. Given these lengths and stiffness, their dynamics are expected to be governed by the actin network, which has a characteristic mesh size of ∼100 nm 7. Importantly, earlier work reported that kinesin-1 moves on a specific subset of microtubules and, therefore, the generality of these observations has remained unclear 8. Here we use quantum dots (QDs) to examine how the actin cytoskeleton modulates active and passive transport in different subcellular zones. QDs are an attractive alternative to SWNTs and plastic beads, because they are widely available, can be tuned to emit in the visible range and have a monodisperse size distribution. Although QDs have been used previously to study intracellular dynamics, their widespread use in intracellular applications has so far been hampered by challenges in intracellular delivery and functionalization 9. We combine adherent cell electroporation and nanobody-based functionalization to deliver QDs to the cytoplasm, and analyse the dynamics of both non-functionalized QDs and QDs that are targeted to different subsets of microtubules by binding to different kinesin family members. We report that the diffusion of non-functionalized QDs is highly heterogeneous and dependent on the presence of F-actin. Most microtubule-associated QDs propelled by different types of kinesin do not experience strong (actin-induced) transverse fluctuations. Kinesin-1-bound QDs, however, display directional fluctuations, because the subset of modified microtubules used by this motor undergoes more prominent undulations. This shape remodelling is not caused by active contractility of the actomyosin network but is instead suppressed by it. These results demonstrate how the heterogeneity of the mammalian cytoskeleton imposes a large spatial variation in non-equilibrium cellular dynamics, which precludes straightforward application of physical approaches that model the cytoplasm as a homogeneous and isotropic viscoelastic medium. Results Rapid and slow diffusion of cytoplasmic QDs As small intracellular probes, we used commercially available QD625–streptavidin conjugates with a measured Stokes radius of 14.4 nm (Supplementary Fig. 1a). To explore the passive dynamics of QDs inside the cytoplasm, we used adherent cell electroporation for intracellular delivery of particles. After optimization of the electroporation parameters (Supplementary Table 1), we observed that most cells routinely enclosed 10–100 QDs (Fig. 1a,b). Upon internalization, we observed a strong suppression of QD blinking, manifested as an increase in the average duration of the on-state (Supplementary Fig. 1c). This is likely the result of the mildly reducing environment of the cytoplasm, which is known to reduce QD blinking 10. Figure 1: Non-isotropic diffusion of intracellular QDs. (a) COS-7 cell fixed 30 min after electroporation with QDs and stained with phalloidin. Maximum projection of a z-stack acquired with the spinning disk microscope. Scale bar, 10 μm. (b) Lateral Y–Z (left) and X–Z maximum projection views of cross sections along the boxes depicted in a. Scale bar, 5 μm.
(c) Stills from a stream recording of a GFP-actin-expressing COS-7 cell electroporated with QDs. The interval between frames is 2.4 ms. Blue and red arrows indicate slow and fast cytosolic diffusion of QDs, respectively. The complete trajectories (66 frames, 156 ms) are depicted in the right panel. Scale bar, 2 μm. (d) Distribution of the mean square displacement at one frame (2.4 ms) delay for the trajectories of QDs in control (black, N = 485, 10 cells) and latrunculin A-treated cells (red, N = 474, 13 cells). Dashed line marks the threshold used to separate trajectories of fast and slow diffusing particles in f. (e) Example trajectories of intracellular QDs, color-coded according to the corresponding value of the mean square displacement at one frame (2.4 ms) delay. (f) Average MSD of the slow (blue solid line, N = 318) and fast (red solid line, N = 167) fractions of QD trajectories. Error bars represent s.e.m. Dashed line shows the fit MSD(τ) = 4Dτ + d_x², where the offset d_x² = (35 × 35) nm² reflects the squared average localization precision. (g) Density of the fast and slow diffusing subpopulations of QDs in cells, imaged either near the coverslip or 1 μm above (N = 6 cells per condition). Error bars represent s.e.m. (h) Proposed origin of the slow and fast diffusing subpopulations of QDs. Full size image Rapid live imaging with 2.4 ms intervals of QDs located within the cell's lamella, followed by single-particle tracking and mean square displacement (MSD) analysis, revealed fast and slow diffusing subpopulations of QDs (Fig. 1c–e and Supplementary Movie 1), which appeared as two separate peaks in the distribution of the mean squared frame-by-frame displacement (Fig. 1d). For some QDs we also observed transitions from slow to fast mobility regimes and back (Supplementary Fig. 1b). The fraction of slowly diffusing QDs was strongly reduced after treatment with latrunculin A, an inhibitor of actin polymerization (Fig. 1d). The average MSD curves for the fast and slow fractions yielded diffusion coefficients of D_fast = 10.1 μm² s⁻¹ and D_slow = 0.06 μm² s⁻¹, respectively (Fig. 1f). Thus, rapid intracellular tracking of QDs revealed two classes of QD diffusion, the faster of which could be promoted by actin destabilization. The increased diffusion constant upon actin depolymerization suggests that slowly diffusing QDs are trapped within the F-actin-rich cellular cortex at the inner face of the plasma membrane. Optical cross sections of cells labelled for F-actin show that F-actin density is high close to the cell membrane, but strongly decreases at 1 μm depth into the cell (Fig. 1b). We therefore also measured the density of fast and slow moving particles at 1 μm above the cell's bottom cortex and compared it with that close to the coverslip (Fig. 1g). Indeed, the density of rapidly diffusing QDs increased from 0.055 to 0.11 μm⁻² when moving 1 μm deeper into the cell, whereas the density of slowly diffusing QDs remained almost unchanged (0.027 to 0.024 μm⁻²). These results indicate that, upon electroporation of adherent cells, most QDs are freely moving in the internal cytoplasm, and suggest that slow diffusive QDs are trapped within the F-actin-rich cellular cortex at the inner face of the plasma membrane (Fig. 1h). In standard recordings with 100 ms intervals, rapid QD diffusion will not be detectable and observations will be biased towards slower diffusing QDs embedded in the actin meshwork (Supplementary Movie 2).
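A sketch of the trajectory analysis just described: compute a time-averaged MSD per trajectory, split the population on the MSD at one-frame delay, and read off D from the linear fit MSD(τ) = 4Dτ + d_x². The threshold and localization offset below are placeholders chosen to echo Fig. 1d,f, not the authors' exact values.

```python
import numpy as np

DT = 2.4e-3      # frame interval, s
THRESH = 0.01    # split on one-frame MSD, um^2 (placeholder, cf. Fig. 1d)

def msd(xy, max_lag):
    """Time-averaged MSD of one 2D trajectory; xy in um, shape (T, 2)."""
    return np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def classify_and_fit(trajectories, max_lag=10, loc_prec=0.035):
    """Split trajectories into fast/slow and fit D for each group."""
    groups = {"fast": [], "slow": []}
    for xy in trajectories:
        m = msd(xy, max_lag)
        groups["fast" if m[0] > THRESH else "slow"].append(m)
    taus = DT * np.arange(1, max_lag + 1)
    out = {}
    for name, curves in groups.items():
        if not curves:
            continue
        mean_msd = np.mean(curves, axis=0)
        # MSD(tau) = 4*D*tau + dx^2, dx = average localization precision
        slope = np.polyfit(taus, mean_msd - loc_prec**2, 1)[0]
        out[name] = slope / 4.0          # D in um^2/s
    return out

# Usage: classify_and_fit(list_of_xy_arrays) would be expected to return
# values near {"fast": ~10, "slow": ~0.06} for data like that in Fig. 1f.
```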
Probing kinesin motor proteins using nanobody-conjugated QDs Previous work using SWNT reported that cargoes propelled by kinesin-1 experience significant sideways fluctuations (∼0.5–1 μm), which were ascribed to myosin-driven fluctuations of the actin cytoskeleton. The generality of these observations has remained unclear, because not all microtubules are embedded within F-actin and because kinesin-1 has been reported to move over a specific subset of stabilized microtubules 8. To perform the same assay using probes of smaller size, we labelled QDs with biotinylated nanobodies (VHH GFP) against green fluorescent protein (GFP). Nanobodies are promising candidates for QD functionalization because they are small, very stable and easily produced 11 (Supplementary Fig. 1d–h, Supplementary Table 2). We delivered these nanobody-functionalized QDs into cells transfected with different constitutively active GFP-tagged motor proteins. COS-7 cells expressing kinesin-1 showed a peripheral accumulation of QDs–VHH GFP (Fig. 2a–c), and live imaging revealed numerous QDs running processively over the kinesin-1-decorated microtubules (Fig. 2c,d and Supplementary Movie 3). Similar results were obtained for kinesin-2 (KIF17-GFP-FRB), kinesin-3 (KIF1A-GFP-FRB) and kinesin-4 (KIF21B-GFP-FRB). The bright fluorescence of QDs allowed us to reach a localization precision of 3–5 nm (Supplementary Table 3), while recording the fast motion using continuous stream acquisition with 50 ms exposure without any noticeable photobleaching. All motor proteins tested in our QD transport assay showed dynamic binding to the microtubules, that is, processive movement interspersed with dissociation and cytosolic diffusion (Fig. 2d–h), and moved unidirectionally, suggesting that QDs are bound exclusively to the respective GFP-labelled kinesins 12. Figure 2: Probing specific motor proteins using nanobody-conjugated QDs reveals limited transverse fluctuations during microtubule-based runs. (a) Linkage of QDs to GFP-fused motor proteins through a GFP nanobody (top) and expected movement of individual QD–kinesin complexes along microtubules (bottom). (b) Distribution of electroporated QDs–VHH GFP inside a COS-7 cell expressing kinesin-1-GFP (cyan) and electroporated with QDs–VHH GFP (yellow). White curve indicates cell outline. Scale bar, 5 μm. (c) QDs–VHH GFP colocalize with microtubules decorated by kinesin-1 (KIF5B-GFP-FRB). Single frames from TIRFM stream recordings of a COS-7 cell expressing kinesin-1-GFP (cyan/right) and electroporated with QDs–VHH GFP (yellow). Scale bar, 2 μm. (d–g) Example trajectories (top row) and kymographs (bottom row) for QDs coupled to kinesins from different families. Scale bars, 2 μm and 2 s. (h) Average MSD of longitudinal (top) and transverse (bottom) components of directed motion segments of different kinesins (n = 104, 146, 151, 144 for Kinesin-1,2,3,4) decomposed using B-spline trajectory fitting with 1 μm control-point spacing. (i–k) Average speed (i), run length (j) and run duration (k) of the individual motor runs for the different kinesins. Error bars represent s.e.m. See Supplementary Table 3 for numeric values. Full size image To robustly extract the transient periods of directional movement from the trajectories, we developed a set of directional filtering techniques that was validated on artificial data (Supplementary Fig. 2a–f and Supplementary Methods).
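The filtering techniques themselves are detailed in the Supplementary Methods; as an illustration of the general idea only, one simple (and much cruder) criterion flags windows whose net displacement is a large fraction of their contour length. The window size and straightness threshold below are arbitrary assumptions.

```python
import numpy as np

def directional_mask(xy, window=11, straightness=0.8):
    """Flag points lying in locally directional stretches of a trajectory.

    A window is 'directional' when its end-to-end distance exceeds
    `straightness` times its contour length; this is a crude stand-in
    for the validated filters used in the paper."""
    T = len(xy)
    half = window // 2
    mask = np.zeros(T, dtype=bool)
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    for i in range(half, T - half):
        contour = steps[i - half:i + half].sum()
        net = np.linalg.norm(xy[i + half] - xy[i - half])
        if contour > 0 and net / contour >= straightness:
            mask[i - half:i + half + 1] = True
    return mask

# Runs are the connected True stretches of `mask`; the diffusive episodes
# between runs (dissociation and cytosolic diffusion) are analysed separately.
```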
The filtering was able to detect both strictly directional movements of motors with a smooth change of the velocity angle and those containing abrupt stochastic displacements of the underlying microtubules (Supplementary Fig. 2e,f). To characterize the transverse and longitudinal dynamics of directionally moving QDs relative to the microtubule axis, we estimated the position of the microtubule by fitting continuous non-periodic cubic B-splines to the QD coordinates (Supplementary Fig. 2d) and projecting the coordinates onto these curves. The flexibility of the spline curve is defined by the number of internal knots and control points placed along its arc length, and the distance between such control points sets the length scale at which microtubules are considered straight (Supplementary Fig. 2g). We found that the behaviour of the transverse component was heavily dependent on the degree of spline approximation (Supplementary Fig. 2g–j). When B-spline control points were positioned at 1 μm intervals, the amplitude of the transverse displacement diminished in comparison to 6 μm intervals, and the transverse MSD curve changed its behaviour from diffusive to highly confined (Supplementary Fig. 2h–j). For all further analyses, we chose the distance between control points equal to 1 μm to account for the highly bent microtubule shapes observed in live cells, which are relatively stable on the timescale of kinesin movement (∼1 s, Fig. 2k). MSD analysis of trajectories of different kinesin family members revealed the same highly confined transverse behaviour (Fig. 2h), whereas the longitudinal MSD increased quadratically with time delay τ, as expected for directional movement. For all directional movement episodes, we determined the average speed, run length and run duration for the different kinesin family members (Fig. 2i–k). The speeds ranged from 1 μm s⁻¹ for kinesin-1 to 2.5 μm s⁻¹ for kinesin-2, similar to results obtained for non-cargo-bound motors 8. Interestingly, reported kinesin-1 speeds measured using SWNT were threefold lower, presumably due to non-selective interactions with other cellular components 3. These results demonstrate that, unlike previously reported 3, most motor-driven particles do not undergo large sideways fluctuations. Kinesin-1-targeted microtubules display undulations As shown above, underfitting of trajectories will affect the MSD analysis of the transverse dynamics. Conversely, overfitting could result in trajectory segments with unrealistically high curvatures that no longer reflect the underlying microtubule shape. To verify the consistency of our analysis, we built the distribution of trajectory curvatures for all four motors tested and found that the curvature of about 90% of the arc length of runs is below 2 μm⁻¹ (that is, a radius of >0.5 μm, Fig. 3a,b), meaning that the fraction of highly curved segments is low. Surprisingly, there was a clear difference between the curvature distributions of kinesin-1 and the other kinesins (Fig. 3b,c). The processive runs of kinesin-1 were more curved, suggesting that this motor runs on a subset of microtubules that are more bent. Figure 3: Kinesin-1 preferably walks on acetylated microtubules that are highly curved and undergo bending deformations. (a) Example of two trajectories with the same radius of curvature R, but different average angle between the directions of consecutive displacements.
(b) Distribution of local curvatures of directed motion segments in trajectories of QDs coupled to different kinesins (n = 4,247, 8,922, 7,965, 6,041 for Kinesin-1,2,3,4). (c) Characteristic decay values from exponential fits to distributions in b. Error bars represent s.e. of fitting. (d) Average directional persistence (cosine between consecutive displacements) as a function of displacement, assuming constant average speed of kinesins (n is the same as in Fig. 2h; see also Supplementary Fig. 3a). Error bars represent s.e.m. (e) COS-7 cell expressing KIF5B-GFP (green) stained for tyrosinated (blue) and acetylated (red) tubulin. Scale bar, 10 μm. (f, g) Stills from a time-lapse recording of a COS-7 cell expressing KIF5B-GFP (f) or KIF21B-GFP (g) (green) and Tubulin-TagRFP (red). Shapes of microtubules highlighted by white arrows are traced over time (s) and color-coded as indicated. Scale bars, 2 μm. Full size image The higher curvature of kinesin-1-targeted microtubules could be caused by active remodelling forces that are selective for these microtubules. Such shape remodelling would affect the smoothness of QD trajectories, and we, therefore, examined the directional persistence of trajectories by calculating the correlation length of the angle between subsequent displacements. We found that the directional persistence was lower for kinesin-1 in comparison to other kinesins (Fig. 3d and Supplementary Fig. 3a), indicating that the amount of stochastic distortion during processive runs is higher. These results suggest that the microtubules targeted by kinesin-1 are fluctuating more than microtubules targeted by the other kinesins. It has been reported that kinesin-1 moves preferentially on a subset of stable microtubules marked by certain post-translational modifications 8. Indeed, immunocytochemistry confirmed the preference of kinesin-1 for acetylated microtubules in COS-7 cells (Fig. 3e). Consistent with our trajectory analysis, these acetylated microtubules are highly curved (Fig. 3e and Supplementary Fig. 3b). To test whether these microtubules were also fluctuating more, we imaged fluorescent microtubules together with kinesin-1, which revealed that microtubules decorated by kinesin-1 changed their shape over time by undulation-like motion (Fig. 3f–g and Supplementary Movie 4). This type of motion has been observed previously in in vitro microtubule gliding assays where immobilized microtubule motors propel partially immobilized microtubules 13 and in vivo in Xenopus melanophores 14 and epithelial cells 15, 16. These results suggest that the subset of microtubules labelled by kinesin-1 undergoes constant undulations under the action of some localized force generators that, like kinesin-1, preferentially interact with this microtubule subset. Microtubule bending fluctuations are suppressed by actin Previous work suggested that microtubule bending is caused by actomyosin contractility 3. To quantify undulating microtubule movement and the contribution of the actin cytoskeleton to it, we imaged live COS-7 cells transfected with mCherry-tubulin using spinning disk microscopy (Fig. 4a and Supplementary Movie 5). The dynamics of lateral displacements of microtubules can be easily visualized using kymographs built along lines parallel to the periphery of cells (Fig. 4b).
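Kymograph construction along such a line reduces to sampling the movie intensity along the line at every frame. A minimal sketch, assuming an aligned (T, H, W) image stack and, for simplicity, a sampling line parallel to the x axis (real analyses would allow arbitrary line orientations):

```python
import numpy as np

def kymograph(stack, row, x0, x1, width=3):
    """Build a kymograph from a (T, H, W) image stack.

    Samples a horizontal line at `row` spanning columns x0..x1, averaging
    over `width` pixels perpendicular to the line to reduce noise.
    Returns a (T, x1 - x0) array: time runs vertically, as in Fig. 4b."""
    half = width // 2
    band = stack[:, row - half:row + half + 1, x0:x1]
    return band.mean(axis=1)

# Synthetic example: a bright spot (a microtubule crossing the line)
# drifting laterally over time produces a wavy trace in the kymograph.
T, H, W = 120, 64, 256
stack = np.random.poisson(5.0, (T, H, W)).astype(float)
for t in range(T):
    stack[t, 32, 100 + int(5 * np.sin(0.1 * t))] += 50.0
kymo = kymograph(stack, row=32, x0=80, x1=180)
print(kymo.shape)  # (120, 100)
```

Lateral microtubule fluctuations then appear as sideways excursions of the traces, which is what the iMSD analysis below quantifies.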
To explore the influence of the embedding actin meshwork on the generation of these fluctuations, we treated cells with drugs having different effects: 10 μM latrunculin A to depolymerize the actin meshwork, 10 μM jasplakinolide to promote F-actin stabilization and polymerization, and 50 μM blebbistatin to inhibit the contractility of myosin II motors. We quantified their effect on the lateral microtubule fluctuations using spatiotemporal image correlation spectroscopy 17 . Image MSD (iMSD) analysis revealed ( Fig. 4c–e ) that myosin inhibition did not abolish fluctuations, while the F-actin-disrupting drug latrunculin A resulted in a twofold increase in both the speed (diffusion coefficient, Fig. 4d ) and the confinement size of fluctuations ( Fig. 4e ). Thus, microtubule fluctuations are not caused by the contracting actin network, but are instead dampened by it. Figure 4: Analysis of microtubule bending deformations under treatment with F-actin-modifying drugs. ( a ) Still from a spinning disk movie of COS-7 cells expressing mCherry-tubulin. Dashed yellow lines mark areas used to build kymographs. Scale bar, 5 μm. ( b ) Representative kymographs built from spinning disk movies of COS-7 cells transfected with mCherry-tubulin under the treatment of indicated drugs. Scale bars, 60 s (vertical) and 5 μm (horizontal). ( c ) Plot of average iMSD versus time derived from kymograph analysis ( N =11, 10, 14 and 13 for control, 10 μM latrunculin A, 10 μM jasplakinolide and 50 μM blebbistatin treatment). Error bars represent s.e.m. ( d ) Average diffusion coefficient derived from individual fitting of iMSD curves for indicated conditions. ** P <0.01 (two-tailed Mann–Whitney test), n ≥ 9 for each group. Error bars represent s.e.m. ( e ) Average confinement size derived from individual fitting of iMSD curves for indicated conditions. *** P <0.001 (two-tailed Mann–Whitney test), n ≥ 9 for each group. Error bars represent s.e.m. ( f ) Hypothetical phase diagram reflecting the behaviour of intracellular particles and cellular components of different sizes on different timescales. Full size image Discussion Based on the very slow diffusion of beads and SWNTs observed in earlier works 2 , 3 , the cytoplasm has recently been proposed to be a dense elastic network in which most particles are trapped in the actin meshwork. In this situation most diffusion-like behaviour would be established by active contractions and remodelling of the actin network and can, therefore, emerge only at longer timescales 2 , 3 . By fast tracking of non-functionalized QDs, we instead revealed two populations of diffusive QDs that differed in diffusion constant by almost two orders of magnitude; the faster fraction could be increased by F-actin depolymerization. The fast population has a diffusion constant that is surprisingly close to the diffusion constant expected for these QDs in water ( D fast ∼ 10 versus D water ∼ 16 μm 2 s −1 , Supplementary Fig. 1a ). It is likely that this fraction has been overlooked in many earlier experiments, because slower acquisitions will not detect this population and will be biased towards the more slowly diffusing QDs embedded in the actin meshwork. Detecting this subpopulation is also challenging using bulk average methods such as fluorescence recovery after photobleaching and FCS. Our results are in agreement with recent reports of two diffusive regimes in the cytoplasm of live cells, detected using GFP as a probe in combination with novel correlation analysis techniques 18 , 19 .
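The diffusion coefficients discussed here and below were obtained from MSD analysis. As a minimal sketch of that procedure (an affine fit to the first 25% of the ensemble MSD curve, with the slope divided by four for two-dimensional motion, as described in Methods), the following Python snippet is illustrative only; the function name and the simple count-weighted averaging are our assumptions rather than the authors' released code.

```python
import numpy as np

def ensemble_diffusion_coefficient(tracks, dt):
    """Estimate D from 2D trajectories via the ensemble MSD.

    tracks : list of (T_i, 2) arrays of xy positions (um)
    dt     : frame interval (s)
    Returns D in um^2/s, assuming MSD(tau) = 4*D*tau in 2D.
    """
    max_lag = max(len(tr) for tr in tracks) - 1
    msd_sum = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for tr in tracks:
        for lag in range(1, len(tr)):
            disp = tr[lag:] - tr[:-lag]
            msd_sum[lag - 1] += np.sum(disp[:, 0] ** 2 + disp[:, 1] ** 2)
            counts[lag - 1] += len(disp)
    msd = msd_sum / np.maximum(counts, 1)
    # Affine fit to the first 25% of the curve; slope/4 gives D in 2D
    n_fit = max(2, int(0.25 * max_lag))
    tau = dt * np.arange(1, n_fit + 1)
    slope, _ = np.polyfit(tau, msd[:n_fit], 1)
    return slope / 4.0
```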
The measured diffusion coefficient values allow us to estimate the viscosity of the aqueous phase of the cytoplasm to be 1.6 times higher than that of water, which is in agreement with previous estimates using bulk average techniques with small fluorescent molecules 20 . Our results support a unifying model in which the slow diffusion observed here and in earlier work reflects the dynamics in specific actin-rich cellular subdomains, such as the actin cortex. The simultaneous presence of fast diffusion is attributed to a different, central subcellular compartment containing a less dense filament network. In addition, our results highlight the influence of probe size and geometry when probing the cellular environment. From super-resolution studies it is known that actin in the lamella of COS-7 cells forms two dense horizontal layers of 30–50 nm mesh size separated vertically by ∼ 100 nm 7 . Particles of 100–300 nm in size will be trapped between these layers and their dynamics will reflect the fluctuations or remodelling of the network itself. For probe dimensions smaller than the mesh size, the assumption of a continuous viscoelastic cytoplasm is no longer valid and actin filaments will appear as discrete mechanical obstacles forming void compartments (pores) accessible for diffusion 18 , 19 ( Fig. 4f ). The ratio of D slow / D fast ∼ 0.006 allows us to estimate the average pore size to be 36 nm (ref. 21 and Supplementary Note 1 ), which is compatible with the size of the particle itself ( ∼ 28 nm). Such probes will diffuse freely and rapidly inside the actin pores on timescales below ( d mesh − d probe ) 2 /6 D ≈ (0.036 − 0.028) 2 /10 ≈ 5 μs, but will jump from one cell of the mesh to another at much longer timescales, resulting in a much slower effective diffusion constant at higher timescales ( Fig. 1f ). What implications does this finding have for the intracellular transport of cargoes driven by different kinesin motors? To study this, we probed the active linear motion of QDs labelled with different types of kinesin motors. We developed a robust method to separate transient periods of directional movement from the random movement that interspersed these episodes and also carefully examined how longitudinal motility and transverse fluctuations should be extracted from the xy -trajectories. We found that large apparent transverse fluctuations emerge if the microtubule curvature is underestimated, but are absent if the microtubule curvature is closely followed by a smoothly fitted B-spline. This latter approach is preferable, because motors typically travel along a curve within 1–3 s, which is faster than the typical curvature remodelling time that we observed. In our study we, therefore, did not observe the large transverse displacements ( ∼ 0.5–1 μm) reported earlier for kinesin-1 fused to nanorods 300–1,000 nm in length 3 . However, we did detect that the trajectories of this motor were both less smooth and more curved compared to other kinesins, which originated from its preferential binding to a subset of microtubules that underwent continuous bending and shaking. Preferential binding of kinesin-1 to modified microtubules has been previously observed, but the precise molecular origin of this selectivity has remained elusive 8 . Interestingly, we found that these preferred microtubules also undergo more active shape remodelling, again reflecting the ability of these microtubules to attract specific force generators.
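The viscosity and pore-dwell-time estimates above follow from simple relations that can be checked numerically. The sketch below uses the Stokes–Einstein relation together with the values quoted in the text; the temperature and the water viscosity are our assumptions, chosen to reproduce the quoted D water of roughly 16 μm 2 s −1 for a 28 nm probe.

```python
import numpy as np

kB, T = 1.380649e-23, 293.0            # J/K; ~20 C is our assumption
d_probe = 28e-9                        # QD diameter from the text (m)
eta_water = 1.0e-3                     # Pa*s, water at ~20 C (assumed)

# Stokes-Einstein: D = kB*T / (3*pi*eta*d) for a sphere of diameter d
D_water = kB * T / (3 * np.pi * eta_water * d_probe)
print(D_water * 1e12)                  # ~15 um^2/s, close to the quoted ~16

# Apparent viscosity of the aqueous cytoplasm from the measured fast fraction
D_fast = 10.0                          # um^2/s, measured in cells
print(D_water * 1e12 / D_fast)         # ~1.6-fold higher than water

# Dwell time inside a mesh pore, (d_mesh - d_probe)^2 / (6*D_fast)
d_mesh, D_fast_SI = 36e-9, 10e-12      # m, m^2/s
print((d_mesh - d_probe) ** 2 / (6 * D_fast_SI))  # ~1e-6 s, i.e. the
# microsecond scale estimated in the text
```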
Our experiments with actin-altering drugs provide evidence that the undulations of microtubules are not caused by contraction of the actomyosin network as suggested earlier 3 , but instead are strongly suppressed by it. These observations are in line with previous studies showing that bending of microtubules is caused by microtubule-specific force generators 14 , 15 , 16 . To illustrate our findings and aid future studies of intracellular transport and cytosol compartmentalization, we built a hypothetical ‘phase diagram’ of the heterogeneous cell environment ( Fig. 4f ). The diagram maps the boundaries of applicable physical models of the cytoplasm as a function of probe size and timescale. For example, for proteins of 2–6 nm in size the cytoplasm can be considered as a liquid on almost any timescale. The behaviour of bigger objects (20–100 nm) would depend on the timescale, but also on the position within the cell. Areas of overlapping hatching highlight ‘metastable’ conditions where two compartments with different properties coexist simultaneously, as shown here for our QD probes (timescale: 1 ms, size: 30 nm). Objects of 300–500 nm in size would be mostly stuck inside the filament network and move together with it. Further development of scalable tracers should allow precise mapping of those boundaries and a thorough description of the non-equilibrium mechanical environment of the cell 22 . Importantly, apart from the actin meshwork, other intracellular structures, such as intermediate filaments 14 , 23 and membrane organelles 21 , 24 , are also likely to affect transport processes inside cells. In summary, using novel functionalization, delivery and analysis tools, we found that the heterogeneity of the mammalian cytoskeleton imposes a large spatial variation in non-equilibrium cellular dynamics, which precludes straightforward application of physical approaches that model the cytoplasm as a viscoelastic homogeneous and isotropic medium. These results increase our understanding of the material properties of the cytoplasm, can guide future modelling approaches and could aid studies of passive delivery of nanoparticles or therapeutic agents. Methods Cell culture and transfections COS-7 cells 25 were cultured at 37 °C in DMEM/Ham’s F10 (50/50%) medium supplemented with 10% FCS and 1% penicillin/streptomycin. One to three days before transfection, cells were plated on 19 or 24 mm diameter glass coverslips. Cells were transfected with Fugene6 transfection reagent (Promega) according to the manufacturer’s protocol and grown for 16–24 h. Human HA-KIF5B(1–807)-GFP-FRB, human KIF17(1–547)-GFP-FRB, rat KIF1A(1–383) and rat KIF21B(1–415) plasmid constructs were used for transfections; see Supplementary Methods for details on these and additional cDNA constructs. Purification of recombinant nanobody Bio-VHH GFP was cloned into the pMXB10 vector using the vhhGFP4 sequence 26 . Recombinant bacterially expressed bio-VHH GFP or bio-VHH GFP (2 × ) were obtained using the IMPACT intein purification system. Induction, expression and purification of fusion proteins were performed according to the manufacturer’s instructions (New England Biolabs); see Supplementary Methods for details.
Electroporation of COS-7 cells and functionalization of QDs For electroporation of adherent COS-7 cells (all values are given for one 24 or 25 mm coverslip), 2 μl of Qdot 625 streptavidin conjugate (1 μM; A10196, Molecular Probes, Life Sciences) and 20–25 μl of purified bio-VHH GFP (0.7–0.8 μg μl −1 ) were diluted in PBS to a final volume of 200 μl. The reaction was incubated for 1 h at room temperature and then at 4 °C overnight. Cells were electroporated with the Nepa21 Electroporation system (Nepagene) using a CUY900-13-3-5 cell-culture-plate electrode with 5 mm distance between electrodes. Electroporation was performed in a six-well plate containing 1.8 ml of warm Ringer’s solution (10 mM HEPES, 155 mM NaCl, 1 mM CaCl 2 , 1 mM MgCl 2 , 2 mM NaH 2 PO 4 , 10 mM glucose, pH 7.2) and 200 μl of electroporation mix per well. Parameters for electroporation (Voltage, Interval, Decay, Number and Pulse Length) were optimized from standard settings to achieve optimal efficiency and are provided in Supplementary Table 1 . Each coverslip was electroporated with a fresh solution of QDs. The electroporation programme was applied twice, rotating the electrode 90° for the second application. Cells were then washed three times with Ringer’s solution to remove QDs from solution and either mounted with growth medium in an imaging ring chamber for immediate live-imaging experiments or returned to the growth medium and fixed at different time points. A detailed protocol is available online 27 . Particle detection and trajectory analysis Image/movie processing routines were automated using ImageJ/FIJI macros or custom-built plugins. MSD calculation, curve fitting and all other statistical and numerical data analysis were performed in Matlab (MATLAB R2011b; MathWorks) and GraphPad Prism (ver.5.02, GraphPad Software). Briefly, positions of individual QDs were determined by fitting an elliptical Gaussian, linked into trajectories using a nearest-neighbour algorithm, and manually inspected and corrected. Only tracks longer than 12 frames (for diffusion) or 50 frames (for kinesin trajectories) were used for analysis. MSD and velocity autocorrelation curves together with diffusion coefficient calculations were performed using the ‘msdanalyzer’ Matlab class 28 . Ensemble diffusion coefficients were measured as the slope of the affine regression line fitted to the first 25% of weighted average MSD curves and divided by four (assuming two-dimensional motion). Detection of motor runs and spline fitting of kinesin trajectories are described in Supplementary Methods . Data availability The data that support the findings of this study (graphs including raw data, analysed trajectories and the Matlab code used for analysis) are freely available online at the ‘figshare’ repository 29 . QD trajectories in accessible text format are deposited at . Source code of the ImageJ plugins used is available online 30 , 31 . Additional information How to cite this article: Katrukha, E. A. et al. Probing cytoskeletal modulation of passive and active intracellular dynamics using nanobody-functionalized quantum dots. Nat. Commun. 8, 14772 doi: 10.1038/ncomms14772 (2017). Publisher’s note : Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | Biophysicists from Utrecht University have developed a strategy for using light-emitting nanocrystals as markers in living cells. By recording the movements of these quantum dots, they can clarify the structure and dynamics of the cytoskeleton.
Their findings were published today in Nature Communications. The quantum dots used by the researchers are particles of semiconducting material just a few nanometres wide, and are the subject of great interest because of their potential for use in photovoltaic cells or computers. "The great thing about these particles is that they absorb light and emit it in a different colour," explains research leader Lukas Kapitein. "We use that characteristic to follow their movements through the cell with a microscope." But to do so, the quantum dots had to be inserted into the cell. Most current techniques result in dots that are inside microscopic vesicles surrounded by a membrane, but this prevents them from moving freely. However, the researchers succeeded in directly delivering the particles into cultured cells by applying a strong electric field that created transient openings in the cell membrane. In their article, they describe how this electroporation process allowed them to insert the quantum dots inside the cell. Extremely bright Once inserted, the quantum dots begin to move under the influence of diffusion. Kapitein: "Since Einstein, we have known that the movement of visible particles can provide information about the characteristics of the solution in which they move. Previous research has shown that particles move fairly slowly inside the cell, which indicates that the cytoplasm is a viscous fluid. But because our particles are extremely bright, we could film them at high speed, and we observed that many particles also make much faster movements that had been invisible until now. We recorded the movements at 400 frames per second, more than 10 times faster than normal video. At that measurement speed, we observed that some quantum dots do in fact move very slowly, but others can be very fast." Kapitein is especially interested in the spatial distribution of the slow and fast quantum dots: at the edges of the cell, the fluid seems to be very viscous, but deeper in the cell he observed much faster particles. Kapitein: "We have shown that the slow movement occurs because the particles are caught in a dynamic network of protein fibres called actin filaments, which are more common near the cell membrane. So the particles have to move through the holes in that network." Motor proteins In addition to studying this passive transport process, the researchers have developed a technique for actively moving the quantum dots by binding them to a variety of specific motor proteins. These motor proteins move along microtubules, the other filaments in the cytoskeleton, and are responsible for transport within the cell. This allowed them to study how this transport is influenced by the dense layout of the actin network near the cell membrane. They observed that this differs for different types of motor protein, because they move along different types of microtubules. Kapitein: "Active and passive transport are both very important for the functioning of the cell, so several different physics models have been proposed for transport within the cell. Our results show that such physical models must take the spatial variations in the cellular composition into consideration as well." | 10.1038/NCOMMS14772
Biology | Mathematicians use machine intelligence to map gene interactions | Zixuan Cang et al. Inferring spatial and signaling relationships between cells from single cell transcriptomic data, Nature Communications (2020). DOI: 10.1038/s41467-020-15968-5 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-15968-5 | https://phys.org/news/2020-05-mathematicians-machine-intelligence-gene-interactions.html | Abstract Single-cell RNA sequencing (scRNA-seq) provides details for individual cells; however, crucial spatial information is often lost. We present SpaOTsc, a method relying on structured optimal transport to recover spatial properties of scRNA-seq data by utilizing spatial measurements of a relatively small number of genes. A spatial metric for individual cells in scRNA-seq data is first established based on a map connecting it with the spatial measurements. The cell–cell communications are then obtained by “optimally transporting” signal senders to target signal receivers in space. Using partial information decomposition, we next compute the intercellular gene–gene information flow to estimate the spatial regulations between genes across cells. Four datasets are employed for cross-validation of spatial gene expression prediction and comparison to known cell–cell communications. SpaOTsc has broader applications, both in integrating non-spatial single-cell measurements with spatial data, and directly in spatial single-cell transcriptomics data to reconstruct spatial cellular dynamics in tissues. Introduction Single-cell transcriptomics methods enable analyses of gene expression heterogeneities in individual cells to study cell fate decisions 1 . Dissociation of tissues into single cells allows high-throughput genomics measurements, but spatial information of cells is often lost. While single-cell transcriptomics has mainly been used to delineate cell subpopulations and their lineage relationships, recently computational tools have also been developed to infer cell–cell communications from scRNA-seq data 2 , 3 . For example, by comparing the average enrichment of genes involved in different cell subpopulations, one might describe the signaling activities of each subpopulation 4 . A probability model that correlates ligand and receptor (and the downstream genes) expression levels in different cells allows the inference of communications between individual cells 5 . At the level of cell populations, a similar approach based on known ligand–receptor pairs was used to derive communications between cell types 6 , which can be further refined using prior knowledge of cell types 7 . Using cell type-specific enrichment of genes in known gene regulatory networks, one can also infer communication among cell clusters 8 . Despite rich details on genes contributing to cell–cell communication in scRNA-seq data, the lack of spatial information in such datasets restricts its usefulness for studying cell–cell communication in tissues with spatial structure. On the other hand, measuring gene expression in intact tissues provides spatial resolutions but the genes examined need to be selected in advance. Is it possible to better infer communications between cells located in different positions in the intact tissues using single-cell transcriptomics data with the aid of additional spatial measurements? Several methods have been developed to pair scRNA-seq data with spatial information using spatial imaging data (e.g., in situ hybridization). 
For example, spatial information was obtained at the cell cluster level by identifying spatial domains with coherent gene expression in spatial imaging data combined with scRNA-seq data 9 . At the individual cell level, similarity measurements based on correlation coefficients 10 , 11 or correspondence scores 12 between commonly examined genes in both spatial imaging data and scRNA-seq data were used to reconstruct spatial gene expression or map cells in scRNA-seq data to their potential spatial origins. Posterior probability estimates were carried out on spatial data described by a mixture model 13 or simplified to one-dimensional bins 14 to assign spatial origins to individual cells. Other general methods designed for the integration of multi-omics data can also be applied to integrate these two data types. Canonical correlation analysis was used to connect cells in scRNA-seq data to locations in spatial data 15 , facilitating the subsequent identification of anchors across the datasets for integration 16 . Non-negative matrix factorization can also be used to construct common low-dimensional spaces of multiple datasets 17 . These methods connecting scRNA-seq data and spatial data are especially valuable for analyzing spatial patterns of different genes or cell types in embryos 10 , 13 and organisms with robust patterns 9 , 12 , 14 . These existing approaches focus on reconstructing spatial gene expression or estimating the spatial origins of cells in scRNA-seq data. The heterogeneity in single-cell measurements might be averaged out when determining the spatial gene expression patterns since multiple cells can be mapped to a single position. There also might be multiple highly probable spatial origins for an individual cell in scRNA-seq data. This makes it difficult for existing approaches to incorporate cell–cell communications. Here we utilize the optimal transport method 18 to equip cells in scRNA-seq data with a spatial distance at single-cell resolution by connecting it with another dataset of spatial measurements of a small number of genes. Optimal transport allows natural coupling of distributions (pairing of datasets) and characterization of distances between multiple distributions (the difference between datasets or data samples represented as distributions) 18 . Recent advancements in optimal transport method development, including efficient algorithms 19 , an accessible library 20 , and flexible formulations 21 , 22 , enable its broader application 23 , 24 , 25 , such as inference of developmental trajectories 26 and handling batch effects 27 in scRNA-seq data. To connect individual cells and the spatial positions in two different measurements, we first develop a map between the two datasets by spatially optimal transporting the single cells (SpaOTsc) to spatial imaging datasets. The cell–cell distance calculated from the SpaOTsc map yields a spatial metric for scRNA-seq data. We then use this metric to establish an optimal transport plan from a probability distribution of “sender cells” to “receiver cells.” Such sender cell distributions can be characterized by the expression levels of communication genes (e.g., ligands) while the receiver cells can be distinguished by the paired genes (e.g., receptors and ligand–receptor downstream genes). Our approach mimics the corresponding physical processes of ligand release by signal senders and consumption by potential receivers.
After obtaining an initial cell–cell communication network, we then use a machine learning model based on an ensemble of trees to estimate the spatial range of signaling for spatial cell–cell communications. While cells may communicate with each other directly through ligand–receptor interactions, a gene in one cell may affect another gene in other cells indirectly. To explore influences among genes across different cells, which may not be directly interacting in cell–cell communications, we then use partial information decomposition 28 , 29 to quantify the unique information provided by one gene to another gene across different cells. For a given pair of genes and a prescribed spatial distance, we quantify how likely they are to interact across different cells through space, referred to as intercellular gene–gene regulatory information flows. We first test SpaOTsc through cross-validation of spatial gene expression predictions as well as spatial mapping of single cells with known origins in four pairs of single-cell RNA-seq and spatial gene expression measurements. We then infer cell–cell communications and intercellular gene–gene regulatory information flows for three systems. Results Overview of SpaOTsc method The SpaOTsc method consists of two major components: (a) constructing a spatial metric for cells in scRNA-seq data and (b) reconstructing cell–cell communication networks and identifying intercellular regulatory relationships between genes (Fig. 1 ). A spatial metric for the cells in scRNA-seq data is first constructed using a mapping to spatial data. Using this spatial metric, we generate spatial visualization and clustering of cells and genes for scRNA-seq data. Cell–cell communication networks are then reconstructed for particular signalings. Finally, by feeding the spatial metric and scRNA-seq data to machine learning models and partial information decomposition, we infer the spatial ranges of particular signalings and quantify intercellular gene–gene regulatory information flows between genes. See more details in Methods and Supplementary Methods. Fig. 1: Overview of SpaOTsc. a The unbalanced transport relaxes the mass conservation constraint (e.g. lines between circles), and the structured transport utilizes additional information (e.g. dotted links) to refine the mapping (e.g. blue hexagon). b Cell–cell distance is inferred by computing the optimal transport distance of the spatial probability distributions of cells (rows of γ in a ). c Calculated cell–cell distance, along with partial information decomposition and random forest models, is used to infer the spatial distance of signaling and then construct space-constrained cell–cell communications and identify potential intercellular regulation between genes. Full size image To construct a spatial metric for scRNA-seq data, we integrate it with the spatial data using optimal transport 21 (SpaOTsc). We treat the two datasets as two distributions and generate a transport cost based on the expression profile dissimilarity of shared genes across the two datasets. The dissimilarity measurements within each dataset are used to refine the mapping between these two distributions through structured optimal transport 22 . The resulting optimal mapping depicts the probability distributions of individual cells over space.
Specifically, given the spatial data ( m positions) and the scRNA-seq data ( n cells), we generate three dissimilarity/distance matrices: \({\mathbf{M}} \in {\Bbb R}^{n \times m}\) measuring gene expression dissimilarity between cells and positions using the common genes from the two datasets, \({\mathbf{D}}_{{\mathrm{sc}}} \in {\Bbb R}^{n \times n}\) measuring gene expression dissimilarity among individual cells using all genes in scRNA-seq data, and \({\mathbf{D}}_{{\mathrm{spa}}} \in {\Bbb R}^{m \times m}\) measuring the spatial distance between positions in spatial data. These matrices are fed to an unbalanced 21 and structured 22 optimal transport algorithm (Eq. ( 1 ) in Methods), which returns an optimal transport plan \({\boldsymbol{\gamma }}^ \ast \in {\Bbb R}^{n \times m}\) connecting the two datasets (Fig. 1a ) for the related subsequent analyses (Fig. 1b,c ). We then annotate the scRNA-seq data with a spatial metric in addition to determining a mapping between spatial positions and cells in scRNA-seq data. To this end, we infer the spatial distance between every pair of cells by computing the optimal transport distance (Eq. ( 2 ) in Methods) between their probability distributions over space (rows of γ *). The spatial distance among positions ( D spa ) is used as the transport cost. We refer to this as the cell–cell distance \(\widehat {\mathbf{D}}_{{\mathrm{sc}}} \in {\Bbb R}^{n \times n}\) (Fig. 1b ). Additionally, the sparsity of the resulting optimal transport plan depicts the confidence of the estimated cell–cell distance. This cell–cell distance immediately provides spatial insights when paired with conventional analysis pipelines. Visualizations on spatial arrangements of scRNA-seq can be constructed by feeding the cell–cell distance to dimension reduction methods such as t-SNE 30 and UMAP 31 , 32 . Spatially localized subclusters can be classified by the cell–cell distance using clustering algorithms such as Louvain method 33 . Moreover, the genes in scRNA-seq data can be viewed as distributions on a metric space (cells equipped with the cell–cell distance). By computing the optimal transport distance between these distributions, we then derive a metric for the n g genes represented by a distance matrix \(\widehat {\mathbf{D}}_{\mathrm{g}} \in {\Bbb R}^{n_{\mathrm{g}} \times n_{\mathrm{g}}}\) assembling a gene spatial atlas. Next, we infer cell–cell communication and intercellular gene–gene regulatory information flow over the scRNA-seq data annotated by the spatial cell–cell distance. To identify possible communications among cells mediated by ligand–receptor interactions, we formulate an optimal transport problem that transports a source probability distribution of signal sender cells to a target probability distribution of receiver cells (Eq. ( 4 ) in Methods). The expression of ligand, receptor, and downstream genes are used to estimate these sender and receiver distributions. The cell–cell distance is used as the transport cost to spatially constrain the signaling network, and the corresponding optimal transport plan \({\boldsymbol{\gamma }}_{\mathrm{S}}^ \ast \in {\Bbb R}^{n \times n}\) represents the likelihoods of cell–cell communications (Fig. 1c ). Knowing the spatial range of particular signaling can help further confine the inference of cell–cell communication. To infer this spatial range, we analyze a collection of trained random forest models with the downstream genes as outputs and the receptors as sample weights. 
The genes that correlate highly with the downstream genes, together with the ligands from cells located within a given spatial range, are the input features. The ligand feature importance in the trained model indicates how much knowing the ligand expression level within the corresponding spatial range helps the prediction of downstream gene expression. A series of spatial distances are examined, and the one with the highest ligand feature importance serves as an approximation of the spatial range for this signaling (Fig. 1c ). To interrogate whether two genes affect each other across cells through space, we utilize partial information decomposition 28 , 29 , 34 to compute the intercellular gene–gene regulatory information flow (Fig. 1c ). Specifically, we estimate the unique information about a gene in a cell provided by another gene expressed in its neighboring cells within a predefined spatial distance, taking into account the information given by a collection of other genes in this cell. The gene expression in the spatial neighborhood of each cell is estimated by a weighted average based on the spatial metric of cells in scRNA-seq data. Both cellular gene expression and spatial neighborhood gene expression are summarized into histograms using Bayesian blocks 35 . These histograms are fed to discrete partial information decomposition algorithms 28 , 34 (Eq. ( 3 ) in Methods). By iterating over different spatial distances, this approach yields a directed network of genes annotating possible interactions between genes across cells under different spatial distances. Accuracy of SpaOTsc mapping and comparison to other methods The mapping between scRNA-seq data and spatial data obtained by SpaOTsc is the foundation of the subsequent analyses. To evaluate this mapping, we utilized four scRNA-seq datasets paired with spatial data from zebrafish embryo, Drosophila embryo, and mouse visual cortex. Two different scRNA-seq datasets on measurements of the 6 hpf zebrafish embryo were used. The first dataset 13 has 851 cells and 10495 genes, and it has been previously used for analyzing spatial data 13 . The second dataset 36 has 5693 cells and 30677 genes, and it can be used to test our method in handling unbalanced datasets since the number of cells in the scRNA-seq data is ~90-fold larger than the number of positions in the spatial data. The spatial reference dataset 13 consisting of 64 spatial positions and 47 genes was used for both single-cell datasets. For the Drosophila embryo, the scRNA-seq data 10 has 1297 cells and 8925 genes, and the spatial data 10 has 3039 spatial positions and 84 genes. The mouse visual cortex scRNA-seq dataset 37 has 15413 cells and 45768 genes, and the corresponding spatial dataset 38 has 1549 spatial positions and 1020 genes. The details on data acquisition and preprocessing can be found in Datasets and processing in Methods. For the zebrafish embryo and Drosophila embryo, we carried out leave-one-out cross-validation of spatial expression prediction for each gene in the spatial data using the scRNA-seq data. When predicting the spatial expression of each gene, we excluded that gene from the spatial data. The quality of the reconstructed spatial gene expressions was evaluated by Spearman’s correlation coefficient, the area under the receiver operating characteristic curve (AUC), and root-mean-square error (RMSE). When comparing to binary spatial data, AUC is used to evaluate this classification problem. RMSE is used when the spatial data is continuous.
Three other established methods for spatial gene expression prediction, DistMap 10 , Achim et al. 12 and Seurat v1 13 , were used for comparison. All three methods provide mapping matrices between scRNA-seq data and spatial data, which were used to reconstruct spatial gene expression via a weighted average. Our method has shown high accuracy in the three pairs of datasets tested (Fig. 2a-d , Supplementary Figs. 1 – 4 ), achieving an AUC of 0.88 in the Drosophila dataset and 0.95/0.94 for the first/second pairs of zebrafish datasets (Table 1 ). The performance on the second, much larger scRNA-seq dataset of zebrafish embryo is only slightly inferior to that on the first, smaller dataset (Table 1 ). This indicates the capability of SpaOTsc in handling unbalanced data sizes while combining scRNA-seq data and spatial data. SpaOTsc exhibits a more noticeable improvement in terms of evaluation metrics over other methods on the Drosophila embryo datasets, which contain more detailed spatial data compared to the zebrafish embryo datasets (Table 1 ). This implies that our method is potentially more robust and effective for spatial data with higher spatial resolution. We also investigated the prediction accuracy of our method using three other data normalization procedures, and found consistent results (Supplementary Fig. 5 ). The observed robustness of the prediction under different preprocessing procedures is partly due to the usage of ranking-based correlation coefficients for similarity measurements. Fig. 2: Validation of SpaOTsc using three systems. a Predicted spatial expressions for the zebrafish embryo (both data from ref. 13 ). b The receiver operating characteristic (ROC) curves of leave-one-out cross-validation (LOO CV) of the spatial expression prediction for the zebrafish embryo data. c Predicted spatial expressions for the Drosophila embryo (both data from ref. 10 ). d The ROC curves of LOO CV of the spatial expression prediction for the Drosophila embryo spatial data. e Assignment of spatial positions to the scRNA-seq data for the mouse visual cortex (spatial data from ref. 38 ; scRNA-seq data from ref. 37 ). Each column depicts all cells from the spatial data in the visual cortex. For example, in column one, the color of cells represents the average probability of the spatial origin of the 890 cells in scRNA-seq data labeled with spatial origin L1. f Violin plots along the L1–L6 axis of the mapped spatial origins for single cells from each subregion. Inside the violin plots are standard boxplots (median, 25th percentile, 75th percentile, the larger of the minimum value and the 25th percentile − 1.5 × interquartile range, and the smaller of the maximum value and the 75th percentile + 1.5 × interquartile range). The numbers of data points for the violin plots from left to right are 890, 1979, 1594, 3040, 2899, respectively. Full size image Table 1 Performance comparison by leave-one-out cross-validation of spatial gene expression prediction. Full size table In the original mouse visual cortex datasets, the cells were annotated with their original layer in the intact tissue. The problem of predicting the original spatial region for scRNA-seq data is used to evaluate our method in a multiclass classification problem (Fig. 2e,f ). A micro F 1 score (harmonic mean of precision and recall) of 0.48 was achieved (compared to 0.09 for a baseline model that always predicts the label of the majority population).
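To make the evaluation protocol concrete, the sketch below scores a single reconstructed gene in the way the comparisons in Table 1 are scored (AUC against binary spatial data; Spearman's correlation and RMSE against continuous data). The reconstruction is the mapping-weighted average described in Methods; the function and variable names are ours, and the mapping matrix is assumed to have been computed with the scored gene left out, as in the leave-one-out procedure.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

def evaluate_spatial_prediction(gamma, sc_expr, spatial_expr, binary):
    """Score one reconstructed gene against its spatial measurement.

    gamma        : (n_cells, m_positions) mapping matrix, computed with
                   the scored gene excluded from the shared gene set
    sc_expr      : (n_cells,) expression of the gene in scRNA-seq data
    spatial_expr : (m_positions,) measured spatial expression
    binary       : True -> report AUC; False -> Spearman rho and RMSE
    """
    # Weighted-average reconstruction at each position (columns of gamma)
    weights = gamma / gamma.sum(axis=0, keepdims=True)
    predicted = weights.T @ sc_expr
    if binary:
        return roc_auc_score(spatial_expr.astype(int), predicted)
    rho, _ = spearmanr(spatial_expr, predicted)
    rmse = np.sqrt(np.mean((spatial_expr - predicted) ** 2))
    return rho, rmse
```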
We also evaluated the importance of using unbalanced optimal transport and the benefit of incorporating the information of all genes in scRNA-seq data through structured optimal transport. This was done by altering the parameters on the weights for the unbalanced and structured terms (see Eq. ( 1 ) in Methods). Both unbalanced and structured optimal transport yield improved performance over balanced and unstructured configurations in all tested cases (Supplementary Fig. 1 ). Space-constrained visualization and clustering We applied SpaOTsc to analyze the spatial aspect of scRNA-seq datasets. Visualizations of scRNA-seq data with spatial details were produced by feeding the cell–cell distance to commonly used dimension reduction algorithms (Fig. 3a,b , Supplementary Figs. 6 – 11 ). Gene–gene “distances” that are used to depict the difference in spatial expression patterns can also be generated using the cell–cell distance. Such gene–gene differences provide a gene spatial atlas, in which networks of genes with edges indicating similarity in spatial expression patterns are obtained when the scRNA-seq data is equipped with a cell–cell spatial distance (Fig. 3c,d , Supplementary Figs. 12 , 13 ). Fig. 3: Metric spaces and spatial gene atlases for scRNA-seq data. a A low-dimensional spatial visualization (UMAP) of mouse visual cortex scRNA-seq data (ref. 37 ) using the cell–cell distance inferred by SpaOTsc with the spatial data (ref. 38 ). The cell labels are taken from ref. 37 . b Similar to (a) but for Drosophila embryo data (ref. 10 ). c, d The gene spatial atlases for mouse visual cortex data ( c ) and Drosophila embryo data ( d ) consist of collections of highly variable genes, where nodes represent genes and edges indicate similarity in spatial pattern. Full size image For the mouse visual cortex, a spatial axis from layer L2/3 through L6 was reconstructed for the scRNA-seq data (Fig. 3a ). The spatial visualization of scRNA-seq data shows a consistent spatial colocalization of different cell types. For example, somatostatin (Sst)-expressing cells are relatively abundant across space and are colocalized with vasoactive intestinal peptide (Vip) and parvalbumin (Pvalb) cells (Fig. 3a ). Direct interactions between Sst and Pvalb neurons, and Vip and Sst neurons, are known to regulate neuronal activity 37 , 39 . The spatial visualization suggests that Sst neurons are preferentially positioned between Vip and Pvalb neurons in space, indicating an indirect interaction between Vip and Pvalb neurons through Sst neurons. In contrast, the low-dimensional visualization based only on scRNA-seq data does not show such spatial arrangements (Supplementary Fig. 11 ). For the Drosophila embryo, the spatial visualization of scRNA-seq data successfully reconstructs the dorsal-ventral and posterior-anterior axes (Fig. 3b ). Spatially localized subclusters of the same cell type were also identified, further revealing the relationship between cell heterogeneity and spatial arrangement. Gene spatial atlases were constructed to classify spatial gene expression patterns (Fig. 3c,d ). For the mouse visual cortex, we identified genes that are enriched in certain spatial regions such as the clusters of genes expressed at L2/3, L4-L5, and L6 (Fig. 3c ) as well as genes that have no apparent spatial localization behavior (Supplementary Fig. 13 ).
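The space-constrained visualizations and spatially localized subclusters described above can be generated directly from the inferred cell–cell distance matrix. The sketch below is one possible way to do this, assuming the umap-learn package; the paper uses the Louvain method for clustering, and the average-linkage hierarchical clustering here is only a simple stand-in.

```python
import numpy as np
import umap  # umap-learn package
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def spatial_views(D_sc_hat, n_subclusters=10):
    """Embed and subcluster cells using the spatial cell-cell distance.

    D_sc_hat : (n_cells, n_cells) distance matrix inferred by SpaOTsc
    """
    # 2D embedding driven by the inferred spatial metric
    embedding = umap.UMAP(metric="precomputed").fit_transform(D_sc_hat)
    # Distance-based subclustering (stand-in for the Louvain method)
    condensed = squareform((D_sc_hat + D_sc_hat.T) / 2, checks=False)
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_subclusters, criterion="maxclust")
    return embedding, labels
```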
For the Drosophila embryo, we identified gene clusters that are highly expressed on the dorsal or ventral side, and a gene cluster that exhibits a smooth dorsal-to-ventral gradient which may contribute to dorsal-ventral patterning (Fig. 3d ). Reconstruction of cell–cell communication in space We first constructed a cell–cell distance for the scRNA-seq data of zebrafish embryo 13 , 36 at 6 hpf using the spatial data 13 of the same developmental stage. We inferred cell–cell communication in scRNA-seq data through Wnt signaling and bone morphogenetic protein (BMP) signaling (Fig. 4a,b ) using known ligand, receptor, and downstream genes. The cell–cell communication was mapped to space by constructing a position-to-position communication flow using the SpaOTsc mapping matrix between scRNA-seq data and spatial data. Based on the position-to-position communication, we approximated the direction of signal flow from the signal-sending positions (Fig. 4a ). The cell–cell communication was also summarized into a cluster–cluster communication matrix revealing the communication between cell types (Fig. 4b ). The genes used in signaling analysis are listed in Section 3 of Supplementary Methods. Fig. 4: Reconstruction of cell–cell communications in space. a (zebrafish embryo) Wnt and BMP signalings interpolated from the SpaOTsc cell–cell communication matrix and mapped to space using the mapping between cells and positions. The arrow lengths indicate the signal-sending probability of the position and the color shows the estimated signal receiver probability distribution over space. The scRNA-seq data from ref. 36 and spatial data from ref. 13 were used. b (zebrafish embryo) Wnt and BMP signaling summarized into cell clusters. c, d ( Drosophila embryo) Spatial ranges of Wg and Dpp signaling inferred using a series of sets of random forest models. The gray band represents the 95% confidence interval. The experiment was repeated three times with similar results. e ( Drosophila embryo) Left: cell–cell communications of Wg signaling at the single-cell level using a visualization (UMAP) constrained by cell–cell distance. The color of each link corresponds to the color of its sending cell, based on the clustering using only scRNA-seq data. Right: cluster–cluster communications of Wg signaling based on SpaOTsc spatial subclustering (the previously determined scRNA-seq clusters subclustered using the cell–cell distance). f ( Drosophila embryo) Dpp signaling in space plotted similarly to ( e ). Full size image We found significant Wnt signaling, an important developmental regulator, from the ventral side to the dorsal side along the margin, which may contribute to axis specification 40 . Most Wnt signaling activity was identified to take place within the mesoderm. A significant group of Wnt ligand-sending cells was identified at the ventrolateral margin (depicted by arrow length in Fig. 4a ), indicating the regulation of the later formation of posterior mesoderm through Wnt signaling 41 . Interestingly, a subgroup of ectodermal cells was found near the dorsal margin sending signals to a subgroup of mesodermal cells (cluster 10 to cluster 9 in Fig. 4b ). Significant BMP signaling, an essential regulator of developmental growth, was identified at the ventral side, which is consistent with the established BMP signaling gradient along the ventral-dorsal axis 42 .
While Wnt signaling was mainly identified in the mesoderm, BMP signaling was found to be enriched across endoderm, mesoderm, and ectoderm at the ventral side. Furthermore, we found a secondary hotspot of BMP signaling receivers colocalized with Wnt signaling receivers at the dorsal side (cluster 9 in Fig. 4b ), which supports the suggested interaction between Wnt and BMP signaling in early embryo development 43 . This subgroup of both Wnt and BMP receivers is located in the mesoderm, indicating possible crosstalk between Wnt and BMP signaling through the mesodermal layer. Next, we performed a similar analysis for fibroblast growth factor (FGF) signaling (Supplementary Fig. 14 ). While the identified BMP signaling activity was found to be strong on the ventral side, the inferred FGF signaling was found to be more active on the dorsal side. This observation is consistent with a prior study suggesting a down-regulation of BMP by FGF signaling 44 . For the Drosophila embryo 10 , we used SpaOTsc to analyze cell–cell communications with a focus on wingless (Wg) and decapentaplegic (Dpp) signaling (Fig. 4c–f ). To fully utilize the fine resolution of this spatial data, we first estimated the spatial ranges of the signalings to restrict the cell–cell communication networks in space. Wg, an invertebrate analog of Wnt that plays an essential role in growth, polarity and patterning, was previously shown to act in a range of 50–100 µm 45 . The spatial range of Wg signaling inferred using SpaOTsc was about 100 µm (Fig. 4c , Supplementary Fig. 15 ). After estimating the probability of signaling between each pair of cells constrained by the spatial distance, the cell–cell communications could be summarized into cell subclusters (Fig. 4e ). Interestingly, a thin strip of cells located near the lateral-ventral part of the embryo was found to be both sources and targets of Wg signaling. Moreover, Wg signaling was abundant at the lateral side of the embryo with the direction biased toward the posterior. This finding explains a previous observation that Wg signaling is crucial to the growth of the posterior 46 , and further predicts a subpopulation of cells in the posterior-ventral domain that receives Wg signaling from its neighbors. In addition, one can prioritize the most significant cell–cell connections by adjusting a scaling parameter (η in Supplementary Methods Eq. 34), which determines whether a gene is sufficiently expressed to be included in the cell–cell communication analysis (Supplementary Fig. 16 ). Dpp, the Drosophila homolog of BMP and an essential morphogen regulating patterning in early Drosophila embryo development, was found to have a longer signaling spatial range of 125 µm (Fig. 4d , Supplementary Fig. 17 ). The most active Dpp signaling was predicted to occur at the lateral side where short gastrulation (Sog) expression is predicted to be abundant, supporting a prior result that Dpp signaling undergoes long-range transport facilitated by Sog during dorsal-ventral patterning 47 (Fig. 4f ). Interestingly, the strong Wg source located near the ventral side was also identified as a significant target of Dpp signals from the dorsal side. When we compared Wg- and Dpp-based cell–cell communications inferred by SpaOTsc with another inference method 5 , which does not include spatial information, we found that SpaOTsc makes predictions that are more biologically feasible and more consistent with prior knowledge (Supplementary Figs. 18 – 22 ).
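The spatial signaling ranges reported above (Fig. 4c,d) come from the random forest procedure outlined in the overview. A simplified, single-model sketch of that idea follows; the neighborhood averaging, the hyperparameters and all names are our assumptions, and the full procedure (a series of sets of models per candidate distance, detailed in Supplementary Methods) is more involved.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def signaling_range(ligand, receptor, target, correlated_genes,
                    D_sc_hat, etas):
    """Score candidate spatial ranges for one ligand-receptor pair.

    ligand, receptor, target : (n_cells,) expression vectors
    correlated_genes         : (n_cells, k) genes correlated with target
    D_sc_hat                 : (n_cells, n_cells) cell-cell distance
    etas                     : candidate spatial ranges
    Returns the eta whose neighborhood-ligand feature is most important.
    """
    importances = []
    for eta in etas:
        # Average ligand level over each cell's eta-neighborhood
        mask = (D_sc_hat <= eta).astype(float)
        lig_nbhd = (mask @ ligand) / np.maximum(mask.sum(axis=1), 1)
        X = np.column_stack([lig_nbhd, correlated_genes])
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X, target, sample_weight=receptor)   # receptors as weights
        importances.append(rf.feature_importances_[0])
    return etas[int(np.argmax(importances))], importances
```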
Epidermal growth factor (EGF) signaling, another key regulator of dorsal-ventral patterning 48 , was also inferred (Supplementary Fig. 23 ). Similar to Dpp, which regulates dorsal-ventral patterning, EGF signaling was found to be strong along the dorsal-ventral axis. The inferred EGF signaling is more active in the posterior, in contrast to Dpp signaling, which is stronger in the anterior. Identification of intercellular gene–gene information flows In the previous section, we inferred relationships between cells (the cell–cell communication network) based on known genes involved in signaling. Here we attempt to identify the spatial influence of one gene on another gene by computing the intercellular gene–gene regulatory information flow. For the Drosophila embryo, we inferred such a flow for a set of the most variable genes under different spatial ranges to predict which gene in one cell may affect another gene in a different cell located within an estimated maximal distance (Fig. 5a ). For example, gene Twist is connected to gene Snail at a spatial distance of 25 µm (red curve in Fig. 5a ), suggesting that Snail is directly or indirectly affected by Twist in neighboring cells within the spatial distance. These two genes are known to be important during mesoderm formation 49 . Fig. 5: Intercellular gene–gene regulatory information flows. a ( Drosophila embryo) The intercellular gene–gene regulatory information flow for the top 20 variable genes in Drosophila embryo scRNA-seq data. For example, gene Twist in the 25 μm shell is connected with gene Snail (red curve), suggesting Snail is directly or indirectly affected by Twist in neighboring cells within a spatial distance of 25 μm. b (zebrafish embryo) The intercellular gene–gene regulatory information flow for the variable genes involved in Wnt signaling or BMP signaling. Relative distances are considered, where short, medium and long range correspond to 1/8, 1/4 and 1/2 of the embryo radius. c Heatmaps of the information flows at different spatial scales showing the intercellular regulation within and across the two signaling modules. Full size image For the zebrafish embryo, we analyzed genes that may link Wnt and BMP to study crosstalk between these two signalings (Fig. 5b, c ). A subset of variable genes was used to infer the information flow, confirming the intercellular regulatory relationships within Wnt signaling or BMP signaling (solid curves in Fig. 5b ). We also found several significant connections between genes from Wnt signaling and BMP signaling (dashed curves in Fig. 5b ), suggesting potential interactions between Wnt and BMP signaling. Moreover, significant connections between a downstream gene of BMP signaling, id1 (inhibitor of DNA binding 1), and the Wnt ligand genes were identified. This finding is consistent with a previous suggestion that id1 is a mediator for the crosstalk between Wnt signaling and BMP signaling 50 . To investigate whether the number of background genes affects the inference of gene–gene regulatory information flows, we systematically increased the number of background genes from 1 to 300. Consistent results were obtained once more than 50 genes were used as the background genes for the inference (Supplementary Figs. 24 – 29 ).
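To make the Unq term used in these information-flow calculations concrete (see Eq. (3) in Methods below), the following sketch implements the Williams–Beer I_min variant of partial information decomposition for a single background gene X, a source-gene neighborhood signal Y and a target gene Z, all binned into a joint histogram. The paper relies on established discrete PID algorithms (refs. 28, 34); this particular redundancy measure is shown only as one common, easily implemented choice, and the single background gene stands in for the gene collection used in Eq. (3).

```python
import numpy as np

def unique_information(p_xyz):
    """Unq_X(Z; Y) under the Williams-Beer I_min decomposition.

    p_xyz : joint probability table with axes (X, Y, Z), where X is a
            background gene, Y the source gene's neighborhood signal and
            Z the target gene (all binned, e.g. with Bayesian blocks).
    """
    p_z = p_xyz.sum(axis=(0, 1))                       # p(z)

    def specific_information(drop_axis):
        p_az = p_xyz.sum(axis=drop_axis)               # p(a, z)
        p_a = p_az.sum(axis=1, keepdims=True)          # p(a)
        with np.errstate(divide="ignore", invalid="ignore"):
            p_a_given_z = np.where(p_z > 0, p_az / p_z, 0.0)
            ratio = np.where(p_az > 0, p_az / (p_a * p_z), 1.0)
            terms = np.where(p_az > 0, p_a_given_z * np.log2(ratio), 0.0)
        return terms.sum(axis=0)                       # I_spec(z; A) per z

    i_spec_x = specific_information(drop_axis=1)       # source X
    i_spec_y = specific_information(drop_axis=0)       # source Y
    redundancy = np.sum(p_z * np.minimum(i_spec_x, i_spec_y))
    mi_zy = np.sum(p_z * i_spec_y)                     # I(Z; Y)
    return mi_zy - redundancy                          # unique info of Y
```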
Applications to spatial transcriptomics datasets To investigate whether SpaOTsc is applicable to the inference of cell–cell communications using spatial transcriptomics data, we used two different datasets for mouse olfactory bulb: a Slide-seq dataset 51 containing 26316 cells with 18838 genes and an RNA seqFISH+ dataset 52 containing 2050 cells with 10000 genes. In addition, we utilized one scRNA-seq dataset 53 containing 51426 cells with 18560 genes. The scRNA-seq dataset consists of six samples from three physiological conditions: wild type (WT), olfactory trained (TR), and naris occluded (OC). By selecting secreted ligands in a database of more than a thousand ligand–receptor pairs 3 , we identified a list of 1157, 989, and 758 pairs in the scRNA-seq, Slide-seq, and RNA seqFISH+ datasets, respectively. We first inferred cell–cell communications in the spatial transcriptomics datasets without using the scRNA-seq dataset. Because a spatial transcriptomics dataset is annotated with spatial distances between cells computed directly from the spatial coordinates in the data, it does not require the first part of SpaOTsc, which integrates spatial data and scRNA-seq data and is denoted SpaOTsc-integration (Fig. 1a,b ). The second part of our method, denoted SpaOTsc-communications (Fig. 1c ), was used to analyze the spatial transcriptomics datasets. We found in both spatial datasets that the signal sender cells exhibit a stronger spatial localization pattern, whereas the signal receivers are more scattered across space (Fig. 6a, b , Supplementary Figs. 30 , 31 ). For example, a strip of cells in the middle of the Slide-seq data (Fig. 6b ) and the top portion of the RNA seqFISH+ data (Fig. 6a ) are the signal senders. Individual ligands such as Apoe and Ptn are abundant across the whole sample, whereas Trf is sparse and located on the left and right sides of the domain (Fig. 6c ). The intercellular gene–gene regulatory information flow for the top variable genes in the Slide-seq dataset shows abundant connections for the gene Pcp4 (Fig. 6d ), a gene suggested to be expressed in cells of neuronal origin 54 . Fig. 6: Application of SpaOTsc to spatial transcriptomic data and their integrations with scRNA-seq datasets. The Slide-seq data, the RNA seqFISH+ data, and the scRNA-seq data for mouse olfactory bulb were taken from ref. 51 , ref. 52 , and ref. 53 , respectively. a, b Spatial distributions of signal senders and receivers with the color showing the likelihood of being a sender or receiver. Top row: inference using only spatial transcriptomics data. Bottom row: the inferred signaling in scRNA-seq data (sample WT1) visualized by mapping the single cells to space using spatial transcriptomic data. c Signaling for four individual marked ligands in the Slide-seq data. d The intercellular gene–gene regulatory information flow of the top 20 variable genes in the Slide-seq data. e Similarities in cluster–cluster communication between the six samples of the scRNA-seq data integrated with the RNA seqFISH+ data, measured by Pearson’s correlation coefficient. Full size image We next integrated the scRNA-seq dataset with each of the two spatial transcriptomics datasets using SpaOTsc-integration. As a result, cells in scRNA-seq data were mapped into space after using SpaOTsc-communications (Fig. 6a,b , Supplementary Figs. 32 , 33 ).
To study the similarities between the three different physiological conditions, which are only available in the scRNA-seq data, we carried out a clustering on the scRNA-seq data with all cells from different conditions and samples (Supplementary Fig. 34 ) to compare the average cell–cell communications between different clusters. Overall, the six samples have similar cell–cell communication profiles; however, the OC samples differ somewhat more from the TR and WT samples (Fig. 6e ). A similar result was obtained when integrating with the Slide-seq dataset (Supplementary Fig. 35 ). Discussion Overall, we have shown the capabilities of SpaOTsc to (1) map between scRNA-seq data and spatial data, (2) infer spatial distances between single cells, (3) quantitatively compare spatial gene expression patterns, (4) reconstruct spatial cell–cell communications, (5) estimate the spatial range of particular types of intercellular signaling, and (6) identify gene pairs that potentially intercellularly regulate each other. The mapping accuracy of SpaOTsc has been demonstrated by gene expression reconstruction validation on zebrafish embryo and Drosophila embryo datasets, along with spatial origin assignments to the scRNA-seq data of mouse visual cortex. Unlike previous mapping methods, the mapping of a cell–position pair depends not only on the gene expression profile similarity between this pair but also on the mapping of all other pairs. The structured nature of our optimal transport method allows us to fully utilize the scRNA-seq data, which is especially useful when the spatial data only partially represents the cell types in scRNA-seq data. The spatial metric for cells in scRNA-seq data obtained using SpaOTsc allows one to carry out spatial analyses of all genes at a single-cell resolution. Inferring the spatial distance between two cells by comparing their estimated spatial probability distributions provides a useful coupling between these two cells, quantifying the confidence of the estimated cell–cell distance. In addition, scRNA-seq data annotated with this spatial metric can be fed to different spatial transcriptomics analysis pipelines such as Giotto 55 . Beyond application to data analysis, the spatial metric for scRNA-seq data can also be used for modeling approaches. For example, ordinary (or partial) differential equations on graphs might be introduced using this metric to study the dynamics of intracellular and intercellular gene regulation. Computationally, the cell–cell distance inference requires an iterative calculation of optimal transport over all pairs of cells. Although an effective approximation was made by only using a small number of landmark positions, the computation can become intractable when the dataset is excessively large. Improvement can be made by first constructing a graph partially representing the distances between cells, and approximating the full cell–cell distance matrix using methods designed to estimate pairwise distances in large graphs 56 . Adding spatial constraints in cell–cell communication inference is critical to the spatial analysis of gene–gene regulations across cells. However, our approach does not consider the time delay that may take place in cell–cell communication. Such delay may include the diffusion time of the ligand or the reaction time of the intracellular cascades.
It is potentially beneficial to include this effect when studying spatially regulated cell–cell communication, and dynamical systems models or more sophisticated probabilistic models might be needed for more accurate inference. Beyond inferring cell–cell communications based on known genes involved in specific signaling, the estimation of the spatial range of signaling and the identification of new gene pairs that might affect each other's expression across cells are potentially instrumental in the spatial analysis of gene expression data. Further incorporation of gene–gene regulatory networks into our spatial analysis tools could be very fruitful in studying spatial gene regulation. Finally, SpaOTsc is generally applicable to datasets where reasonable similarity measurements between single cells and spatial positions are obtainable. Since single-cell spatial transcriptomics data 52 , 57 naturally resemble a (spatial) metric space over a collection of individual cells, SpaOTsc is also directly applicable to high-throughput spatial data. The SpaOTsc-integration utility provides a useful tool for integrating scRNA-seq data with spatial transcriptomics data, making full use of the more easily available scRNA-seq datasets under various biological conditions. Methods Full details of the theoretical background and implementation of SpaOTsc can be found in Supplementary Methods. SpaOTsc model SpaOTsc constructs a mapping between the n cells in scRNA-seq data and the m positions in spatial data by solving an optimal transport problem 20 given three dissimilarity/distance matrices, \(\mathbf{M} \in \mathbb{R}^{n \times m}\) for the gene expression dissimilarity between cells and locations, \(\mathbf{D}_{\mathrm{sc}} \in \mathbb{R}^{n \times n}\) for the gene expression dissimilarity among cells, and \(\mathbf{D}_{\mathrm{spa}} \in \mathbb{R}^{m \times m}\) for the distances among spatial locations. The optimal transport plan γ* is obtained by solving $$\mathop{\arg\min}\limits_{\boldsymbol{\gamma} \in \mathbb{R}_{+}^{n \times m}} \Big[ (1-\alpha)\,\langle \boldsymbol{\gamma}, \mathbf{M} \rangle_{\mathrm{F}} + \rho\,\big(\mathrm{KL}(\boldsymbol{\gamma}\mathbf{1}^{m}\,|\,\boldsymbol{\omega}_{1}) + \mathrm{KL}(\boldsymbol{\gamma}^{T}\mathbf{1}^{n}\,|\,\boldsymbol{\omega}_{2})\big) + \alpha \sum_{i,j,k,l} L\big(\mathbf{D}_{\mathrm{sc}}(i,k),\mathbf{D}_{\mathrm{spa}}(j,l)\big)\,\boldsymbol{\gamma}_{i,j}\,\boldsymbol{\gamma}_{k,l} \Big]$$ (1) where ω 1 , ω 2 are weight vectors and L measures the difference between scaled dissimilarities/distances. The first term quantifies the major transport cost, the second penalty term promotes weight conservation (unbalanced transport) 21 , and the last term preserves the distances within datasets through the mapping (structured transport) 22 .
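As an illustration of equation (1), the mapping can be prototyped with the open-source POT (Python Optimal Transport) library. This is a simplified sketch rather than the actual SpaOTsc implementation: POT's fused Gromov–Wasserstein solver covers the feature term and the structure-preserving term of equation (1) but is balanced (it omits the KL relaxation terms), and the toy random matrices stand in for real expression data.

```python
# Minimal sketch of the SpaOTsc-integration mapping (equation (1)) using POT
# (https://pythonot.github.io). Balanced fused Gromov-Wasserstein is used as a
# stand-in; the full model in equation (1) additionally has KL terms for
# unbalanced transport. All matrices here are toy data.
import numpy as np
import ot

rng = np.random.default_rng(0)
n, m = 60, 40                        # n cells (scRNA-seq), m spatial positions

M = rng.random((n, m))               # expression dissimilarity: cells vs positions
D_sc = ot.dist(rng.random((n, 8)))   # expression dissimilarity among cells
D_spa = ot.dist(rng.random((m, 2)))  # distances among spatial positions

w1 = np.full(n, 1.0 / n)             # weight vectors omega_1, omega_2
w2 = np.full(m, 1.0 / m)

# alpha interpolates between matching expression (alpha -> 0) and preserving
# within-dataset structure (alpha -> 1), mirroring equation (1).
gamma = ot.gromov.fused_gromov_wasserstein(
    M, D_sc, D_spa, w1, w2, loss_fun='square_loss', alpha=0.5)

print(gamma.shape)                   # (60, 40); gamma[i, j] is the mass moved
                                     # from cell i to spatial position j
```

The plan gamma plays the role of γ* in the formulas that follow.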
The spatial cell–cell distance \(\widehat{\mathbf{D}}_{\mathrm{sc}}\) is then computed based on γ* using the optimal transport distance: $$\widehat{\mathbf{D}}_{\mathrm{sc}}(i,j) = \min_{\boldsymbol{\gamma} \in \Gamma} \langle \boldsymbol{\gamma}, \mathbf{D}_{\mathrm{spa}} \rangle_{\mathrm{F}}, \qquad \Gamma = \Big\{ \boldsymbol{\gamma} \in \mathbb{R}_{+}^{m \times m} : \boldsymbol{\gamma}\mathbf{1}^{m} = \boldsymbol{\gamma}^{*}_{i}\big/{\textstyle\sum_{k}}\,\boldsymbol{\gamma}^{*}_{i,k},\ \boldsymbol{\gamma}^{T}\mathbf{1}^{m} = \boldsymbol{\gamma}^{*}_{j}\big/{\textstyle\sum_{k}}\,\boldsymbol{\gamma}^{*}_{j,k} \Big\}$$ (2) One can carry out three major tasks immediately after obtaining γ* and \(\widehat{\mathbf{D}}_{\mathrm{sc}}\): (1) prediction of spatial gene expression at the i th position by \(\sum_{j} \boldsymbol{\gamma}^{*}_{j,i}\,\mathbf{g}_{j} / \sum_{j} \boldsymbol{\gamma}^{*}_{j,i}\), where \(\mathbf{g} \in \mathbb{R}^{n}\) is the expression vector for a gene in the scRNA-seq data; (2) identification of spatially localized cell subclusters by distance-based clustering using \(\widehat{\mathbf{D}}_{\mathrm{sc}}\) within each previously identified cluster; and (3) visualization of scRNA-seq data constrained by cell–cell distances using the distance matrix \(\widehat{\mathbf{D}}_{\mathrm{sc}}\). The intercellular gene–gene regulatory information flow is inferred by using partial information decomposition 28 , 29 , 34 . We estimate how much unique information about a gene (the target gene) can be provided by another gene (the source gene) in its spatial neighborhood through the calculation of the accumulated unique information: $$u(G_{\mathrm{src}}, G_{\mathrm{tar}}, \eta) = \sum_{G \in \mathcal{G}} \mathrm{Unq}_{G}(G_{\mathrm{tar}}; \tilde{G}_{\mathrm{src}})$$ (3) where G tar is the variable for target-gene expression in the cells, \(\tilde{G}_{\mathrm{src}}\) is the variable for source-gene expression in η-neighborhoods of cells, whose observation is estimated using \(\widehat{\mathbf{D}}_{\mathrm{sc}}\), and \(\mathcal{G}\) is a collection of genes with high intracellular correlation with the target gene. The unique information Unq X ( Z ; Y ) measures how much unique information Y provides about Z in addition to X . For the case of intercellular signaling with known ligands, receptors, and their downstream genes, we use random forest models 58 , 59 to infer the spatial distance of signaling. The ligand expressions of cells in a neighborhood of distance η, denoted \(\tilde{L}(\eta)\), together with other genes highly correlated with a downstream target gene of the ligand–receptor interaction, are used as features to fit a random forest model that predicts the target gene. The receptor expressions are used as sample weights. The η under which \(\tilde{L}(\eta)\) has the highest feature importance is considered to be the spatial distance of this signaling.
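Given γ*, the cell–cell distance of equation (2) and the spatial gene-expression prediction of task (1) above are both short computations. The sketch below reuses gamma, D_spa, rng, n and m from the previous snippet and is illustrative only.

```python
# Sketch of equation (2) and of task (1) above, reusing names from the previous
# snippet. Each cell's row of gamma is normalized into a probability
# distribution over space; two cells are then compared via the optimal
# transport cost between their distributions under the spatial cost D_spa.
def cell_cell_distance(i, j, gamma, D_spa):
    p_i = gamma[i] / gamma[i].sum()        # spatial distribution of cell i
    p_j = gamma[j] / gamma[j].sum()        # spatial distribution of cell j
    return ot.emd2(p_i, p_j, D_spa)        # equation (2): OT distance

D_sc_hat = np.array([[cell_cell_distance(i, j, gamma, D_spa)
                      for j in range(n)] for i in range(n)])

# Task (1): predicted expression at position i is the gamma-weighted average
# over cells, sum_j gamma[j, i] * g[j] / sum_j gamma[j, i].
g = rng.random(n)                          # toy expression of one gene per cell
g_spatial = (gamma * g[:, None]).sum(axis=0) / gamma.sum(axis=0)

print(D_sc_hat.shape, g_spatial.shape)     # (60, 60), (40,)
```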
Knowing the ligands, receptors and downstream genes involved in intercellular signaling, together with \(\widehat{\mathbf{D}}_{\mathrm{sc}}\), we then infer cell–cell communication by solving another optimal transport problem. First, the source distribution over the cells, ω L , is constructed to be proportional to the expression of the ligand gene. Next, a destination distribution ω D is constructed based on the expression of receptors and downstream genes to represent the probability of a cell receiving the signal. A cell highly expressing receptors, with downstream genes consistent with the up-/down-regulation relationships (low expression of down-regulated genes and high expression of up-regulated genes), is assigned a high probability. With this information we solve the following optimal transport problem: $$\mathop{\arg\min}\limits_{\boldsymbol{\gamma} \in \mathbb{R}_{+}^{n \times n}} \langle \boldsymbol{\gamma}, \widehat{\mathbf{D}}_{\mathrm{sc}} \rangle_{\mathrm{F}} + \rho\,\big(\mathrm{KL}(\boldsymbol{\gamma}\mathbf{1}^{n}\,|\,\boldsymbol{\omega}_{\mathrm{L}}) + \mathrm{KL}(\boldsymbol{\gamma}^{T}\mathbf{1}^{n}\,|\,\boldsymbol{\omega}_{\mathrm{D}})\big).$$ (4) The optimal transport plan \(\boldsymbol{\gamma}^{*}_{\mathrm{S}}\) is interpreted as the likelihood of cell–cell communications; for example, its ( i , j )th element describes how likely it is that cell j receives a signal from cell i . When spatial distances for signaling are available, we can simply adjust the cost matrix \(\widehat{\mathbf{D}}_{\mathrm{sc}}\) by setting entries greater than this distance to a large number, thereby enforcing a spatial constraint on the identification of communications. When a spatial constraint is applied, long-distance connections are eliminated and new short connections may emerge (Supplementary Figs. 21 , 22 ).
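A sketch of this communication inference follows. POT's entropic unbalanced Sinkhorn solver is used as an approximate stand-in for equation (4) (the exact problem has no entropy term), the weight vectors are toy stand-ins for the ligand- and receptor-derived distributions, and the masking line implements the spatial-constraint adjustment described above. Names are reused from the previous snippets.

```python
# Sketch of equation (4) with POT's entropic unbalanced Sinkhorn solver (an
# approximation: the entropic regularizer reg is not part of equation (4)).
# Reuses D_sc_hat, rng and n from the previous snippets.
w_L = rng.random(n)                     # source weights ~ ligand expression
w_D = rng.random(n)                     # destination weights ~ receptor level and
                                        # downstream-gene consistency

cost = D_sc_hat.copy()
signaling_range = 1.0                   # assumed spatial range for this signal
cost[cost > signaling_range] = 1e6      # spatial constraint: forbid long links

gamma_S = ot.unbalanced.sinkhorn_unbalanced(
    w_L, w_D, cost, reg=0.05, reg_m=1.0)   # reg_m plays the role of rho

# gamma_S[i, j]: likelihood that cell j receives the signal sent by cell i
print(gamma_S.shape)                    # (60, 60)
```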
Datasets and processing For zebrafish embryo, we downloaded the accompanying data files for the Seurat tutorial. The scRNA-seq data are stored in the file "zdata.matrix.txt" and the spatial data (in situ hybridization) are stored in "Spatial_ReferenceMap.xlsx" 13 . The scRNA-seq data are also available through the accession code GEO: GSE66688. We binarized the scRNA-seq data and selected a set of highly variable genes following the same tutorial. For the scRNA-seq data matrix X , a log transformation was performed elementwise, log(1 + X ), for the analyses. Another, more recent scRNA-seq dataset 36 (accession number: GSE112294) was used for the analysis of cell–cell communication. The cells for 6 hpf were extracted, followed by normalization to 10000 total counts per cell and a log1p transform. Genes used for signaling analysis are listed in Supplementary Methods section 3.2. For Drosophila embryo, the scRNA-seq data and the spatial data (in situ hybridization) were downloaded from the Dream Single cell Transcriptomics Challenge through Synapse ID (syn15665609) 10 . The files "bdtnp.txt" and "binarized_bdtnp.csv" were used for numerical and binary spatial data, respectively. The files "dge_normalized.txt" and "dge_binarized_distMap.csv" were used for the numerical and binary scRNA-seq data. The coordinate of each cell in the spatial data was assigned according to the file "geometry.txt". We used Scanpy 60 to select highly variable genes for downstream analysis (the script used is included in the SpaOTsc tutorial files). Genes used for signaling analysis are listed in Supplementary Methods section 3.1. For mouse visual cortex, the spatial data (Spatially-resolved Transcript Amplicon Readout mapping) were downloaded from STARmap Resources 38 . We used the data named "20180505_BY3_1kgenes" from the folder "visual_1020". The scRNA-seq data were downloaded from the Allen Brain Atlas 37 , 61 ; specifically, the file "mouse_VISp_2018-06-14_exon-matrix.csv" was used. The spatial data contain 1020 genes, and quantifying similarity by directly computing correlation coefficients might introduce too much noise and inconsistency across datasets. Therefore, we used the "cca" utility in Seurat 15 , which determines a low-dimensional common space for the two datasets; the script for this processing is included in the SpaOTsc tutorial files. For mouse olfactory bulb, the spatial data by Slide-seq were downloaded from the Broad Institute Single Cell Portal 51 with file ID: 180430_3. The spatial data by RNA seqFISH+ were downloaded from the associated GitHub repository 52 . The scRNA-seq data were downloaded from the supplementary material of the associated publication 53 . The same procedure as for the mouse visual cortex data was used to measure similarities between cells of these two datasets. For the RNA seqFISH+ data with several fields, an initial mapping was done for each sample of the scRNA-seq data and the whole spatial data (all seven fields). The single cells were then assigned to the fields based on this initial mapping, and a separate mapping was carried out for each field. The 39 clusters of the scRNA-seq data were identified using the Scanpy package (PCA + Louvain) 60 . The ligand–receptor pairs were chosen from a ligand–receptor database 3 by using only secreted ligands according to the Human Protein Atlas 62 . The gene symbols were converted from human to mouse using the Mouse Genome Database 63 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The original data used in this paper can be accessed as follows: (1) zebrafish embryo spatial data: downloaded from ref. 13 ; (2) zebrafish embryo scRNA-seq data: GEO accession codes GSE66688 13 (first dataset) and GSE112294 36 (second dataset); (3) Drosophila embryo spatial and scRNA-seq data: accessible at the Dream Single cell Transcriptomics Challenge through Synapse ID (syn15665609) 10 ; (4) mouse visual cortex spatial data: downloaded from STARmap Resources 38 ; (5) mouse visual cortex scRNA-seq data: downloaded from the Allen Brain Atlas 37 , 61 ; (6) mouse olfactory bulb spatial data: Slide-seq data 51 downloaded from the Broad Institute Single Cell Portal and RNA seqFISH+ data 52 downloaded from the associated GitHub repository; (7) mouse olfactory bulb scRNA-seq data: downloaded from the supplementary material of the associated publication 53 . Full tutorials that reproduce the presented results, together with the data used for analysis, can be accessed through the GitHub repository. Code availability An open-source Python implementation of SpaOTsc is available on GitHub. | Researchers at the University of California, Irvine have developed a new mathematical machine-intelligence-based technique that spatially delineates highly complicated cell-to-cell and gene-gene interactions. The powerful method could help with the diagnosis and treatment of diseases ranging from cancer to COVID-19 by quantifying crosstalk between "good" cells and "bad" cells. By combining the mathematical concept known as "optimal transport" with machine learning and information theory, the scientists were able to equip unconnected single cells with spatial information, thereby highlighting communication links between cells or genes. The work is the subject of a new study published today in Nature Communications.
"With this tool, we can identify cross-talk between virus-infected cells and immune cells," said co-author Qing Nie, UCI professor of mathematics and the director of the National Science Foundation-Simons Center for Multiscale Cell Fate Research, which supported the project. "This novel approach may have an immediate application in finding critical cell-to-cell communication links in the lung when the COVID-19 virus attacks." Nie said that accurate disease diagnosis and treatment requires both gene screening and tissue imaging. High-throughput gene profiling at single-cell resolution often requires dissociation of tissues into individual cells, leading to a loss of spatial information. But imaging intact tissues only allows the measurement of a small number of genes. "This new mathematical machine-intelligence method greatly enriches our capability in integrating multiple biomedical datasets," said Nie. "For the very first time, we can reveal how one gene in one cell—for example, in a particular cancer cell—may influence another gene in an immune cell, for instance." He said that he was partly inspired to look into the use of optimal transport, a tool with broad applications, including deep learning, after the 2018 Fields Medal (the mathematics equivalent to the Nobel Prize) was awarded on the topic. | 10.1038/s41467-020-15968-5 |
Physics | Ultra-thin designer materials unlock quantum phenomena | Shawulienu Kezilebieke et al. Topological superconductivity in a van der Waals heterostructure, Nature (2020). DOI: 10.1038/s41586-020-2989-y Journal information: Nature | http://dx.doi.org/10.1038/s41586-020-2989-y | https://phys.org/news/2020-12-ultra-thin-materials-quantum-phenomena.html | Abstract Exotic states such as topological insulators, superconductors and quantum spin liquids are often challenging or impossible to create in a single material 1 , 2 , 3 . For example, it is unclear whether topological superconductivity, which has been suggested to be a key ingredient for topological quantum computing, exists in any naturally occurring material 4 , 5 , 6 , 7 , 8 , 9 . The problem can be circumvented by deliberately selecting the combination of materials in heterostructures so that the desired physics emerges from interactions between the different components 1 , 10 , 11 , 12 , 13 , 14 , 15 . Here we use this designer approach to fabricate van der Waals heterostructures that combine a two-dimensional (2D) ferromagnet with a superconductor, and we observe 2D topological superconductivity in the system. We use molecular-beam epitaxy to grow 2D islands of ferromagnetic chromium tribromide 16 on superconducting niobium diselenide. We then use low-temperature scanning tunnelling microscopy and spectroscopy to reveal the signatures of one-dimensional Majorana edge modes. The fabricated 2D van der Waals heterostructure provides a high-quality, tunable system that can be readily integrated into device structures that use topological superconductivity. The layered heterostructures can be readily accessed by various external stimuli, potentially allowing external control of 2D topological superconductivity through electrical 17 , mechanical 18 , chemical 19 or optical means 20 . Main There has been a surge of interest in designer materials that would have electronic responses not found in naturally occurring materials 1 , 10 , 11 , 12 , 13 , 14 , 15 . Topological superconductors are one of the main targets of these efforts, and they are currently attracting intense attention due to their potential as building blocks for Majorana-based qubits for topological quantum computation 1 , 2 , 3 . Majorana zero-energy modes (MZMs) have been reported in several different experimental systems, the most prominent examples being semiconductor nanowires with strong spin–orbit coupling and ferromagnetic atomic chains proximitized with an s -wave superconductor 1 , 3 , 4 , 5 , 7 , 21 , 22 . It is also possible to realize MZMs in vortex cores on a proximitized topological insulator surface 23 , 24 or on a FeTe 0.55 Se 0.45 superconductor surface 25 , 26 , 27 . In these cases, the MZMs were spectroscopically identified as zero-energy conductance signals that are localized at the ends of the one-dimensional (1D) chain or in the vortex core. The evidence for the Majorana states in these various platforms consists of subgap conductance peaks, and the features occurring at system boundaries and defects are consistent with their topological origin. Unfortunately, different disorder-induced states are known to mimic the Majorana conductance signals, and the status of the observations remains inconclusive. Proof of the existence of Majorana zero modes would be obtained if their non-Abelian exchange statistics or braiding could be demonstrated 1 .
In 2D systems, 1D dispersive chiral Majorana fermions are expected to localize near the edge of the system (Fig. 1a ). For example, it was proposed that the dispersing Majorana states could be created at the edges of an island of magnetic adatoms on the surface of an s -wave superconductor 28 , 29 , 30 . Experimentally, promising signatures of such 1D chiral Majorana modes have recently been reported around nanoscale magnetic islands either buried below a single atomic layer of Pb (ref. 6 ) or adsorbed on a Re substrate 8 , and in domain walls in FeTe 0.55 Se 0.45 (ref. 9 ). However, these types of systems can be sensitive to disorder and may require interface engineering through, for example, the use of an atomically thin separation layer. In addition, it is difficult to incorporate these materials into device structures. Such problems can be circumvented in van der Waals heterostructures, where the different layers interact only through van der Waals forces 10 . These heterostructures naturally allow for very high-quality interfaces, and many practical devices have been demonstrated. Although van der Waals materials with a wide range of properties have been discovered, ferromagnetism was notably absent until the recent discoveries of atomically thin Cr 2 Ge 2 Te 6 (ref. 31 ), CrI 3 (ref. 32 ) and CrBr 3 (refs. 20 , 33 ). The first reports relied on mechanical exfoliation for the sample preparation, but CrBr 3 (ref. 16 ) and Fe 3 GeTe 2 (ref. 34 ) have also been grown by molecular-beam epitaxy (MBE) in ultra-high vacuum, which is essential for producing clean edges and interfaces. Fig. 1: Realization of topological superconductivity in CrBr 3 –NbSe 2 heterostructures. a , Schematic of the experimental set-up. b , c , Schematic of the band-structure engineering used to realize topological superconductivity. Effects of adding spin–orbit interactions and weaker and stronger Zeeman-type magnetization on the low-energy band structure are shown in the normal ( b ) and superconducting ( c ) states. Left panel, with spin–orbit interaction; middle and right panels, spin–orbit interaction plus magnetization. Panels in c correspond to the panels above in b (with the addition of superconductivity). d , STM image of a monolayer-thick CrBr 3 island grown on NbSe 2 using MBE (STM feedback parameters: V bias = +1 V, I = 10 pA; scale bar, 10 nm). e , Atomically resolved image of the CrBr 3 layer (STM feedback parameters: V bias = +1.7 V, I = 0.5 nA; image size, 18.8 × 16.4 nm 2 ). f , Calculated structure and the induced spin-polarization from density-functional theory calculations. g , Experimental dI/dV spectroscopy on the NbSe 2 substrate (blue), the middle of the CrBr 3 island (red) and at the edge of the CrBr 3 island (green), measured at T = 350 mK. Colours correspond to the positions shown in d . ML, monolayer; SC, superconducting. The recently discovered monolayer ferromagnetic transition metal trihalides, combined with transition metal dichalcogenide (TMD) superconductors, are ideal for realizing 2D topological superconductivity (Fig. 1a ). Here, we use MBE to grow a high-quality monolayer of the ferromagnet CrBr 3 on a NbSe 2 superconducting substrate. The mirror symmetry is broken at the interface between the different materials, and the Rashba effect lifts the spin degeneracy. Therefore, we have all the necessary ingredients — magnetism, superconductivity and Rashba spin–orbit coupling — to realize a designer topological superconductor 35 , 36 .
The use of van der Waals heterostructures has considerable advantages. They can potentially be manufactured by simple mechanical exfoliation, the interfaces are naturally very uniform and of high quality, and the structures can be straightforwardly integrated into device structures. Finally, as the layered heterostructures are not buried inside the material, and as the edge modes exist in well-defined and easily identifiable locations, they can be readily accessed by a wide variety of external stimuli, making external control of 2D topological superconductivity potentially possible by electrical 17 , mechanical 18 , chemical 19 and optical approaches 20 . Pioneering theoretical work 35 , 36 demonstrated that topological superconductivity may arise from a combination of out-of-plane ferromagnetism, superconductivity and Rashba-type spin–orbit coupling, as illustrated in Fig. 1b, c . In this scheme, the Rashba coupling lifts the spin degeneracy of the conduction band, while Zeeman splitting due to proximity magnetization lifts the remaining Kramers degeneracy. Adding superconductivity creates a particle–hole symmetric band structure, and the superconducting pairing opens gaps at the Fermi energy. In our theoretical model for magnetically covered NbSe 2 , a similar picture arises for the real band structure around any of the high-symmetry points of the hexagonal Brillouin zone (Γ, K or M) where Rashba coupling vanishes. Depending on the magnitude of the magnetization-induced gap M and the position of the Fermi energy μ , the system enters a topological phase when | ε ( k 0 ) – μ | ≤ M , where ε ( k 0 ) is the energy of the band crossing at the high-symmetry point in the absence of magnetization. This is due to the effective p -wave pairing symmetry created. In Fig. 1d , we show a constant-current scanning tunnelling microscopy (STM) image of a CrBr 3 island grown on a freshly cleaved bulk NbSe 2 substrate by MBE (see Methods for details). The CrBr 3 islands show a well-ordered moiré superstructure with 6.3-nm periodicity arising from the lattice mismatch between the CrBr 3 and the NbSe 2 layers. Figure 1e shows an atomically resolved STM image of the CrBr 3 monolayer, revealing periodically spaced triangular protrusions. These features are formed by the three neighbouring Br atoms, as highlighted in Fig. 1f (red triangle), which shows the fully relaxed geometry of the CrBr 3 /NbSe 2 heterostructure obtained through density functional theory (DFT) calculations (see Methods ). The measured in-plane lattice constant is 6.5 Å, consistent with the recent experimental value (6.3 Å) of monolayer CrBr 3 grown on graphite 16 and with our DFT calculations. DFT calculations further confirm that the CrBr 3 monolayer retains its ferromagnetic ordering, with a magnetocrystalline anisotropy favouring an out-of-plane spin orientation as shown in Fig. 1f . We confirmed the ferromagnetism of the CrBr 3 islands on NbSe 2 experimentally with measurements of the magneto-optical Kerr effect. The magnetization density (red and much smaller blue isosurfaces in Fig. 1f , lower panel) shows that the magnetism arises from the partially filled d orbitals of the Cr 3+ ion. Although the largest magnetization density is found close to the Cr atoms, there is also considerable proximity-induced magnetization of the Nb atoms in the underlying NbSe 2 layer. We probe the emerging topological superconductor phase with scanning tunnelling spectroscopy (STS) measurements at a temperature of T = 350 mK.
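The band-structure scheme of Fig. 1b, c can be made concrete with a minimal Bogoliubov–de Gennes (BdG) toy model: a single parabolic band with Rashba coupling α, out-of-plane exchange field M and s-wave pairing Δ. The parameters below are arbitrary illustrative numbers (not fitted to CrBr3/NbSe2); in this standard toy model, the gap at k = 0 closes and reopens when M crosses sqrt(μ² + Δ²), the analogue of the |ε(k0) − μ| ≤ M criterion above.

```python
# Minimal BdG illustration of the Rashba + Zeeman + s-wave scheme of Fig. 1b, c.
# Basis: (c_k_up, c_k_dn, c_-k_up^dag, c_-k_dn^dag). Parameters are arbitrary.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def bdg(kx, ky, mu, alpha, M, Delta):
    xi = kx**2 + ky**2 - mu                                   # parabolic band, 2m* = 1
    h_k  = xi * s0 + alpha * (ky * sx - kx * sy) + M * sz     # normal block h(k)
    h_mk = xi * s0 + alpha * (-ky * sx + kx * sy) + M * sz    # h(-k)
    D = 1j * Delta * sy                                       # singlet s-wave pairing
    return np.block([[h_k, D], [D.conj().T, -h_mk.conj()]])

def bulk_gap(mu, alpha, M, Delta, kmax=3.0, nk=401):
    # the model is isotropic, so a 1D cut through k-space suffices
    return min(np.abs(np.linalg.eigvalsh(bdg(k, 0.0, mu, alpha, M, Delta))).min()
               for k in np.linspace(-kmax, kmax, nk))

mu, alpha, Delta = 0.5, 0.4, 0.1
for M in (0.0, 0.3, np.hypot(mu, Delta), 0.8):   # gap closes at M = sqrt(mu^2 + Delta^2)
    print(f"M = {M:.3f}  ->  bulk gap = {bulk_gap(mu, alpha, M, Delta):.4f}")
```

The printed gap vanishes at the critical M and reopens beyond it, mimicking the gap closing and reopening between the trivial and topological phases sketched in Fig. 1c.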
Figure 1g shows experimental dI/dV spectra (raw data) taken at the different locations indicated in Fig. 1d (marked by filled circles). The dI/dV spectrum of bare NbSe 2 has a hard gap with an extended region of zero differential conductance around zero bias, which can be fitted by the McMillan two-band model 37 (see Extended Data Fig. 1a ). In contrast, in spectra taken in the middle of the CrBr 3 island, we observe pairs of conductance onsets at ±0.3 mV around zero bias (red arrows). The magnetization causes the formation of energy bands (dubbed Shiba bands) that exist inside the superconducting gap of the substrate 3 , 35 . Spin–orbit interactions can drive the system into a topological phase with an associated closing and reopening of the gap between the Shiba bands. Along the edge of the magnetic island, we observe edge modes consistent with the expected Majorana modes, which are the hallmark of 2D topological superconductivity 3 , 6 , 8 . The spectroscopic feature of the Majorana edge mode appears inside the gap defined by the Shiba bands (the topological gap) and is centred around the Fermi level ( E F ). This is a further indication that the edge states are indeed topological edge modes. A typical spectrum taken at the edge of the CrBr 3 island is shown in Fig. 1g , where a peak localized at E F is clearly seen together with side features stemming from the Shiba bands (detailed analysis shown in Extended Data Fig. 1c–f ). We have developed an effective low-energy model (see Supplementary Information for details) based on earlier work on individual magnetic impurities on a bulk NbSe 2 substrate 38 and DFT calculations. The band structure of the Nb d -states-derived band used in the effective model is shown in Fig. 2a (a direct comparison with DFT is shown in the Supplementary Information ). Topological superconductivity can be generated when the magnetization is sufficiently strong to push one of the spin-degenerate bands at a high-symmetry point above the Fermi energy. We identify the observed topological phase as a state arising from the gap-closing transition at the M point with a Chern number C = 3. Whereas the M point is at ∼ 270 meV below the Fermi level in our tight-binding model (calculated phase diagram shown in Fig. 2b ), it is estimated to be 100 meV below the Fermi level in bulk NbSe 2 (refs. 39 , 40 ). This implies that for a reasonable magnetization of M ≳ 100 meV, we need slight doping to bring the experimental system into the C = 3 state. This is precisely what we observe by following the Nb d bands from bare NbSe 2 onto a CrBr 3 island. There is an upward shift of the NbSe 2 bands of ∼ 80 meV under CrBr 3 (Extended Data Fig. 2 ). The two other non-trivial phases, which originate from gap closings at the Γ point and K points, give rise to states with C = −1 and C = −2. Realization of either of these phases would require notably larger shifts in chemical potential ( ∼ 0.6 eV), making them improbable for the experimental observations. The absolute values of the non-trivial Chern numbers can be understood in terms of a threefold rotational symmetry (see Supplementary Information ). Fig. 2: Electronic structure of CrBr 3 –NbSe 2 heterostructures. a , The band structure of the spin-split Nb d band used in the effective model for topological superconductivity with magnetization M = 100 meV. The inset shows the first Brillouin zone, where the six M points and the Rashba texture around them have been highlighted.
b , Calculated phase diagram of the magnetized NbSe 2 based on the effective low-energy model. The colour scale indicates the energy gap E_gap (in units of Δ). c , The calculated topological gap Δ_t as a function of the Rashba and magnetization energies (in units of the superconducting gap Δ). d , Calculated band structure of the topological phase based on a phenomenological tight-binding model (see Supplementary Information for details). b , lattice parameter. The key quantity characterizing the robustness of the non-trivial phase is the topological energy gap Δ_t. This scale should be much larger than kT (where T is temperature) for the state to be observable in experiment. In the simple parabolic band model, this quantity can be estimated by Δ_t = α_R k_F / [(α_R k_F)^2 + M^2]^(1/2), where α_R is the Rashba coupling and k_F is the Fermi momentum 41 . The calculated gap based on our more realistic tight-binding model is shown in Fig. 2c . Based on the experimental results shown in Fig. 1g , the topological gap is Δ_t ≈ 0.3Δ. The calculated band structure in a strip geometry corresponding to the experimental gap is shown in Fig. 2d , where we see the Majorana edge modes crossing the topological gap. The edge modes are seen to coexist with the bulk states in a finite subgap energy window, in agreement with experimental observations. We have carried out spatially resolved dI/dV spectroscopy across the edge of the CrBr 3 island (Fig. 3a ). The energy dependence of the main feature of the edge-mode local density of states (LDOS) is such that it splits off from the top edge of the topological gap inside the CrBr 3 island, smoothly crosses the topological gap and merges with its lower edge outside the CrBr 3 island. We have recorded grid dI/dV spectroscopy maps (Fig. 3b–g ) to probe the spatial evolution of the edge modes. At E_F, the edge modes are confined within ∼ 2.4 nm of the edge of the island (see Extended Data Fig. 3 ). In addition to the edge-mode signature close to the Fermi level, there is also an enhanced LDOS at energies above the topological gap (Fig. 3e, f ), where we also see noticeable excitations inside the magnetic island. This implies that the edge modes coexist with Shiba bands at energies higher than the topological gap. The theoretically computed LDOS (Fig. 3h–n ; see Supplementary Information for details) reproduces the essential features of the experimental results. In addition to the universal signatures of topological superconductivity, our theoretical model reproduces a number of experimentally observed system-specific characteristics, including (1) the correct edge-mode penetration depth, which is orders of magnitude smaller than simple estimates (similar to the case of 1D Fe wires on Pb; refs. 5 , 42 ); (2) the specific form of the subgap LDOS, which depends on the system-specific dispersion of the topological edge modes; and (3) a non-generic coexistence (that is, a coexistence that depends on system-specific details) of the topological edge modes and bulk states in a substantial energy window. Finally, we confirm the experimental finding that the distribution of the spectral weight of the edge mode along the edge is non-uniform. This stems from the geometric irregularities of the island boundary, which have a characteristic length scale comparable to the edge-mode penetration depth. However, the edge modes are not discontinuous along the edge; interference effects near edge irregularities suppress the visibility of the edge mode owing to finite experimental resolution.
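As a quick numerical reading of the gap estimate quoted above (with the dimensionless expression interpreted in units of Δ, consistent with Δ_t ≈ 0.3Δ from Fig. 1g), a few assumed energy scales can be plugged in; the combinations below are illustrative only, not values extracted from the experiment.

```python
# Parabolic-band estimate Delta_t / Delta = aRkF / sqrt(aRkF^2 + M^2) for a few
# assumed (illustrative) combinations of the Rashba energy aRkF = alpha_R * k_F
# and the magnetization M, both in meV.
import numpy as np

for aRkF, M in [(10, 50), (30, 100), (50, 150), (100, 100)]:
    dt = aRkF / np.hypot(aRkF, M)          # in units of the parent gap Delta
    print(f"alpha_R*k_F = {aRkF:4d} meV, M = {M:4d} meV -> Delta_t = {dt:.2f} Delta")
```

For instance, alpha_R*k_F ≈ 30 meV with M ≈ 100 meV gives Δ_t ≈ 0.29Δ, in the range of the measured 0.3Δ.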
Fig. 3: Spatially resolved spectroscopy of the Majorana zero modes. a , dI/dV spectroscopy across the edge of the CrBr 3 island (STM topography shown on the top). b – g , STM topography and spatially resolved dI/dV maps extracted from grid spectroscopy experiments. STM feedback parameters: a , V bias = +1 V, I = 10 pA; b , V bias = +0.8 V, I = 10 pA. Scale bars: a , 4 nm; b , 12 nm. h – n , Corresponding calculated LDOS across the edge ( h ) and LDOS maps ( j – n ) with the island shape shown in i (see Supplementary Information for details). The experimentally observed edge modes could have a topologically trivial origin. However, in addition to the near-quantitative match with the theoretical results that incorporate the main ingredients of the experimental system, the edge-mode signature is experimentally very robust. We consistently observe it in our hybrid van der Waals heterostructures on all CrBr 3 islands, irrespective of their specific size and shape (Extended Data Fig. 4 ). Based on theoretical considerations, to observe the modes at all subgap energies, the linear dimension of the islands should be much larger than the edge-mode penetration depth. This condition is clearly fulfilled in our experimental results. To prove that the observed edge modes of the hybrid heterostructures are strongly linked to the superconductivity of the NbSe 2 substrate, we have carried out experiments in magnetic fields up to 4 T, suppressing superconductivity in the NbSe 2 substrate. All features associated with the gap at the centre of the island and with the edge modes disappear in the absence of superconductivity in NbSe 2 (Extended Data Fig. 5 ). This rules out trivial edge modes as the cause of the observed results. Another non-topological source of resonances close to the Fermi energy, the Kondo effect, should also be present in the normal state and can hence be ruled out. In conclusion, our work constitutes two advances in designer quantum materials. First, by fabricating van der Waals heterostructures with a 2D ferromagnet epitaxially coupled to superconducting NbSe 2 , we obtained a near-ideal, high-quality and defect-free designer structure exhibiting two competing electronic orders. The induced magnetization and spin–orbit coupling render the superconductor topologically non-trivial, supporting Majorana edge channels, which we characterized by STM and STS measurements. Second, the heterostructure demonstrated here can be combined with well established and mature technologies for fabricating electrical devices, an essential step towards practical implementation. The system would in principle also allow electrical control of the topological phase through electrostatic tuning of the chemical potential. Methods Experimental The CrBr 3 thin film was grown on a freshly cleaved NbSe 2 substrate by compound-source molecular beam epitaxy (MBE). Anhydrous CrBr 3 powder of 99% purity was evaporated from a Knudsen cell. Before growth, the cell was degassed at up to the growth temperature of 350 °C until the vacuum was better than 1 × 10^−8 mbar. The growth speed was determined by checking the coverage of the as-grown samples by STM. The optimal substrate temperature for the growth of CrBr 3 monolayer films was ∼ 270 °C.
After the sample preparation, the sample was inserted into the low-temperature STM (Unisoku USM-1300) housed in the same ultra-high vacuum system, and all subsequent experiments were performed at T = 350 mK. STM images were taken in the constant-current mode. dI/dV spectra were recorded by standard lock-in detection while sweeping the sample bias in an open feedback loop configuration, with a peak-to-peak bias modulation of 30–50 μV at a frequency of 707 Hz. Spectra from grid spectroscopy experiments were normalized by the normal-state conductance, that is, dI/dV at a bias voltage corresponding to a few times the superconducting gap. Density functional theory calculations Calculations were performed with the DFT methodology as implemented in the periodic plane-wave-basis VASP code 43 , 44 . Atomic positions and lattice parameters were obtained by fully relaxing all structures using the spin-polarized Perdew–Burke–Ernzehof (PBE) functional 45 including Grimme's semiempirical DFT-D3 scheme for dispersion correction 46 , which is important to describe the van der Waals interactions between the CrBr 3 and the NbSe 2 layers. The interactions between electrons and ions were described by PAW pseudopotentials, where the 4s and 4p shells were added explicitly as semicore states for Nb, and the 3p shells were added for Cr. An energy cut-off of 550 eV was used to expand the wavefunctions, and systematic k -point convergence was checked for all structures, with the sampling chosen according to each system size. For all systems, the total energy was converged to the order of 10^−4 eV. The convergence criterion of the self-consistent field computation was set to 10^−5 eV, and the threshold for the largest force acting on the atoms was set to less than 0.012 eV Å^−1. A vacuum layer of 12 Å was added to avoid mirror interactions between periodic images. Spin polarization was considered in all calculations, where we set an initial out-of-plane magnetization of 3 μB (where μB is the Bohr magneton) per Cr atom and 0 otherwise. Band structures were calculated with and without spin–orbit coupling (SOC) effects, and a band-unfolding procedure was performed when necessary using the BandUP code 47 , 48 . Extended Data Fig. 6 shows the top and side views of the CrBr 3 and NbSe 2 monolayers, for which the optimized lattice parameters are 6.370 Å and 3.455 Å, respectively. To model the CrBr 3 –NbSe 2 heterostructures shown in Fig. 1 , the CrBr 3 is vertically stacked on a 2 × 2 supercell of NbSe 2 . As there is a mismatch of 8.5% between their lattice parameters, the CrBr 3 was rescaled before the full optimization, and three different stackings were considered, namely htCrSe, htCrNbSe and htCrNb (Extended Data Fig. 6c–e , respectively; Extended Data Table 1 ; ht, heterostructure). In htCrSe, one Cr atom is located on top of a Se 2 pair, while the other Cr atom is on top of the hollow site of the NbSe 2 . Similarly, htCrNb has one Cr on top of the Nb and the other Cr on top of the hollow site. For htCrNbSe, one Cr is located on top of the Se 2 pair, while the other Cr is on top of the Nb. As can be seen from the data in Extended Data Table 1 , the fully relaxed lattice parameters L of the heterostructures reveal that the CrBr 3 is strained by about 7%, while the NbSe 2 is compressed by less than 2%; these deformations are small enough that they should not significantly affect the electronic and magnetic properties 49 , 50 .
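For reference, the relaxation settings listed above might be collected into a single calculator definition. The sketch below uses the ASE interface to VASP; the INCAR tags mirror the stated values, while the k-point mesh and the relaxation scheme (IBRION/ISIF) are assumptions, since the text only states that sampling was converged per system.

```python
# Sketch of the reported relaxation set-up via ASE's VASP calculator. Tags
# mirror the values quoted in the text; kpts and ibrion/isif are assumed.
from ase.calculators.vasp import Vasp

calc = Vasp(
    xc='pbe',          # spin-polarized PBE functional
    encut=550,         # plane-wave cutoff (eV)
    ivdw=11,           # Grimme DFT-D3 dispersion correction
    ispin=2,           # spin polarization
    ediff=1e-5,        # SCF convergence criterion (eV)
    ediffg=-0.012,     # force threshold (eV/Angstrom; negative = force criterion)
    ibrion=2, isif=3,  # assumed scheme to relax positions and lattice parameters
    kpts=(12, 12, 1),  # assumed mesh; the paper converged sampling per system
)

# The calculator would be attached to an Atoms object of the heterostructure
# (with a 12 A vacuum layer), with initial moments of 3 mu_B on Cr and 0 otherwise:
# atoms.set_initial_magnetic_moments(
#     [3.0 if s == 'Cr' else 0.0 for s in atoms.get_chemical_symbols()])
```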
The binding energies E_b in Extended Data Table 1 reveal that htCrSe is the most energetically stable. For htCrSe and htCrNbSe, the energy difference between the stacking configurations is relatively small (3.4 meV), and their layer–layer distances d are also comparable. htCrNb is the stacking configuration with the highest layer–layer distance among the three stackings considered, resulting in a higher binding energy. The band structures of the isolated monolayers of CrBr 3 and NbSe 2 are shown in the left and middle panels of Extended Data Fig. 7 , respectively, while the right panel shows the band structure of htCrSe. CrBr 3 has an out-of-plane magnetization of 6.0 μB and is a semiconductor with an indirect bandgap of 1.39 eV for the spin-up and 2.63 eV for the spin-down channel. On the other hand, NbSe 2 has a partially filled, isolated and spin-degenerate band called the d band because of the large contributions coming from the Nb d orbitals 51 . The middle panel of Extended Data Fig. 7 shows both the band structure of the 2 × 2 NbSe 2 isolated monolayer, which is the supercell used for the CrBr 3 –NbSe 2 heterostructures, and the band structure of the NbSe 2 primitive cell in the inset. Considering the band structure of the htCrSe structure in the right panel of Extended Data Fig. 7 , apart from a bandgap reduction of the CrBr 3 spin-polarized band structure (which is attributed to its strained configuration 49 ), the bands of the 2 × 2 NbSe 2 are well preserved within the bandgaps of the CrBr 3 . However, a small spin splitting is observed in these in-gap states due to an induced magnetization on the NbSe 2 layer. Similar to previous work on CrBr 3 –TMD heterostructures 52 , 53 , our results show that the magnetization of htCrSe (6.097 μB) is slightly larger than the magnetization of CrBr 3 (6.000 μB). From now on, we discuss in detail the spin splitting of the in-gap bands from the NbSe 2 shown in Extended Data Fig. 8a . Extended Data Fig. 8b shows the comparison between the bands of the pristine 2 × 2 NbSe 2 layer and the bands of htCrSe, both around the Γ point, where two sets of spin-polarized bands are clearly seen, with spin splittings of 29 meV and 7 meV. Two sets of bands are obtained because each Nb atom couples differently to the CrBr 3 layer in the heterostructure. The unfolded bands in Extended Data Fig. 8c show that the overall effect of the magnetization is to push the spin-down band above the spin-up band, similar to other CrBr 3 –TMD heterostructures 52 , 53 . There is a discontinuity at the M point, as the Nb atoms are no longer identical in htCrSe (note that the Γ point in the Brillouin zone (BZ) of 2 × 2 NbSe 2 is equivalent to the Γ and M points in the BZ of the NbSe 2 primitive cell). As the bands of htCrSe around the Fermi level have a major contribution from the d electrons of the Nb atoms, SOC effects can strongly affect their dispersion. Extended Data Fig. 8d shows the band structure of htCrSe with SOC. Extended Data Fig. 8e shows the comparison between the bands of a pristine 2 × 2 NbSe 2 layer and the bands of htCrSe, both around the Γ point and also considering SOC. There is no SOC splitting along Γ–M, whereas a large splitting is obtained along the K–Γ line in the pristine NbSe 2 case 51 , pushing the spin-up bands above the spin-down bands. For htCrSe, the spin splitting due to SOC is also observed along the K–Γ line, while the bands along Γ–M are spin-polarized owing to the induced magnetization. The unfolded bands shown in Extended Data Fig.
8f reveal that the bands along Γ–M are indeed spin-polarized owing to the induced magnetization (although a small reduction of the splitting is observed when SOC is considered), with the spin-down band above the spin-up band, while the bands along the M–K line have an inverted splitting induced by SOC. The band structures shown in Extended Data Fig. 8d and e were obtained considering three different effects simultaneously: induced magnetization, SOC, and the different coupling between each Nb atom and the CrBr 3 layer. Mainly owing to the non-equivalence of the Nb atoms, many states are observed crossing the Γ point in the band structure of htCrSe, as an extra splitting is observed in the spin-polarized bands. However, we stress here that the magnetic bandgap at the M point is kept when SOC is taken into account (see Extended Data Fig. 8f ), and the spin inversion observed near the M point in the unfolded band structure is compatible with the spin inversion in the Rashba effect 54 , 55 , 56 , 57 . A precise estimate of the Rashba effect would require the inclusion of SOC effects in a large supercell structure where all Nb atoms couple equally to the CrBr 3 layer, or at least one large enough to average the coupling from each Nb atom, which is beyond current computational capability. However, judging by the sizeable spin-polarized bandgap at the M point when SOC is considered and the observed spin inversion, we conclude that the Rashba effect is of the same order of magnitude as the spin splitting due to the induced magnetization. Note that our DFT calculations show that all states between the valence-band maximum and conduction-band minimum of the CrBr 3 come from the metallic substrate, which is expected for such a weakly interacting system. Estimates of the electron and hole barrier heights based on our DFT calculations 20 , 58 , 59 indicate that there is no spontaneous charge transfer either from or to the metallic substrate. Data availability All the data supporting the findings are available from the corresponding authors upon request. The results of the DFT calculations are available in the NOMAD repository.
One catch: no evidence for their existence has ever been seen, either in the lab or in astronomy. Instead of attempting to make a particle that no one has ever seen anywhere in the universe, researchers instead try to make regular electrons behave like them. To make MZMs, researchers need incredibly small materials, an area in which Professor Liljeroth's group at Aalto University specializes. MZMs are formed by giving a group of electrons a very specific amount of energy, and then trapping them together so they can't escape. To achieve this, the materials need to be 2-dimensional, and as thin as physically possible. To create 1D MZMs, the team needed to make an entirely new type of 2-D material: a topological superconductor. Topological superconductivity is the property that occurs at the boundary of a magnetic electrical insulator and a superconductor. To create 1D MZMs, Professor Liljeroth's team needed to be able to trap electrons together in a topological superconductor; however, it's not as simple as sticking any magnet to any superconductor. "If you put most magnets on top of a superconductor, you stop it from being a superconductor," explains Dr. Shawulienu Kezilebieke, the first author of the study. "The interactions between the materials disrupt their properties, but to make MZMs, you need the materials to interact just a little bit. The trick is to use 2-D materials: they interact with each other just enough to make the properties you need for MZMs, but not so much that they disrupt each other." The property in question is the spin. In a magnetic material, the spin is aligned all in the same direction, whereas in a superconductor the spin is anti-aligned with alternating directions. Bringing a magnet and a superconductor together usually destroys the alignment and anti-alignment of the spins. However, in 2-D layered materials the interactions between the materials are just enough to "tilt" the spins of the atoms enough that they create the specific spin state, called Rashba spin-orbit coupling, needed to make the MZMs. Finding the MZMs The topological superconductor in this study is made of a layer of chromium bromide, a material which is still magnetic when only one atom thick. Professor Liljeroth's team grew one-atom-thick islands of chromium bromide on top of a superconducting crystal of niobium diselenide, and measured their electrical properties using a scanning tunneling microscope. At this point, they turned to the computer modeling expertise of Professor Adam Foster at Aalto University and Professor Teemu Ojanen, now at Tampere University, to understand what they had made. "There was a lot of simulation work needed to prove that the signal we're seeing was caused by MZMs, and not other effects," says Professor Foster. "We needed to show that all the pieces fitted together to prove that we had produced MZMs." Now that the team is sure they can make 1D MZMs in 2-dimensional materials, the next step will be to attempt to make them into topological qubits. This step has so far eluded teams who have already made 0-dimensional MZMs, and the Aalto team are unwilling to speculate on whether the process will be any easier with 1-dimensional MZMs; however, they are optimistic about the future of 1D MZMs. "The cool part of this paper is that we've made MZMs in 2-D materials," said Professor Liljeroth. "In principle these are easier to make and easier to customize the properties of, and ultimately make into a usable device."
The paper, Topological superconductivity in a van der Waals heterostructure, was published in Nature on 17 December. | 10.1038/s41586-020-2989-y
Biology | Plant science discovery may help treat allergies and immune deficiencies | Xiyu Ma et al. Ligand-induced monoubiquitination of BIK1 regulates plant immunity, Nature (2020). DOI: 10.1038/s41586-020-2210-3 Journal information: Nature | http://dx.doi.org/10.1038/s41586-020-2210-3 | https://phys.org/news/2020-05-science-discovery-allergies-immune-deficiencies.html | Abstract Recognition of microbe-associated molecular patterns (MAMPs) by pattern recognition receptors (PRRs) triggers the first line of inducible defence against invading pathogens 1 , 2 , 3 . Receptor-like cytoplasmic kinases (RLCKs) are convergent regulators that associate with multiple PRRs in plants 4 . The mechanisms that underlie the activation of RLCKs are unclear. Here we show that when MAMPs are detected, the RLCK BOTRYTIS -INDUCED KINASE 1 (BIK1) is monoubiquitinated following phosphorylation, then released from the flagellin receptor FLAGELLIN SENSING 2 (FLS2)–BRASSINOSTEROID INSENSITIVE 1-ASSOCIATED KINASE 1 (BAK1) complex, and internalized dynamically into endocytic compartments. The Arabidopsis E3 ubiquitin ligases RING-H2 FINGER A3A (RHA3A) and RHA3B mediate the monoubiquitination of BIK1, which is essential for the subsequent release of BIK1 from the FLS2–BAK1 complex and activation of immune signalling. Ligand-induced monoubiquitination and endosomal puncta of BIK1 exhibit spatial and temporal dynamics that are distinct from those of the PRR FLS2. Our study reveals the intertwined regulation of PRR–RLCK complex activation by protein phosphorylation and ubiquitination, and shows that ligand-induced monoubiquitination contributes to the release of BIK1 family RLCKs from the PRR complex and activation of PRR signalling. Main Prompt activation of PRRs upon microbial infection is essential for hosts to defend against pathogen attacks 1 , 2 , 3 . The Arabidopsis BIK1 family of RLCKs are immune regulators associated with multiple PRRs, including the bacterial flagellin receptor FLS2 and the BAK1 and SERK family co-receptors 5 , 6 . Upon ligand perception, BIK1 is phosphorylated by BAK1 and subsequently dissociates from the FLS2–BAK1 complex 7 . Downstream of the PRR complex, BIK1 phosphorylates plasma-membrane-resident NADPH oxidases to regulate the production of reactive oxygen species (ROS) 8 , 9 , and phosphorylates the cyclic nucleotide-gated channels to trigger a rise in cytosolic calcium 10 . However, it remains unclear how the activation of BIK1 and its dynamic association with the PRR complex is regulated. Ligand-induced increase in BIK1 puncta BIK1–GFP localized both to the periphery of epidermal pavement cells and to intracellular puncta in Arabidopsis transgenic plants expressing functional 35S::BIK1-GFP analysed by spinning disc confocal microscopy (SDCM) (Fig. 1a , Extended Data Fig. 1a, b ). BIK1–GFP colocalized with the FM4-64-stained plasma membrane (Fig. 1b ), and frequently within endosomal compartments (Fig. 1b ). Time-lapse SDCM showed that BIK1–GFP puncta were highly mobile, disappearing, appearing, and moving rapidly in and out of the plane of view (Extended Data Fig. 1c ). The abundance of BIK1–GFP puncta increased over time (3–17 and 18–32 min) after treatment with the flagellin peptide flg22 (Fig. 1c , Extended Data Fig. 1d–p ). The timing of the ligand-induced increase in BIK1–GFP puncta differed from that of the increase in FLS2–GFP puncta, which were significantly increased 35 min after flg22 treatment 11 , 12 , 13 (Fig. 1d ). 
Ligand-induced endocytosis of FLS2 contributes to the degradation of the activated FLS2 receptor and attenuation of signalling 11 , 12 , 13 , 14 , whereas increased abundance of BIK1–GFP puncta precedes that of FLS2–GFP (Fig. 1c, d ). Fig. 1: MAMP-induced BIK1 endocytosis and monoubiquitination. a , BIK1–GFP localizes to the cell periphery and intracellular puncta in maximum intensity projections of cotyledon epidermal cells (dashed box expanded in insert). Scale bar, 10 μm. b , BIK1–GFP colocalizes with FM4-64 in the plasma membrane (asterisk) and intracellular puncta (arrowheads). Scale bar, 5 μm. Pearson's correlation coefficient for BIK1–GFP and FM4-64 is 0.55 ± 0.14 ( n = 35). c , d , BIK1 and FLS2 puncta increase after treatment with 1 μM flg22. Mean ± s.e.m. overlaid on dot plots. n = 56, 48, 49, 47 images for 0, 3–17, 18–32, 33–45 min of treatment, respectively, for BIK1–GFP ( c ) and n = 24, 15, 21, 36, 34, 39, 39 images for 0, 5–15, 20–30, 35–45, 50–60, 65–75, 80–90 min of treatment, respectively, for FLS2–GFP ( d ). Scale bar, 5 μm (one-way analysis of variance (ANOVA)). e , Flg22 induces BIK1 monoubiquitination. Protoplasts from wild-type plants were transfected with plasmids expressing BIK1-HA and FLAG-UBQ , and were treated with 100 nM flg22 for 30 min. After immunoprecipitation (IP) with anti-FLAG agarose, ubiquitinated BIK1 was detected by immunoblot (IB) using anti-HA antibodies (lanes 1 and 2) or treated with GST–USP2-cc (lane 3). Heat-inactivated (HI) USP2-cc was used as a control (lane 4). Bottom panel shows BIK1–HA protein expression. Numbers on left show molecular mass (kDa). f , Time-course of flg22-induced BIK1 phosphorylation and ubiquitination. Protoplasts expressing FLAG–UBQ and BIK1–HA were treated with 100 nM flg22 for the indicated times. BIK1 band intensities were quantified using Image Lab (Bio-Rad). Quantification of BIK1 phosphorylation (under bottom panel) calculated as ratio of intensity of the upper band (pBIK1) to the sum intensities of shifted and non-shifted bands (pBIK1 + BIK1). Quantification of BIK1 ubiquitination (under top panel) calculated as relative intensity (fold change) of Ub–BIK1 bands (no treatment set to 1.0). g , BIK1 variants with impaired phosphorylation show compromised flg22-induced ubiquitination. All experiments were repeated at least three times with similar results. Ligand-induced BIK1 monoubiquitination Ligand-induced FLS2 degradation is mediated by the U-box E3 ligases PUB12 and PUB13, which polyubiquitinate FLS2 15 , 16 , 17 . We tested whether BIK1 is ubiquitinated upon treatment with flg22 using an in vivo ubiquitination assay in Arabidopsis protoplasts that co-expressed FLAG epitope-tagged ubiquitin (FLAG–UBQ) and haemagglutinin (HA) epitope-tagged BIK1 (Fig. 1e , Extended Data Fig. 2a ). Treatment with flg22 induced ubiquitination of BIK1 (Fig. 1e ), as ubiquitinated BIK1 was detected by an anti-HA immunoblot upon immunoprecipitation with an anti-FLAG antibody. Flg22 also induced ubiquitination of BIK1 in pBIK1::BIK1-HA transgenic plants (Extended Data Fig. 2b ). The strong and discrete band of ubiquitinated BIK1 indicates monoubiquitination (Fig. 1e , Extended Data Fig. 2a, b ), in contrast to the ladder-like smear of protein migration that indicates polyubiquitination of BAK1 and FLS2 (Extended Data Fig. 2c, d ).
The apparent molecular mass of ubiquitinated BIK1 (about 52 kDa) is around 8 kDa larger than that of unmodified BIK1 (44 kDa), consistent with the attachment of a single ubiquitin to BIK1. Incubation with the catalytic domain of the mouse deubiquitinase USP2 (USP2-cc), but not its heat-inactivated form, reduced the molecular mass by about 8 kDa (Fig. 1e ). We observed a similar pattern of ubiquitination of BIK1 when we used the UBQ(K0) variant, in which all seven lysine residues in UBQ were changed to arginine, thus preventing the formation of polyubiquitination chains (Extended Data Fig. 2e, f ). Notably, flg22-induced ubiquitination of BIK1 was blocked by treatment with the ubiquitination inhibitor PYR-41, but not by the proteasome inhibitor MG132, and was not observed in fls2 or bak1-4 mutants (Extended Data Fig. 2g–i ). In addition to flg22, other MAMPs—including elf18, pep1, and chitin—also induced monoubiquitination of BIK1 (Extended Data Fig. 2j ), in line with the notion that BIK1 is a convergent component downstream of multiple PRRs 4 . Monoubiquitination of the BIK1 family RLCKs PBL1 and PBL10, but not of another RLCK, BSK1, was enhanced upon treatment with flg22 (Extended Data Fig. 2k, l ), suggesting that detection of MAMPs induces monoubiquitination of BIK1 family RLCKs. Upon flg22 perception, BIK1 is phosphorylated 5 , 6 , as shown by an immunoblot mobility shift within 1 min with a plateau around 10 min (Fig. 1f ). However, flg22-induced ubiquitination of BIK1 becomes apparent only 10 min after treatment and reaches a plateau around 30 min (Fig. 1f ), suggesting that flg22-induced ubiquitination of BIK1 may occur after its phosphorylation. BIK1 phosphorylation-deficient mutants, including a kinase-inactive mutant (BIK1(KM)) and two phosphorylation site mutants (BIK1(T237A) and BIK1(Y250A)) showed largely compromised flg22-induced ubiquitination (Fig. 1g ). In addition, the kinase inhibitor K252a blocked flg22-induced ubiquitination of BIK1 (Extended Data Fig. 3a ). Plasma membrane localization is required for BIK1 ubiquitination, as BIK1(G2A), which bears a mutation of the myristoylation motif that is essential for plasma membrane localization, was not ubiquitinated upon flg22 treatment (Extended Data Fig. 3b, c ). Together, these data suggest that flg22-induced phosphorylation of BIK1 is a prerequisite for its monoubiquitination at the plasma membrane. BIK1 ubiquitination by RHA3A and RHA3B There are 30 lysine residues in BIK1, each of which could potentially be ubiquitinated. We individually mutated 28 lysine residues to arginine (except for K105 and K106, which are located in the ATP-binding pocket and are required for kinase activity), and screened the mutants for flg22-induced ubiquitination. None of the individual K-to-R mutants blocked the ubiquitination of BIK1 without altering its kinase activity (Extended Data Fig. 3d ). BIK1(K204R), in which flg22-induced BIK1 monoubiquitination was compromised, also showed reduced phosphorylation in vivo and in vitro (Extended Data Fig. 3d, e ). To identify BIK1-associated regulators, we carried out a yeast two-hybrid screen using BIK1(G2A) as bait, and identified RHA3A ( AT2G17450 ), which encodes a functionally uncharacterized E3 ubiquitin ligase with a RING-H2 finger domain and an N-terminal transmembrane domain (Fig. 2a ). We confirmed that BIK1 interacts with RHA3A using an in vitro pull-down assay (Fig. 2b ), an in vivo co-immunoprecipitation (co-IP) assay in Arabidopsis protoplasts (Extended Data Fig. 
4a ), and co-IP in transgenic plants that expressed both BIK1 and RHA3A under their native promoters (Fig. 2c , Extended Data Fig. 4b ). RHA3B (which is encoded by AT4G35480 ) is the closest homologue of RHA3A, bearing 66% amino acid identity (Fig. 2a ); RHA3B also co-immunoprecipitated with BIK1 (Extended Data Fig. 4c ). Flg22 treatment did not affect the interaction between BIK1 and RHA3A or RHA3B (called RHA3A/B henceforth) (Extended Data Fig. 4a, c ). Moreover, RHA3A/B co-immunoprecipitated with FLS2 (Extended Data Fig. 4d ). Fig. 2: The E3 ligases RHA3A/B interact with and monoubiquitinate BIK1. a , Domain organization of RHA3A/B. TD, transmembrane domain; RING, E3 catalytic domain; RHA3A CD , cytoplasmic domain. Amino-acid positions and the sequence of RING domain are shown. Cysteine and histidine residues that coordinate zinc are underlined. Asterisk shows the isoleucine residue that is involved in the E2–RING interaction. b , BIK1 interacts with RHA3A. GST or GST–BIK1 proteins immobilized on glutathione sepharose beads were incubated with maltose-binding protein (MBP) or MBP–RHA3A CD –HA proteins. Washed beads were subjected to immunoblotting with anti-MBP or anti-GST (top two panels). Input proteins are shown by immunoblotting (middle two panels) and Coomassie blue (CBB) staining (bottom). c , BIK1 associates with RHA3A. Transgenic plants carrying pBIK1::BIK1-HA and pRHA3A::RHA3A-FLAG (lines 7 and 10) were used for co-IP assay with anti-FLAG agarose and immunoprecipitated proteins were immunoblotted with anti-HA or anti-FLAG (top two panels). Bottom two panels, expression of BIK1–HA and RHA3A–FLAG. d , RHA3A ubiquitinates BIK1. GST–RHA3A CD or its I104A mutant was used in a ubiquitination reaction containing GST–BIK1–HA, E1, E2, and ATP. e , RHA3A/B are required for ubiquitination of BIK1. rha3a/b and rha3a plants were used for protoplast isolation followed by transfection with plasmids expressing BIK1-HA and FLAG-UBQ . The experiments were repeated three times with similar results. Source Data Full size image An in vitro ubiquitination assay showed that RHA3A had autoubiquitination activity and monoubiquitinated itself (Extended Data Fig. 5a, b ). Notably, glutathione- S -transferase (GST)–RHA3A, but not GST–RHA3A(I104A), in which a conserved isoleucine residue had been substituted, monoubiquitinated GST–BIK1–HA, as shown on immunoblots by an additional discrete band that migrated with an approximately 8-kDa increase in molecular mass (Fig. 2d ). The available rha3a and rha3b transfer DNA (T-DNA) insertion lines did not show a significant reduction in expression of the corresponding transcripts (Extended Data Fig. 5c ). We therefore generated artificial microRNAs (amiRNAs) of RHA3A/B 18 . Co-expression of amiR-RHA3A and amiR-RHA3B , but not of amiR-RHA3A alone, suppressed flg22-induced monoubiquitination of BIK1 in protoplast transient assays (Extended Data Fig. 5d, e ). Flg22-induced BIK1 monoubiquitination, but not phosphorylation, was also reduced in transgenic plants expressing amiR-RHA3A and amiR-RHA3B driven by the native promoters (Extended Data Fig. 5f, g ). We also generated rha3a and rha3a/b mutants using the CRISPR–Cas9 system (Extended Data Fig. 5h ). Flg22-induced monoubiquitination of BIK1 was reduced in the rha3a/b mutant (Fig. 2e ). These data indicate that RHA3A/B modulate flg22-induced monoubiquitination of BIK1. 
Sites of RHA3A-mediated BIK1 ubiquitination To identify sites of RHA3A-mediated BIK1 ubiquitination, we performed liquid chromatography-tandem mass spectrometry (LC–MS/MS) analysis of in vitro ubiquitinated BIK1. Among the ten lysine residues identified (Fig. 3a, b, Extended Data Fig. 6a–i), K106 (which resides in the ATP-binding pocket) blocked BIK1 kinase activity when mutated 7. Among the other nine lysine sites, all six lysines (K95, K170, K186, K286, K337, and K358) for which structural information is available 19 are located on the surface of BIK1 (Fig. 3c). Furthermore, six ubiquitinated lysine residues were detected by LC–MS/MS of in vivo ubiquitinated BIK1–GFP upon treatment with flg22, and they all overlapped with those detected during in vitro RHA3A–BIK1 ubiquitination reactions (Extended Data Fig. 7a–h). Individual lysine mutations did not affect ubiquitination of BIK1 in vivo (Extended Data Fig. 3d), whereas combined mutations of the N-terminal five lysines (BIK1(N5KR)) or C-terminal four lysines (BIK1(C4KR)) partially compromised flg22-induced BIK1 ubiquitination. Mutation of all nine lysines in BIK1(9KR) largely blocked flg22-induced BIK1 monoubiquitination in vivo (Fig. 3d) and RHA3A-mediated in vitro ubiquitination (Fig. 3e). BIK1(9KR) showed similar activities to BIK1 with regard to its in vitro kinase activity (Fig. 3f), flg22-induced BIK1 phosphorylation, and association with RHA3A in protoplasts (Extended Data Fig. 8a, b). Furthermore, 35S::BIK1 9KR -HA/WT transgenic plants showed normal flg22-induced MAPK activation and ROS production (Extended Data Fig. 8c, d). Collectively, the data indicate that RHA3A monoubiquitinates BIK1 and that phosphorylation of BIK1 does not require monoubiquitination. Notably, BIK1 monoubiquitination may not be restricted to a single lysine, and multiple lysine residues could serve as monoubiquitin conjugation sites. Alternatively, monoubiquitination might be the primary form of modification of BIK1, whereas polyubiquitinated BIK1 could be short-lived. Fig. 3: Identification of sites of RHA3A-mediated BIK1 ubiquitination. a, BIK1 is ubiquitinated by RHA3A at multiple lysine residues. Ubiquitinated lysine residues with a diglycine remnant identified by LC–MS/MS analysis are shown in red with amino-acid positions. b, MS/MS spectrum of the peptide containing K358. c, Structure of BIK1 with the six lysines identified as ubiquitination sites shown. Structural information was obtained from the Protein Data Bank (PDB ID: 5TOS) and analysed by PyMOL. d, BIK1(9KR) shows compromised flg22-induced ubiquitination. FLAG–UBQ and HA-tagged BIK1 mutants were expressed in protoplasts followed by treatment with 100 nM flg22 for 30 min. Quantification of the fold change in BIK1 ubiquitination is shown as mean ± s.e.m. overlaid on a dot plot (middle). Different letters indicate a significant difference from the other groups (for example, the rightmost bar is significantly different from those marked a, b, and c but not d or e) (P < 0.05, one-way ANOVA, n = 3). Lysines mutated in the BIK1 mutants are shown in red (bottom). e, RHA3A cannot ubiquitinate BIK1(9KR). The assay was performed as in Fig. 2d. f, BIK1(9KR) exhibits normal in vitro kinase activity. The kinase assay was performed using GST–BIK1 or GST–BIK1(9KR) as the kinase and GST or GST–BAK1 K (kinase domain) as the substrate. All experiments except MS analyses were repeated three times with similar results.
Source Data Full size image BIK1 monoubiquitination in immunity BIK1(9KR), in which monoubiquitination but not phosphorylation of BIK1 is blocked, enabled us to examine the function of BIK1 monoubiquitination without compromised kinase activity. We generated BIK1 9KR transgenic plants driven by the BIK1 native promoter in a bik1 background ( pBIK1::BIK1 9KR -HA/bik1 ) (Extended Data Fig. 8e, f ). Unlike pBIK1::BIK1-HA/bik1 transgenic plants, pBIK1::BIK1 9KR -HA/bik1 transgenic plants exhibited a reduced flg22-triggered ROS burst similar to that of the bik1 mutant (Fig. 4a ). Moreover, pBIK1::BIK1 9KR -HA/bik1 transgenic plants were more susceptible to the bacterial pathogen Pseudomonas syringae pv. tomato ( Pst ) DC3000 hrcC − than were wild-type or pBIK1::BIK1-HA/bik1 transgenic plants (Fig. 4b ). In addition, amiR-RHA3A/B transgenic plants exhibited compromised flg22-triggered production of ROS and enhanced susceptibility to Pst DC3000 (Fig. 4c, d ) and Pst DC3000 hrcC − and to the fungal pathogen Botrytis cinerea (Extended Data Fig. 8g, h ). Similar results were obtained with the rha3a/b mutants (Extended Data Fig. 8i, j ). Together, the data indicate that RHA3A/B-mediated monoubiquitination of BIK1 has a role in regulating ROS production and plant immunity. Fig. 4: RHA3A/B-mediated monoubiquitination of BIK1 contributes to its function in immunity and endocytosis. a , pBIK1::BIK1 9KR -HA / bik1 transgenic plants (lines 1 and 2) cannot complement bik1 for flg22-induced ROS production. One-way ANOVA; wild-type, BIK1/ bik1 : n = 53; bik1 : n = 54; BIK1(9KR)/ bik1 : n = 55. In all panels, data are shown as mean ± s.e.m. overlaid on dot plot; lines beneath P values indicate relevant pairwise comparisons. b , The pBIK1::BIK1 9KR -HA / bik1 transgenic plants show increased bacterial growth of Pst DC3000 hrcC – . Plants were spray-inoculated and bacterial growth was measured at four days post-inoculation (dpi). One-way ANOVA, n = 6. CFU, colony-forming units. c , amiRNA-RHA3A/B plants show reduced flg22-induced ROS production. One-way ANOVA, n = 51. d , amiRNA-RHA3A/B plants show increased bacterial growth of Pst DC3000. Plants were hand-inoculated and bacterial growth was measured at 2 dpi. One-way ANOVA, n = 5. e , f , Flg22-induced endocytosis of BIK1, BIK1(9KR), and FLS2 in N. benthamiana leaf epidermal cells. e , BIK1–TagRFP or BIK1(9KR)–TagRFP was co-expressed with FLS2–YFP followed by treatment with 100 μM flg22 and then imaged at the indicated time points by confocal microscopy. Scale bars, 20 μm. f , Quantification of BIK1–TagRFP (magenta) and FLS2–YFP (green) puncta. One-way ANOVA, additional images and n values shown in Extended Data Fig. 9c . g , BIK1(9KR) does not enable flg22-induced dissociation of BIK1 from FLS2. Top, co-IP was performed using protoplasts expressing FLS2–HA and BIK1–FLAG or BIK1(9KR)–FLAG, followed by treatment with 1 μM flg22 for 15 min. Bottom, the interaction of BIK1 with FLS2 was quantified as intensity from IP: anti-FLAG, IB: anti-HA divided by intensity from IP: anti-FLAG, IB: anti-FLAG. Mean ± s.e.m. fold change (BIK1 no treatment = 1.0; one-way ANOVA, n = 3). All experiments were repeated three times with similar results. Source Data Full size image BIK1 monoubiquitination in endocytosis As detection of flg22 moderately increased BIK1–GFP endosomal puncta (Fig. 1c ), we tested whether monoubiquitination of BIK1 is involved in flg22-triggered BIK1 endocytosis. 
Fewer FM4-64-labelled puncta were observed in plants expressing BIK1(9KR)–GFP than in those expressing BIK1–GFP after 10 or 15 min of treatment with flg22 (Extended Data Fig. 9a, b ). In addition, we compared the flg22-triggered endocytosis of BIK1–TagRFP and BIK1(9KR)–TagRFP when co-expressed with FLS2–YFP in Nicotiana benthamiana . As seen in transgenic plants (Fig. 1c, d ), endosomal puncta of BIK1–TagRFP increased at 10–20 min, whereas FLS2–YFP puncta increased only after 60 min of flg22 treatment (Fig. 4e, f , Extended Data Fig. 9c ). A large portion (about 90%) of flg22-induced BIK1–TagRFP puncta did not colocalize with FLS2–YFP puncta (Extended Data Fig. 9d ), suggesting that BIK1 and FLS2 are not likely to be internalized together. This is consistent with the differing ubiquitination characteristics of BIK1 and FLS2 (monoubiquitination versus polyubiquitination, 10 min versus 1 h). When compared to BIK1, BIK1(9KR)–TagRFP was more abundant in puncta before treatment, but the number of puncta did not increase after flg22 treatment (Fig. 4e, f , Extended Data Fig. 9c ), indicating that internalization of BIK1(9KR)–TagRFP does not respond to activation of PRRs. In addition, colocalization of BIK1(9KR)–TagRFP with YFP-tagged ARA6 (a plant-specific Rab GTPase that resides on late endosomes 20 ) was substantially reduced when compared to that of BIK1–TagRFP (Extended Data Fig. 9e, f ). Notably, flg22-induced endocytosis of FLS2–YFP was absent in the presence of BIK1(9KR)–TagRFP (Fig. 4e, f ). Together, our data support the conclusion that ligand-induced monoubiquitination of BIK1 contributes to its internalization from the plasma membrane. Notably, whereas flg22 treatment induced phosphorylation-dependent dissociation of BIK1 from FLS2 5 , 6 , 21 , this effect was largely absent in the case of BIK1(9KR) (Fig. 4g ), consistent with the finding that BIK1(9KR) shows impaired FLS2 internalization (Fig. 4e, f ). In addition, we observed an increase in the association between BIK1(9KR) and FLS2 without flg22 treatment (Fig. 4g ). Treatment with the ubiquitination inhibitor PYR-41 also blocked flg22-induced dissociation of BIK1 from FLS2 and enhanced BIK1–FLS2 association (Extended Data Fig. 10a ). Our data indicate that ligand-induced monoubiquitination of BIK1 has an important role in dissociation of BIK1 from the plasma membrane-localized PRR complex, endocytosis of BIK1 and activation of immune signalling (Extended Data Fig. 10b ). Discussion The BIK1 family RLCKs are central elements of plant PRR signalling, with many layers of regulation 4 , 22 . The stability of BIK1 is crucial for maintaining immune homeostasis. The plant U-box proteins PUB25 and PUB26 polyubiquitinate BIK1 and regulate its stability in the steady state 23 . This module regulates the homeostasis of non-activated BIK1 without affecting ligand-activated BIK1 23 . We have identified a role of RHA3A/B in monoubiquitinating BIK1 and activating PRR signalling, which is distinct from that of PUB25 and PUB26. The levels of BIK1(9KR) proteins in transgenic plants and protoplasts are similar to those of wild-type BIK1 (Extended Data Fig. 10c, d ), suggesting that monoubiquitination of BIK1 may not regulate its stability. The nature of protein ubiquitination, including monoubiquitination and polyubiquitination, dictates the distinct fates of substrates, such as proteasome-mediated protein degradation, nonproteolytic functions of protein kinase activation, and membrane trafficking 24 . 
Ligand-induced polyubiquitination of FLS2 by PUB12 or PUB13 promotes degradation of FLS2, thereby attenuating immune signalling 15,16, whereas ligand-induced monoubiquitination of BIK1 triggers dissociation of BIK1 from PRR complexes and activates intracellular signalling. Thus, differential ubiquitination and endocytosis of distinct PRR–RLCK complex components are likely to serve as cues to fine-tune plant immune responses. Methods No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Plant materials and growth conditions A. thaliana accession Col-0 (wild type, WT), the mutants fls2, bak1-4 and bik1, transgenic pBIK1::BIK1-HA in the bik1 background, and pFLS2::FLS2-GFP in the Col-0 background have been described previously 7,13. p35S::BIK1-GFP and p35S::BIK1 9KR -GFP in the Col-0 background, pBIK1::BIK1 9KR -HA transgenic plants in the bik1 background, p35S::BIK1-HA and p35S::BIK1 9KR -HA transgenic plants in the Col-0 background, pBIK1::BIK1-HA in the Col-0 background, pBIK1::BIK1-HA/pRHA3A::RHA3A-FLAG double transgenic plants in the Col-0 background and pRHA3A::amiR-RHA3A-pRHA3B::amiR-RHA3B transgenic plants in the Col-0 background were generated in this study (see below). All Arabidopsis plants were grown in soil (Metro Mix 366, Sunshine LP5 or Sunshine LC1, Jolly Gardener C/20 or C/GP) in a growth chamber at 20–23 °C, 50% relative humidity and 75 μE m⁻² s⁻¹ light with a 12-h light/12-h dark photoperiod for four weeks before pathogen infection assays, protoplast isolation, and ROS assays. For confocal microscopy imaging, seeds were sterilized, maintained for 2 days at 4 °C in the dark, and germinated on vertical half-strength Murashige and Skoog (½MS) medium (1% (wt/vol) sucrose) agar plates, pH 5.8, at 22 °C in a 16-h light/8-h dark cycle for 5 days with a light intensity of 75 μE m⁻² s⁻¹. For FM4-64 staining, whole seedlings were incubated for 15 min in 3 ml of ½MS liquid medium containing 2 μM FM4-64 and washed twice by dipping into deionized water before adding the elicitor (flg22, 100 nM). Wild-type tobacco (N. benthamiana) plants were grown under 14 h of light and 10 h of darkness at 25 °C. Statistical analyses Data for quantification analyses are presented as mean ± s.e.m. Statistical analyses were performed using Student's t-test or one-way ANOVA. The number of replicates is given in the figure legends. Plasmid construction and generation of transgenic plants FLS2, BAK1, BIK1, PBL1, PBL10 or BSK1 tagged with HA, FLAG or GFP in a plant gene expression vector pHBT used for protoplast assays, and FLS2 CD, BAK1 CD, BAK1 K, PUB13, BIK1, or BIK1(KM) fused with GST or MBP used for Escherichia coli fusion protein isolation have been described previously 5,7. BIK1 point mutations in a pHBT vector were generated by site-directed mutagenesis with primers listed in Supplementary Table 1 using the pHBT-BIK1-HA construct as the template. BIK1 N5KR was constructed by sequentially mutating K41, K95, K170 and K186 into arginine on BIK1 K31R. BIK1 C4KR was constructed by sequentially mutating K337, K358 and K366 on BIK1 K286R. pHBT-BIK1 N5KR and pHBT-BIK1 C4KR were then digested with XbaI and StuI and ligated together to generate pHBT-BIK1 9KR -HA. BIK1 9KR was sub-cloned into pHBT-FLAG or pHBT-GFP with BamHI and StuI digestion to generate pHBT-BIK1 9KR -FLAG or pHBT-BIK1 9KR -GFP.
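The N5KR, C4KR and 9KR variants described above amount to a defined set of lysine-to-arginine codon substitutions. The toy Python sketch below illustrates that logic on a hypothetical coding sequence; the actual variants were generated by site-directed mutagenesis PCR, not in silico:

```python
# Toy K-to-R substitution on a hypothetical CDS; illustration only.
CODON_TO_AA = {"AAA": "K", "AAG": "K", "AGA": "R", "AGG": "R"}

def lysine_to_arginine(cds, aa_positions):
    """Swap lysine codons at 1-based amino-acid positions for arginine codons
    that differ by a single nucleotide (AAA->AGA, AAG->AGG)."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    for pos in aa_positions:
        assert CODON_TO_AA.get(codons[pos - 1]) == "K", f"position {pos} is not Lys"
        codons[pos - 1] = "AGA" if codons[pos - 1] == "AAA" else "AGG"
    return "".join(codons)

toy_cds = "ATG" + "AAA" + "GGT" + "TCT" + "AAG" + "TAA"  # Lys at positions 2 and 5
print(lysine_to_arginine(toy_cds, [2, 5]))               # ATGAGAGGTTCTAGGTAA
```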
BIK1 9KR was sub-cloned into the binary vector pCB302-pBIK1::BIK1-HA or pCB302-35S::BIK1-HA with BamHI and StuI digestion to generate pCB302-pBIK1::BIK1 9KR -HA , or pCB302-35S::BIK1 9KR -HA. BIK1-GFP or BIK1 9KR -GFP was sub-cloned into pCB302 with BamHI and PstI digestion to generate pCB302-35S::BIK1-GFP and pCB302-35S::BIK1 9KR -GFP . BIK1 K204R or BIK1 9KR was sub-cloned into a modified GST ( pGEX4T-1 , Pharmacia) vector with BamHI and StuI digestion to generate pGST-BIK1 K204R or pGST-BIK1 9KR , respectively. BIK1-HA or BIK1 9KR -HA was further sub-cloned into the pGST vector as following: digestion with PstI, blunting end by T4 DNA polymerase, digestion with BamHI and ligation into a BamHI/StuI-digested pGST vector to generate pGST-BIK1-HA and pGST-BIK1 9KR -HA . The RHA3A gene ( AT2G17450 ) was cloned by PCR amplification from Col-0 complementary (c)DNAs with primers containing BamHI at the 5′ end and StuI at the 3′ end, followed by BamHI and StuI digestion and ligation into the pHBT vector with an HA or FLAG tag at the C terminus. The RHA3B gene ( AT4G35480 ) was cloned similarly to RHA3A using BamHI and SmaI-containing primers. pHBT-RHA3A I104A was generated by site-directed mutagenesis with primers listed in Supplementary Table 1 . RHA3A CD (amino acids 50–186) and RHA3A CD/I104A were cloned by PCR amplification from RHA3A or RHA3A I104A , respectively, using BamHI- and StuI-containing primers. RHA3A CD and RHA3A CD/I104A were sub-cloned into pGST or a modified pMBP (pMAL-c2, NEB) vector with BamHI and StuI digestion for isolation of E. coli fusion proteins. The promoter of RHA3A or RHA3B was PCR-amplified from genomic DNAs of Col-0 with primers containing SacI and BamHI, and ligated into pHBT . The fragment of pRHA3A::RHA3A-FLAG was digested by SacI and EcoRI, and ligated into pCAMBIA2300 . AmiRNA constructs were generated as previously described 18 . In brief, amiRNA candidates were designed according to instructions at . Three candidates were chosen for each gene with RHA3A for amiRNA480 : TTTTGTCAATACACTCCACGG; amiRNA211 : TCAACGCAGATAAGAGCGCTA; amiRNA109 : TCAAGTAATCTTGACGGTCGT, and RHA3B for amiRNA444 : TTATGCATATTGCACACTCCG; amiRNA113 : TAATCTAGAGGAGCGAGTCAG; amiRNA214 : TCTACGCATACGAGAGCGCAT. Primers for cloning amiRNAs were generated according to instructions at . The cognate fragments were cloned into the pHBT-amiRNA-ICE1 vector 18 . pCB302-pRHA3A::amiRNA-RHA3A-pRHA3B::amiRNA-RHA3B was constructed as follows: the RHA3A promoter was PCR amplified from pRHA3A::RHA3A-FLAG , digested with SacI and BamHI and ligated with pHBT-amiR-RHA3A to generate pHBT-pRHA3A::amiR-RHA3A . The pRHA3A::amiR-RHA3A fragment was further released by SacI and PstI digestion and ligated into pCB302 vector to generate pCB302-pRHA3A::amiRNA-RHA3A . pHBT-pRHA3B::amiR-RHA3B was constructed similarly followed by PCR amplification using a primer containing SacI sites at both the 5′ and 3′ends, subsequent digestion with SacI and ligation into the pCB302-pRHA3A::amiRNA-RHA3A vector. Tandem pRHA3A/B-amiRNA-RHA3A/B in the same direction was confirmed by digestion and selected for further experiments. The rha3a/b mutant was generated by the CRISPR–Cas9 system following the published protocol 25 . In brief, primers containing guide RNA (gRNA) sequences of RHA3A and RHA3B were used in PCR to insert both gRNA sequences into the pDT1T2 vector. The pDT1T2 vector containing both gRNAs was further PCR amplified, digested with BsaI and ligated into a binary vector pHEE401E . 
Agrobacterium tumefaciens-mediated floral dip was used to transform the pHEE401E vector into Col-0 plants. Genomic DNAs from hygromycin (25 μg/ml)-positive plants were extracted, PCR amplified with gene-specific primers and sequenced by Sanger sequencing. The monomer ubiquitin of the Arabidopsis ubiquitin gene 10 (UBQ10, At4g05320) carrying lysine-to-arginine mutations at all seven lysine residues (UBQ K0: 5′-ATGCAGATCTTTGTTAGGACTCTCACCGGAAGGACTATCACCCTCGAGGTGGAAAGCTCTGACACCATCGACAACGTTAGGGCCAGGATCCAGGATAGGGAAGGTATTCCTCCGGATCAGCAGAGGCTTATCTTCGCCGGAAGGCAGTTGGAGGATGGCCGCACGTTGGCGGATTACAATATCCAGAGGGAATCCACCCTCCACTTGGTCCTCAGGCTCCGTGGTGGTTAA-3′) was synthesized and cloned into a pUC57 vector by GenScript USA Incorporation. UBQ K0 was then amplified by PCR with primers listed in Supplementary Table 1 and further sub-cloned into a modified pHBT vector with BamHI and PstI digestion to generate pHBT-FLAG-UBQ K0. Plasmids used for transient expression in N. benthamiana were constructed as reported previously 26. In brief, FLS2, BIK1, and BIK1 9KR were PCR amplified and recombined into pDONR207-YFP, pDONR207-TagRFP, and pDONR207-GFP vectors by In-Fusion HD Cloning (TaKaRa Bio). The pDONR207 vectors were subsequently transferred to a destination vector pmAEV (derived from the binary vector pCAMBIA with a 35S promoter) using the Gateway LR reaction (Thermo Fisher Scientific). DNA fragments cloned into the final constructs were confirmed via Sanger sequencing. A. tumefaciens-mediated floral dip was used to transform the above binary vectors into bik1 or Col-0 plants. The transgenic plants were selected using glufosinate-ammonium (Basta, 50 μg/ml) for the pCB302 vector or kanamycin (50 μg/ml) for the pCAMBIA2300 vector. Multiple transgenic lines were analysed by immunoblotting for protein expression. Two lines with 3:1 segregation ratios for antibiotic resistance in the T3 generation were selected to obtain homozygous seeds for further studies. amiR-RHA3A/B transgenic plants that were resistant to Basta in the T2 generation were used for assays. Yeast two-hybrid screen The cDNA library constructed in a modified pGADT7 vector (Clontech) has been previously described 15. BIK1(G2A) from pHBT-BIK1 G2A -HA was sub-cloned into a modified pGBKT7 vector with BamHI and StuI digestion. pGBK-BIK1 G2A was transformed into the yeast AH109 strain. The resulting yeast transformants were then transformed with the cDNA library and screened on synthetic defined (SD) medium without Trp, Leu, His, Ade (SD-T-L-H-A) and SD-T-L-H containing 1 mM 3-amino-1,2,4-triazole (3-AT). The confirmed yeast colonies were subjected to plasmid isolation and sequencing. Pathogen infection assays Pst DC3000 was cultured overnight at 28 °C in King's B medium supplemented with rifamycin (50 μg/ml). Bacteria were collected by centrifugation at 3,000 g, washed and re-suspended to a density of 10⁶ colony-forming units (cfu)/ml with 10 mM MgCl2. Leaves from four-week-old plants were hand-inoculated with the bacterial suspension using a needleless syringe. To measure in planta bacterial growth, five to six sets of two leaf discs, 6 mm in diameter, were punched and ground in 100 μl ddH2O. Serial dilutions were plated on TSA plates (1% tryptone, 1% sucrose, 0.1% glutamic acid and 1.8% agar) containing 25 μg/ml rifamycin. Plates were incubated at 28 °C and bacterial cfu were counted 2 days after incubation.
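The conversion from colony counts back to in planta bacterial titres is simple arithmetic over the dilution series. The following sketch uses hypothetical colony counts, dilution and plating volume; only the disc size (two 6-mm discs) and grinding volume (100 μl) follow the protocol above:

```python
# Colony counts -> cfu per cm^2 of leaf tissue. Counts, dilution and plated
# volume are hypothetical; disc diameter and grinding volume follow the protocol.
import math

def cfu_per_cm2(colonies, dilution, plated_ul, ground_ul=100.0, disc_d_mm=6.0, discs=2):
    cfu_sample = colonies * dilution * (ground_ul / plated_ul)  # cfu in the extract
    leaf_area_cm2 = discs * math.pi * (disc_d_mm / 20.0) ** 2   # disc radius in cm
    return cfu_sample / leaf_area_cm2

# e.g. 38 colonies from 10 ul of a 1,000-fold dilution:
print(f"log10 cfu/cm2 = {math.log10(cfu_per_cm2(38, 1e3, 10.0)):.2f}")
```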
For spray inoculation, Pst DC3000 or Pst DC3000 hrcC− bacteria were collected and re-suspended to 5 × 10⁸ cfu/ml with 10 mM MgCl2 and Silwet L-77 (0.02%) and sprayed onto the leaf surface. Plants were covered with a transparent plastic dome to maintain humidity after spraying. After incubation, the third pair of true leaves was detached, soaked in 70% ethanol for 30 s and rinsed in water, and bacterial growth was measured as described above. Protoplast transient expression and co-IP assays Protoplast isolation and the transient expression assay have been described previously 27. For protoplast-based co-IP assays, protoplasts were transfected with a pair of constructs (the empty vectors as controls, 100 μg DNA for 500 μl protoplasts at a density of 2 × 10⁵/ml for each sample) and incubated at room temperature for 6–10 h. After treatment with flg22 at the indicated concentrations and time points, protoplasts were collected by centrifugation and lysed in 300 μl co-IP buffer (150 mM NaCl, 50 mM Tris-HCl, pH 7.5, 5 mM EDTA, 0.5% Triton and 1 × protease inhibitor cocktail; before use, 2.5 μl 0.4 M DTT, 2 μl 1 M NaF and 2 μl 1 M Na3VO4 were added per 1 ml of IP buffer) by vortexing. After centrifugation at 10,000 g for 10 min at 4 °C, 30 μl supernatant was collected for input controls and 7 μl anti-FLAG–agarose beads were added to the remaining supernatant and incubated at 4 °C for 1.5 h. Beads were collected and washed three times with washing buffer (150 mM NaCl, 50 mM Tris-HCl, pH 7.5, 5 mM EDTA, 0.5% Triton) and once with 50 mM Tris-HCl, pH 7.5. Immunoprecipitates were analysed by immunoblotting with the indicated antibodies. The amiRNA candidate screens were performed as previously described 18. In vivo ubiquitination assay FLAG-tagged UBQ (FLAG–UBQ) or a vector control (40 μg DNA) was co-transfected with the target gene with an HA tag (40 μg DNA) into 400 μl protoplasts at a density of 2 × 10⁵/ml for each sample, and protoplasts were incubated at room temperature for 6–10 h. After treatment with 100 nM flg22 at the indicated time points, protoplasts were collected for the co-IP assay in co-IP buffer containing 1% Triton X-100. PYR-41 (Sigma, cat # N2915) was added at the indicated concentrations and time points (see figure legends). Recombinant protein isolation and in vitro kinase assays Fusion proteins were produced from E. coli BL21 at 16 °C using LB medium with 0.25 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). GST fusion proteins were purified with Pierce glutathione agarose (Thermo Scientific), and MBP fusion proteins were purified using amylose resin (New England Biolabs), according to the manufacturers' standard protocols. The in vitro kinase assays were performed with 0.5 μg kinase proteins and 5 μg substrate proteins in 30 μl kinase reaction buffer (10 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 2.5 mM EDTA, 50 mM NaCl, 0.5 mM DTT, 50 μM ATP and 1 μCi [γ-³²P]ATP). After gentle shaking at room temperature for 2 h, samples were denatured with 4 × SDS loading buffer and separated on a 10% SDS–PAGE gel. Phosphorylation was analysed by autoradiography. In vitro ubiquitination assay Ubiquitination assays were performed as previously described with modifications 28. Reactions containing 1 μg substrate, 1 μg HIS6–E1 (AtUBA1), 1 μg HIS6–E2 (AtUBC8), 1 μg GST–E3 and 1 μg ubiquitin (Boston Biochem, cat # U-100AT-05M) in ubiquitination reaction buffer (20 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 0.5 mM DTT, 2 mM ATP) were incubated at 30 °C for 3 h.
The ubiquitinated proteins were detected by immunoblotting with the indicated antibodies. The rabbit monoclonal anti-RHA3A antibody was generated according to the company's standard protocol against the peptide AGGDSPSPNKGLKKC (GenScript). In vitro deubiquitination assay Mouse USP2-cc was cloned by PCR amplification from mouse cDNAs with primers containing BamHI at the 5′ end and SmaI at the 3′ end, followed by BamHI and SmaI digestion and ligation into the pGST vector to construct pGST-Usp2-cc. GST–USP2-cc fusion proteins were produced in E. coli BL21 and purified with Pierce glutathione agarose (Thermo Scientific) according to the manufacturer's standard protocol. Deubiquitination (DUB) assays were performed as previously described with modifications 29. In brief, an in vitro ubiquitination assay was performed overnight at 28 °C as described above. The reaction was aliquoted into individual tubes containing USP2-cc, or heat-inactivated (HI; 95 °C for 5 min) USP2-cc as a control, in DUB reaction buffer (50 mM Tris-HCl, pH 7.5, 50 mM NaCl and 5 mM DTT) and incubated at 28 °C for 5 h. Samples were then denatured and analysed by immunoblotting. For the in vitro DUB assay with flg22-induced ubiquitinated BIK1, BIK1–HA and FLAG–UBQ were expressed in Arabidopsis protoplasts treated with 100 nM flg22 for 30 min. The ubiquitinated BIK1–HA proteins were immunoprecipitated as described above. After washing with 50 mM Tris-HCl, agarose beads were washed once with DUB dilution buffer (25 mM Tris-HCl, pH 7.5, 150 mM NaCl and 10 mM DTT) and mixed with GST–USP2-cc in DUB reaction buffer. After overnight incubation, beads were denatured in SDS buffer and analysed by immunoblotting. MAPK assay Five 11-day-old Arabidopsis seedlings per treatment, grown on vertical plates with ½MS medium, were transferred into water overnight before flg22 treatment. Seedlings were collected, ground and lysed in 100 μl co-IP buffer. Protein samples with 1 × SDS buffer were separated on a 10% SDS–PAGE gel to detect pMPK3, pMPK6 and pMPK4 by immunoblotting with anti-pERK1/2 antibody (Cell Signaling, cat # 9101). Detection of ROS production The third or fourth pair of true leaves from 4- to 5-week-old soil-grown Arabidopsis plants were punched into leaf discs (diameter 5 mm). Leaf discs were incubated in 100 μl ddH2O with gentle shaking overnight. The water was replaced with 100 μl reaction solution containing 50 μM luminol and 10 μg/ml horseradish peroxidase (Sigma-Aldrich), supplemented with or without 100 nM flg22. Luminescence was measured with a luminometer (GloMax-Multi Detection System, Promega) at 1-min intervals for 40–60 min. ROS production values are presented as means of relative light units (RLU). In vitro GST pull-down assay GST or GST–BIK1 agarose beads were obtained after elution and washed three times with 1 × PBS (137 mM NaCl, 2.7 mM KCl, 15 mM Na2HPO4, 4.4 mM KH2PO4). HA-tagged MBP-RHA3A CD or MBP proteins (2 μg) were pre-incubated with 10 μl prewashed glutathione agarose beads in 300 μl pull-down incubation buffer (20 mM Tris-HCl, pH 7.5, 100 mM NaCl, 0.1 mM EDTA, and 0.2% Triton X-100) for 30 min at 4 °C. Five microlitres of GST or GST–BIK1 agarose beads were pre-incubated with 20 μg bovine serum albumin (BSA, Sigma, cat # A7906) in 300 μl incubation buffer for 30 min at 4 °C with gentle shaking. The supernatant containing MBP-RHA3A CD or MBP was incubated with the pre-incubated GST or GST–BIK1 agarose beads for 1 h at 4 °C with gentle shaking.
The agarose beads were precipitated and washed three times in pull-down wash buffer (20 mM Tris-HCl, pH 7.5, 300 mM NaCl, 0.1 mM EDTA, and 0.5% Triton X-100). The pulled-down proteins were analysed by immunoblotting with an anti-MBP antibody (Biolegend, cat # 906901). Mass spectrometry analysis of ubiquitination sites In vitro ubiquitination reactions with GST–RHA3A CD and GST–BIK1 or GST–BIK1(K204R) were performed as mentioned above with overnight incubation. Reactions were loaded on an SDS–PAGE gel (7.5%) and ran for a relatively short time until the ubiquitinated bands could be separated from the original GST–BIK1 (GST–BIK1 band ran less than 0.5 cm from the separating gel). Ubiquitinated bands were sliced and trypsin-digested before LC–MS/MS analysis on an LTQ-Orbitrap hybrid mass spectrometer (Thermo Fisher) as previously described 30 . The MS/MS spectra were analysed with SEQUEST software, and images were exported from SEQUEST. In vivo BIK1 ubiquitination sites were identified as follows: 20 ml of wild-type Arabidopsis protoplasts at a concentration of 2 × 10 5 per ml were transfected with BIK1–GFP and FLAG–UBQ and the protoplasts were treated with 200 nM flg22 for 30 min after 7 h of incubation. GFP-trap-Agarose beads (Chromotek, cat # gta-20) were incubated with cell lysates at a ratio of 10 μl beads to 4 × 10 5 cells for 1 h at 4 °C and beads were pooled from 10 tubes, washed using IP buffer three times, and denatured in SDS buffer. Samples were separated by 10% SDS–PAGE and stained with GelCode Blue Stain Reagent (Thermo Fisher cat # 24590). Ubiquitinated bands were sliced and analysed as described above. Confocal microscopy and image analysis For laser scanning confocal microscopy, images were taken using a Leica SP8X inverted confocal microscope equipped with a HC PL APO CS2 40×/1.10 and 63×/1.20 water-corrected objective. The excitation wavelength was 488 nm for both GFP and FM4-64 (Thermo Fisher T13320), 514 nm for YFP and 555 nm for TagRFP using the white light laser. Emission was detected at 500–530 nm for GFP, 570–670 nm for FM4-64, 519–549 nm for YFP, and 569–635 nm for TagRFP by using Leica hybrid detectors. Autofluorescence was removed by adjusting the time gate window between 0.8 and 6 ns. Intensities were manipulated using ImageJ software. For SDCM, image series were captured using a custom Olympus IX-71 inverted microscope equipped with a Yokogawa CSU-X1 5,000 rpm spinning disc unit and 60× silicon oil objective (Olympus UPlanSApo 60×/1.30 Sil) as previously described 11 . For the custom SDCM system, GFP and FM4-64 were excited with a 488-nm diode laser and fluorescence was collected through a series of Semrock Brightline 488-nm single-edge dichroic beamsplitter and bandpass filters: 500–550 nm for GFP and 590–625 nm for FM4-64. Camera exposure time was set to 150 ms. For each image series, 67 consecutive images at a z -step interval of 0.3 μm (20 μm total depth) were captured using Andor iQ2 software (Belfast, UK). Images captured by custom SDCM were processed with the Fiji distribution of ImageJ 1.51 ( ) software, and BIK1–GFP and FLS2–GFP endosomal puncta were quantified as the number of puncta per 1,000 μm 2 as previously described 11 , 31 , with the exception that puncta were detected within a size distribution of 0.1–2.5 μm 2 . For colocalization of BIK1–GFP with FM4-64 by custom SDCM, cotyledons were stained with 2.5 μM FM4-64 for 10 min, washed twice, and imaged after a 5-min chase. 
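The puncta-density readout described above (puncta per 1,000 μm² within a 0.1–2.5 μm² size window) can be reproduced in outline with generic image-analysis code. The sketch below uses a synthetic image, an arbitrary threshold and an assumed pixel size; the published quantification was done in Fiji/ImageJ, not with this script:

```python
# Count bright puncta per 1,000 um^2 within the 0.1-2.5 um^2 size window.
# Image, threshold and pixel size are hypothetical stand-ins.
import numpy as np
from scipy import ndimage

def puncta_density(img, threshold, px_um=0.1):
    labels, _ = ndimage.label(img > threshold)           # connected bright spots
    areas_um2 = np.bincount(labels.ravel())[1:] * px_um**2
    n_puncta = int(np.sum((areas_um2 >= 0.1) & (areas_um2 <= 2.5)))
    field_um2 = img.size * px_um**2
    return 1000.0 * n_puncta / field_um2                 # puncta per 1,000 um^2

rng = np.random.default_rng(0)
fake_img = rng.poisson(5, size=(512, 512)).astype(float)
fake_img[100:104, 200:204] += 50.0                       # one synthetic punctum
print(puncta_density(fake_img, threshold=30.0))
```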
For quantification of flg22-induced puncta containing BIK1–GFP or BIK1(9KR)-GFP over time, the maximum number of FM4-64 labelled spots per image area was set to 100%, and the percentage of GFP-colocalizing spots per time interval relative to the maximum was calculated; 20–25 images per time interval, captured from 5 individual plants per genotype were used for quantification. For transient expression in N. benthamiana, Agrobacterium strain C58 carrying the constructs of interest was co-infiltrated in the abaxial side of tobacco leaves as described previously 32 . Between 48 and 72 h after infiltration, multiple infiltrated leaves were treated with 100 μM flg22 and imaged at the indicated time points. The number of puncta per 1,000 μm 2 was quantified as previously described 11 , 31 . The percentage colocalization of BIK1 and FLS2 was calculated by dividing the number of BIK1–FLS2 colocalizing puncta by the total number of BIK1 puncta. The percentage colocalization of BIK1 and ARA6 was calculated by dividing the number of BIK1–ARA6 or BIK1(9KR)–ARA6 colocalizing puncta by the total number of ARA6 puncta. qRT–PCR analysis Total RNA was isolated from the leaves of four-week-old plants with TRIzol reagent (Invitrogen). One microgram of total RNA was treated with RNase-free DNase I (New England Biolabs) followed by cDNA synthesis with M-MuLV reverse transcriptase (New England Biolabs) and oligo(dT) primer. qRT–PCR analysis was performed using iTaq SYBR green Supermix (Bio-Rad) with primers listed in Supplementary Table 1 in a Bio-Rad CFX384 Real-Time PCR System. The expression of RHA3A and RHA3B was normalized to the expression of ACTIN2 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The data supporting the findings of this study are available within the paper and its Supplementary Information files . Source Data (gels and graphs) for Figs. 1 – 4 and Extended Data Figs. 1 – 10 are provided with the paper. | A collaboration led by Texas A&M AgriLife researchers has identified an early immune response step that could have broad-ranging implications for crop, animal and human health. The work could lead to positive impacts in both agriculture and medicine by uncovering new ways to improve immune responses. For example, the work creates new ideas for treating allergies and immune deficiencies. "We discovered a fine-tuned mechanism for how the host recognizes microbial components and quickly activates the immune response," said Libo Shan, Ph.D., the study's corresponding author and director of the Institute for Plant Genomics and Biotechnology, Texas A&M AgriLife Research. "It's a phenomenon that is conserved in plants, humans and animals." The study results were published in the scientific journal Nature on May 14. Coauthors included Ping He, Ph.D., professor in the Texas A&M Department of Biochemistry and Biophysics, and colleagues from Ghent University in Belgium, the University of Missouri-Columbia, Oregon State University and St. Jude Children's Research Hospital in Memphis. Grants from the National Institutes of Health, the National Science Foundation and the Robert A. Welch Foundation supported the research. Two types of immune responses Humans constantly come across disease-causing germs, but we can fight off most of them. In fact, we are born with the ability to defend against a broad range of bacteria, viruses and fungi. 
This part of our immune defense, known as innate immunity, also exists in plants and animals. It kicks in minutes after our cells perceive a microbe. A few days later, another level of defense, the adaptive immune system, also builds up. This level of defense occurs in animals and humans. The innate immune system can be ineffective and unable to fight off diseases, or it can overreact in different ways that are detrimental to good health. Because the building blocks of innate immunity are conserved across species, Shan and her collaborators decided to conduct their study on a small model plant, Arabidopsis, that is easy to grow and manipulate genetically. Creating a new paradigm The team performed cellular, biochemical, genetic and transgenic experiments on Arabidopsis, following clues from their previous work. The results paint a picture of the very first steps of Arabidopsis' immune response to a bacterial infection. To understand that picture, imagine soldiers steadfastly watching for attackers from a castle wall. If invaders attack, the soldiers take them prisoner and send a message to the king. This message is the first response to an imminent invasion. Something similar happens in an Arabidopsis cell, which is like the castle in the anecdote. Specialized proteins at the cell wall 'watch' for evidence of invasion. When they detect a part of a bacterium's swimming mechanism, a flagellum, they grab the flagellum. To send a message to the 'king,' or the cell nucleus, the 'soldiers' use different approaches. One approach, the recent study discovered, is to attach a small protein, ubiquitin, to a messenger protein called BIK1. When the signal is relayed to the cell nucleus, the message is deciphered. Reinforcements are sent to the cell wall and beyond. "This immediate response allows the cell to quickly respond by mobilizing a signaling relay and cellular energy and making metabolic changes," Shan said. Agricultural and human applications "Our study fills a critical gap in the early signal transduction step," Shan said. "So, from both the agricultural perspective and the human health perspective, this discovery holds potential for strategic development." The rapid signal the team discovered might help monitor the immune response in humans, Shan said. "Our study lays the foundation for screening drug targets involved in ubiquitin modification," she added. And, in agriculture, the discovery could help breed plants with stronger resistance to a broad spectrum of infections, Shan said. "This will generate impacts in both agricultural practice and human health, to fine-tune immunity," she said. "We provided fundamental knowledge contributing to general science advancement." | 10.1038/s41586-020-2210-3 |
Nano | Discovery advances ferroelectrics in quest for lower power transistors | "Negative capacitance in a ferroelectric capacitor." Nature Materials (2014) DOI: 10.1038/nmat4148 Journal information: Nature Materials | http://dx.doi.org/10.1038/nmat4148 | https://phys.org/news/2014-12-discovery-advances-ferroelectrics-quest-power.html | Abstract The Boltzmann distribution of electrons poses a fundamental barrier to lowering energy dissipation in conventional electronics, often termed the Boltzmann Tyranny 1,2,3,4,5. Negative capacitance in ferroelectric materials, which stems from the stored energy of a phase transition, could provide a solution, but a direct measurement of negative capacitance has so far been elusive 1,2,3. Here, we report the observation of negative capacitance in a thin, epitaxial ferroelectric film. When a voltage pulse is applied, the voltage across the ferroelectric capacitor is found to decrease with time—in exactly the opposite direction to that in which the voltage across a regular capacitor should change. Analysis of this 'inductance'-like behaviour from a capacitor presents an unprecedented insight into the intrinsic energy profile of the ferroelectric material and could pave the way for completely new applications. Main Owing to the energy barrier that forms during the phase transition and separates the two degenerate polarization states, a ferroelectric material could show negative differential capacitance while in non-equilibrium 1,2,3,4,5. The state of negative capacitance is unstable, but just as a series resistance can stabilize the negative differential resistance of an Esaki diode, it is also possible to stabilize a ferroelectric in the negative differential capacitance state by using a series dielectric capacitor 1,2,3. In this configuration, the ferroelectric acts as a 'transformer' that boosts the input voltage. The resulting amplification could lower the voltage needed to operate a transistor below the limit otherwise imposed by the Boltzmann distribution of electrons 1,2,3,4,5. For this reason, the possibility of a transistor that exploits negative differential capacitance has been widely studied in recent years 6,7,8,9,10,11,12,13,14,15. However, despite the fact that negative differential capacitance has been predicted by the standard Landau model going back to the early days of ferroelectricity 16,17,18,19,20, a direct measurement of this effect has never been reported. In this work, we demonstrate a negative differential capacitance in a thin, single-crystalline ferroelectric film, by constructing a simple R–C network and monitoring the voltage dynamics across the ferroelectric capacitor. We start by noting that capacitance is, by definition, a small-signal concept—the capacitance C at a given charge Q_F is related to the potential energy U by the relation $C = [\mathrm{d}^2U/\mathrm{d}Q_F^2]^{-1}$. For this reason we shall henceforth use the term 'negative capacitance' to refer to 'negative differential capacitance'. For a ferroelectric material, as shown in Fig. 1a, the capacitance is negative only in the barrier region around Q_F = 0. Starting from an initial state P, as a voltage is applied across the ferroelectric capacitor, the energy landscape is tilted and the polarization will move to the nearest local minimum. Figure 1b shows this transition for a voltage that is smaller than the coercive voltage V_c.
If the voltage is larger than V_c, one of the minima disappears and Q_F moves to the remaining minimum of the energy landscape (Fig. 1c). Notably, as the polarization state descends in Fig. 1c, it passes through the region where $C = [\mathrm{d}^2U/\mathrm{d}Q_F^2]^{-1} < 0$. Therefore, while switching from one stable polarization to the other, a ferroelectric material passes through a region where the differential capacitance is negative. Figure 1: Energy landscape description of the ferroelectric negative capacitance. a, Energy landscape U of a ferroelectric capacitor in the absence of an applied voltage. The capacitance C is negative only in the barrier region around charge Q_F = 0. b, c, Evolution of the energy landscape on the application of a voltage across the ferroelectric capacitor that is smaller (b) or greater (c) than the coercive voltage V_c. If the voltage is greater than the coercive voltage, the ferroelectric polarization descends through the negative capacitance states. P, Q and R represent different polarization states in the energy landscape. Full size image To experimentally demonstrate the above, we applied voltage pulses across a series combination of a ferroelectric capacitor and a resistor R and observed the time dynamics of the ferroelectric polarization. A 60 nm film of ferroelectric Pb(Zr0.2Ti0.8)O3 (PZT) was grown on a metallic SrRuO3 (60 nm)-buffered SrTiO3 substrate using the pulsed laser deposition technique. Square gold top electrodes with a surface area A = (30 μm)² were patterned on top of the PZT films using standard micro-fabrication techniques. The remnant polarization of the PZT film is measured to be ~0.74 C m⁻² and the coercive voltages are roughly +2.1 V and −0.8 V. A resistance value R = 50 kΩ is used as the series resistor. Figure 2a shows the schematic diagram of the experimental set-up and Fig. 2b shows the equivalent circuit diagram. The capacitor C connected in parallel with the ferroelectric capacitor in Fig. 2b represents the parasitic capacitance contributed by the probe station and the oscilloscope in the experimental set-up, which was measured to be ~60 pF. An a.c. voltage pulse sequence of V_S: −5.4 V → +5.4 V → −5.4 V was applied as input. The total charge in the ferroelectric and parasitic capacitors at a given time t, Q(t), is calculated using $Q(t) = \int_0^t i_R(t)\,\mathrm{d}t$, with i_R being the current flowing through R. The charge across the ferroelectric capacitor, Q_F(t), is calculated using the relation $Q_F(t) = Q(t) - CV_F(t)$, with V_F being the voltage measured across the ferroelectric capacitor. Figure 2c shows the transients corresponding to V_S, V_F, i_R and Q. We note in Fig. 2c that after the −5.4 V → +5.4 V transition of V_S, V_F increases until point A, after which it decreases until point B. We also note in Fig. 2c that, during the same time segment AB, i_R is positive and Q increases. In other words, during the time segment AB, the changes in V_F and Q have opposite signs. As such, dQ/dV_F is negative during AB, indicating that the ferroelectric polarization is passing through the unstable states. A similar signature of negative capacitance is observed after the +5.4 V → −5.4 V transition of V_S during the time segment CD in Fig. 2c. The charge density of the ferroelectric capacitor, or the ferroelectric polarization, P(t) = Q_F(t)/A, is plotted as a function of V_F(t) in Fig.
3a, from which we can observe that the P(t)–V_F(t) curve is hysteretic, and for sections AB and CD the slope of the curve is negative, indicating that the capacitance is negative in these regions. Figure 2: Transient response of a ferroelectric capacitor. a, Schematic diagram of the experimental set-up. b, Equivalent circuit diagram of the experimental set-up. C_F, C and R represent the ferroelectric capacitor, the parasitic capacitor and the external resistor, respectively. V_S, V_F and i_R are the source voltage, the voltage across C_F and the current through R, respectively. c, Transients corresponding to the source voltage V_S, the ferroelectric voltage V_F and the charge Q on the application of an a.c. voltage pulse V_S: −5.4 V → +5.4 V → −5.4 V. R = 50 kΩ. Negative capacitance transients are observed during the time segments AB and CD. The source voltage pulse is shown as the line connecting open circles and transients as the lines connecting green circles. Full size image Figure 3: Experimental measurement of negative capacitance. a, Ferroelectric polarization P(t) as a function of V_F(t) with R = 50 kΩ for V_S: −5.4 V → +5.4 V → −5.4 V. In sections AB and CD, the slope of the P(t)–V_F(t) curve is negative, indicating a negative capacitance in these regions. b, Comparison of the P(t)–V_F(t) curves corresponding to R = 50 kΩ and 300 kΩ for V_S: −5.4 V → +5.4 V → −5.4 V. Full size image We also experimented with a.c. voltage pulses of different amplitudes and two different values of the series resistance. The P(t)–V_F(t) characteristic is found to be qualitatively similar (see Supplementary Section 2 for detailed measurements). There are, however, some interesting differences. For example, Fig. 3b shows a comparison of the P(t)–V_F(t) curves corresponding to R = 50 kΩ and 300 kΩ for V_S: −5.4 V → +5.4 V → −5.4 V. We note that for a smaller value of R the hysteresis loop is wider, which we discuss later. We simulated the experimental circuit shown in Fig. 2b, starting from the Landau–Khalatnikov equation 16, $\rho\,\mathrm{d}Q_F/\mathrm{d}t + \partial U/\partial Q_F = 0$ (1), where the energy density $U = \alpha Q_F^2 + \beta Q_F^4 + \gamma Q_F^6 - Q_F V_F$. Here α, β and γ are the anisotropy constants and ρ is a material-dependent parameter that accounts for dissipative processes during the ferroelectric switching. Equation (1) leads to an expression for the voltage across the ferroelectric capacitor, $V_F = \rho\,\mathrm{d}Q_F/\mathrm{d}t + Q_F/C_F(Q_F)$ (2), where $C_F(Q_F) = (2\alpha Q_F + 4\beta Q_F^3 + 6\gamma Q_F^5)^{-1}$. From equation (2), we note that the equivalent circuit for a ferroelectric capacitor consists of an internal resistor ρ and a nonlinear capacitor C_F(Q_F) connected in series. We shall denote Q_F/C_F(Q_F) as the internal ferroelectric node voltage V_int. Figure 4a shows the corresponding equivalent circuit. The transients in the circuit were simulated by solving equation (2). Figure 4b shows the transients corresponding to V_S, V_F, V_int, i_R and Q on the application of a voltage pulse V_S: −14 V → +14 V → −14 V with R = 50 kΩ and ρ = 50 kΩ. In Fig. 4b, we observe opposite signs of changes in V_F and Q during the time segments AB and CD, as was seen experimentally in Fig. 2c. We also note that the P–V_F curve shown in Fig. 4c is hysteretic, as was observed experimentally in Fig. 3a. To understand the difference between the P–V_F and P–V_int curves we note that V_F = V_int + i_F ρ, with i_F being the current through the ferroelectric branch; the additional resistive voltage drop, i_F ρ, results in the hysteresis in the P–V_F curve.
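The coupled dynamics of equation (2) together with the external resistor and the parasitic capacitor reduce to two ordinary differential equations, which makes the simulated transients easy to reproduce in outline. In the Python sketch below, the circuit values follow the text, but the Landau coefficients are illustrative assumptions derived from an assumed remnant charge and coercive voltage (with γ set to zero), not the fitted parameters of the film:

```python
# Minimal Landau-Khalatnikov circuit model: ferroelectric (rho, C_F) in series
# with R, with the parasitic C in parallel. Landau coefficients are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

R, rho, C = 50e3, 50e3, 60e-12            # series resistor, internal resistor, parasitic C
Q0, Vc = 0.67e-9, 1.5                     # assumed remnant charge and coercive voltage
beta = 3 * np.sqrt(3) * Vc / (8 * Q0**3)  # 4th-order coefficient (gamma = 0 here)
alpha = -2 * beta * Q0**2                 # places the energy minima at +/-Q0
VS = 5.4                                  # step amplitude above Vc, so the film switches

def v_int(qf):
    """Internal node voltage Q_F/C_F(Q_F) = 2*alpha*Q_F + 4*beta*Q_F**3."""
    return 2 * alpha * qf + 4 * beta * qf**3

def rhs(t, y):
    qf, vf = y
    dqf = (vf - v_int(qf)) / rho          # equation (2): V_F = rho*dQ_F/dt + V_int
    i_r = (VS - vf) / R                   # current through the series resistor
    dvf = (i_r - dqf) / C                 # parasitic capacitor takes up the difference
    return [dqf, dvf]

sol = solve_ivp(rhs, (0.0, 100e-6), [-Q0, 0.0], method="LSODA", max_step=1e-7)
qf, vf = sol.y
Q = qf + C * vf                           # total charge, the integral of i_R over time
falling = np.diff(vf) < -1e-3             # steps where V_F drops (the AB-like segment)
print("Q keeps rising while V_F falls:", bool(np.all(np.diff(Q)[falling] > 0)))
```

During the switching transient the solver shows V_F dipping while Q keeps increasing, which is the dQ/dV_F < 0 signature discussed above.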
Nevertheless, it is clear from Fig. 4c that the negative slope of the P–V_int curve in a certain range of P, due to C_F being negative in that range, is reflected by the negative slope in the P–V_F curve in the segments AB and CD. Figure 4: Simulation of the time dynamics of the ferroelectric switching. a, Equivalent circuit diagram of the simulation. C_F, ρ, C and R represent the ferroelectric capacitor, the internal resistor, the parasitic capacitor and the external resistor, respectively. V_S, V_int and V_F are the voltages across the source and the capacitors C_F and C, respectively. i_R, i_F and i_C are the currents through R, C_F and C, respectively. b, Simulated transients corresponding to the source voltage V_S, the ferroelectric voltage V_F and the charge Q on the application of a voltage pulse V_S: −14 V → +14 V → −14 V. c, Ferroelectric polarization P(t) as a function of V_F(t) and V_int(t). d, Comparison of the simulated P(t)–V_F(t) curves for R = 50 kΩ and 200 kΩ on the application of V_S: −14 V → +14 V → −14 V. Full size image We also simulated the transients for the same circuit with R = 200 kΩ for V_S: −14 V → +14 V → −14 V. Figure 4d compares the simulated P–V_F curves for R = 50 kΩ and 200 kΩ. We observe that, for a smaller value of R, the hysteresis loop of the simulated P–V_F curve is wider, as was observed experimentally in Fig. 3b. This is due to the fact that, for a larger R, the current through the ferroelectric is smaller, resulting in a smaller voltage drop across ρ. The value of the internal resistance ρ can be extracted by comparing experimentally measured P–V_F curves for two different values of R for the same voltage pulse: $\rho(P) = (V_{F1}(P) - V_{F2}(P))/(i_{F1}(P) - i_{F2}(P))$. Here V_F(P) and i_F(P) are the voltage across and the current through the ferroelectric material, respectively, and the indices 1 and 2 denote values for the two different values of R. The average value of ρ is found to decrease monotonically from a value of ~15 kΩ with an increasing amplitude of the applied voltage, whereas the average magnitude of the negative capacitance remains reasonably constant within the range 400–500 pF (Supplementary Fig. 17). Interestingly, this value of the negative capacitance is similar to that extracted by stabilizing PZT in a negative capacitance state by an in-series STO capacitor 9 (Supplementary Section 8). If the applied voltage amplitude is smaller than the coercive voltage, such that the ferroelectric lies in one of the potential wells (Fig. 1a), its capacitance is positive and so it should behave just like a simple capacitor. On the other hand, if the applied voltage amplitude is larger than the coercive voltage, the ferroelectric switches and a negative capacitance transient is expected. This is exactly what is observed in our experiments (Supplementary Section 4.3). The fact that in the same circuit both positive and negative capacitance transients can be achieved just by changing the amplitude of the voltage also indicates that any influence of the parasitic components, if present, is minimal. Detailed measurements (Supplementary Section 3) show that the influence of defects is also minimal. Furthermore, the observed effect is robust against material variations. Supplementary Section 9 shows data for a different material stack, where the PZT thickness is increased to 100 nm and the bottom electrode is changed from SRO to La0.7Sr0.3MnO3 (20 nm).
A similar negative capacitance transient is observed. The addition of a series resistance (R) is critically important in revealing the negative capacitance region in the dynamics. An appreciable voltage drop across the series resistance R allows the voltage across the ferroelectric capacitor to be measured without being completely dominated by the source voltage—in the limit when R → 0, the voltmeter would be directly connected across the voltage source. Indeed, most model studies 18,21,22,23 have been done in the latter limit, where the ferroelectric capacitor is directly connected across a voltage source (or through a small resistance). Note that the dynamics in our experiments is intentionally slowed down by adding a large series resistance. The duration of the negative capacitance transient can be probed by varying the value of the series resistance and is found to be less than 20 ns for the given PZT thickness and electrode size (Supplementary Section 7). A negative slope in the polarization–voltage characteristic has been predicted since the early days of ferroelectricity 16,17,18,19,20. An S-like polarization–voltage behaviour in one branch of the hysteresis has been measured in a transistor structure 13. However, a successful measurement of the entire intrinsic hysteresis loop has been performed only indirectly 20. In contrast, our results provide a direct measure of the intrinsic hysteresis and negative capacitance of the material. Given the size of the capacitor used (30 μm × 30 μm), the switching invariably occurs through domain-mediated mechanisms. Importantly, our results show that, even in such domain-mediated switching, a regime of abrupt switching is present that leads to negative capacitance transients. Thus, the double-well picture shown in Fig. 1a, which is typically associated with a single-domain configuration (equation (1)), can still qualitatively predict the experimental outcome. Interestingly, from Fig. 2c, it is clear that the negative capacitance ensues in the initial period of the switching. This indicates that microscopically abrupt switching events dominate the early part of the dynamics. By varying the external stimuli, it is also possible to probe the behaviour of intrinsic parameters such as ρ (Supplementary Section 6) that govern the ferroelectric switching. Before concluding, it is worth noting that the concept of negative capacitance goes beyond the ferroelectric hysteresis and can be applied in general to a two-state system separated by an intrinsic barrier (stored energy) 24,25,26,27,28. The measurement presented here could provide a generic way to probe the intrinsic negative capacitance in all such systems. A robust measurement of the negative capacitance could provide a guideline for stabilization, which could then overcome the Boltzmann Tyranny in field-effect transistors, as mentioned earlier. The inductance-like behaviour observed in this experiment could also lead to many other applications, such as negating capacitances in an antenna, boosting voltages at various parts of a circuit, developing coil-free resonators and oscillators, and so on. | (Phys.org)—An article released today by the journal Nature Materials describes the first direct observation of a long-hypothesized but elusive phenomenon called "negative capacitance."
The work describes a unique reaction of electrical charge to applied voltage in a ferroelectric material that could open the door to a radical reduction in the power consumed by transistors and the devices containing them. Capacitance is the ability of a material to store an electrical charge. Ordinary capacitors—found in virtually all electronic devices—store charge as a voltage is applied to them. The new phenomenon has a paradoxical response: when the applied voltage is increased, the charge goes down. Hence its name, negative capacitance. "This property, if successfully integrated into transistors, could reduce the amount of power they consume by at least an order of magnitude, and perhaps much more," says the paper's lead author Asif Khan. That would lead to longer-lasting cell phone batteries, less energy-consumptive computers of all types, and, perhaps even more importantly, could extend by decades the trend toward faster, smaller processors that has defined the digital revolution since its birth. Without a major breakthrough of this sort, the trend toward miniaturization and increased function is threatened by the physical demands of transistors operating at a nano scale. Even though the tiny switches can be made ever smaller, the amount of power they need to be turned on and off can be reduced only so much. That limit is defined by what is known as the Boltzmann distribution of electrons—often called the Boltzmann Tyranny. Because they must be fed an irreducible amount of electricity, ultra-small transistors that are packed too tightly cannot dissipate the heat they generate to avoid self-immolation. In another decade or so, engineers will exhaust options for packing more computing power into ever tinier spaces, a consequence viewed with dread by device manufacturers, sensor developers, and a public addicted to ever smaller and more powerful devices. The new research, conducted at UC Berkeley under the leadership of CITRIS researcher and associate professor of electrical engineering and computer sciences Sayeef Salahuddin, provides a possible way to overcome the Boltzmann Tyranny. It relies on the ability of certain materials to store energy intrinsically and then exploit it to amplify the input voltage. This could, in effect, potentially "trick" a transistor into thinking that it has received the minimum amount of voltage necessary to operate. The result: less electricity is needed to turn a transistor on or off, which is the universal operation at the core of all computer processing. The material used to achieve negative capacitance falls in a class of crystalline materials called ferroelectrics, which was first described in the 1940s. These materials have long been researched for memory applications and commercial storage technologies. Ferroelectrics are also popular materials for frequency control circuits and many microelectromechanical systems (MEMS) applications. However, the possibility of using these materials for energy efficient transistors was first proposed by Salahuddin in 2008, right before he joined Berkeley as an assistant professor. Over the past six years, Khan—one of Salahuddin's first graduate students at Berkeley—has used pulse lasers to grow many kinds of ferroelectric materials and has devised and revised ingenious ways to test for their negative capacitance. 
In addition to transforming the way transistors work, negative capacitance could also potentially be used to develop high-density memory storage devices, supercapacitors, coil-free oscillators and resonators, and devices for harvesting energy from the environment. Exploiting the negative capacitance of ferroelectrics is one in a list of strategies for reducing the energy cost of storing a single bit of information, says UC Berkeley professor of materials science, engineering, and physics Ramamoorthy Ramesh, another of the paper's authors. Ramesh's decades of seminal work on ferroelectric materials and device structures for manipulating them underlie the group's findings. "We have just launched a program called the attojoule-per-bit program. It is an effort to reduce the total energy consumed for manipulating a bit to one attojoule (10⁻¹⁸)," says Ramesh. "To achieve that kind of per-bit energy consumption, we need to take advantage of all possible pathways. The negative capacitance of ferroelectrics is going to be a very important one," he says. This work was enabled by access to CITRIS's Marvell Nanofabrication Laboratory, a research facility on the UC Berkeley campus that specifically encourages the exploration of new materials and processes. One of the most advanced academic nanofabrication labs of its type in the world, the NanoLab is the birthplace of other game-changing technologies, such as the three-dimensional FinFET transistor that has led the way to scaling far beyond the limits of ordinary transistors. "Today," says professor Ming Wu, Marvell NanoLab Faculty Director, "every single transistor built for next-generation microprocessors or computers is FinFET." "CITRIS's Marvell NanoLab has state-of-the-art equipment for making semiconductor devices and integrated circuits," says Wu. "But we take these tools and capabilities and apply them to materials that are so new that industry fabrication labs would not touch them. New materials like these negative capacitance ferroelectrics are not only welcome here, they are actively encouraged." "The next step," says Salahuddin, "is to try to make actual transistors such that they can exploit the new phenomenon. We need to make sure they are compatible with silicon processing, that they are manufacturable, and that the measurement techniques we've now proven in principle are practical and scalable." | 10.1038/nmat4148 |
Physics | Researchers demonstrate the world's first white lasers | "A monolithic white laser." Nature Nanotechnology (2015) DOI: 10.1038/nnano.2015.149 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/nnano.2015.149 | https://phys.org/news/2015-07-world-white-lasers.html | Abstract Monolithic semiconductor lasers capable of emitting over the full visible-colour spectrum have a wide range of important applications, such as solid-state lighting, full-colour displays, visible colour communications and multi-colour fluorescence sensing. The ultimate form of such a light source would be a monolithic white laser. However, realizing such a device has been challenging because of intrinsic difficulties in achieving epitaxial growth of the mismatched materials required for different colour emission. Here, we demonstrate a monolithic multi-segment semiconductor nanosheet based on a quaternary alloy of ZnCdSSe that simultaneously lases in the red, green and blue. This is made possible by a novel nanomaterial growth strategy that enables separate control of the composition, morphology and therefore bandgaps of the segments. Our nanolaser can be dynamically tuned to emit over the full visible-colour range, covering 70% more perceptible colours than the most commonly used illuminants. Main Multi-colour or multi-wavelength lasers with a wavelength span beyond the capability of a single laser material have been a subject of great interest in recent years 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , with the realization of white lasers as the ultimate goal. In fact, lasers that span the full visible spectrum, particularly the red, green and blue (RGB) colours, are particularly useful for laser lighting 10 , 11 , full-colour laser imaging and display 12 , 13 , biological and chemical sensing 14 , 15 , as well as on-chip wavelength-division multiplexing 16 , 17 . As an illuminant, lasers offer higher energy conversion efficiencies and potentially higher output powers than white light-emitting diodes (LEDs) and other traditional illuminants. It has recently been demonstrated that illumination with four monochromatic lasers is visually equivalent to a continuous-spectrum white reference illuminant as seen by the human eye 10 , 11 . Furthermore, the highly monochromatic component colours allow for a wider achievable colour gamut (more than 90% of all colours perceptible to human eyes), a higher contrast ratio and more vivid colours than traditional display systems based on broadband light sources ( Supplementary Section 12 ). Despite the great efforts made to achieve multi-colour lasing using a variety of approaches, a single monolithic semiconductor laser capable of lasing with all three elementary colours has not yet been realized due to several significant challenges. First, most of the previous approaches use non-semiconductor materials such as nonlinear optical crystals 3 , rare-earth doped materials 4 , dye-doped polymers 5 or liquids 6 , or microfibres 7 . Such set-ups are bulky, inefficient and incompatible with electrical injection, an important ultimate requirement for many applications. Second, semiconductor-based approaches 7 , 9 combine several discrete devices in a heterogeneous manner, thus increasing the volume, complexity and cost of the overall system. Producing emission covering all visible wavelengths in a single structure requires the growth of potentially very dissimilar semiconductors into a monolithic structure with high crystal quality. 
This has been a goal pursued by the crystal growth community for decades and remains a challenging one, especially using conventional planar epitaxy techniques, because of the large lattice mismatch involved. As an alternative to standard planar epitaxial structures, developments in nanotechnology over the last two decades have demonstrated the use of quantum dots 18 and nanowires 19 , 20 , 21 as means of producing emission in a wide spectral range, but serious issues remain. For quantum dots made with solution-based techniques 2 , control of their spatial distribution to avoid the absorption of short wavelength emission by narrow-gap dots remains difficult. More importantly, electrical injection remains a fundamental challenge for lasers, despite successful demonstrations of full-colour LEDs 22 , 23 . Beyond material growth, it is also critically important to realize a growth-compatible cavity structure in which the lasing of all three elementary colours can be supported simultaneously, while minimizing absorption of the short-wavelength emission by narrow-gap materials. In our efforts to achieve such an ultimate goal, we have already demonstrated two-colour lasing 24 , 25 from a single two-segment CdSSe nanosheet 24 , 26 and from a nanowire with a looped end 25 . We determined that two-colour lasing from a monolithic structure could best be achieved using a side-by-side cavity geometry ( Supplementary Sections 1–4 ) rather than a longitudinal heterostructure, due to the absorption of short-wavelength emission by narrow-gap material. However, adding a third segment capable of blue lasing to ultimately enable white lasing was challenging. ZnS is the most compatible material and an ideal candidate to add into the ternary CdSSe alloy. It is known to alloy in various combinations with Cd, S and Se, and it would extend the bandgap to allow blue emission. Unfortunately, due to the low vapour pressure and low supersaturation of ZnS 27 , such ZnS-dominant alloys typically grow into nanowires or nanoribbons with very high length–width aspect ratios 28 , 29 , 30 . This is why, to date, ultraviolet and blue lasing in this material system has only been demonstrated in nanowires 31 and nanoribbons 32 with high aspect ratios (>20). Such structures are intrinsically incompatible with the low-aspect-ratio nanosheet morphology of CdS and CdSe required to achieve simultaneous green and red lasing ( Supplementary Section 4 ). In this Article we report a monolithic semiconductor white laser. To overcome the fundamental challenges to obtain the required materials and structures for white lasing, we have made systematic efforts to understand and control the interplay of various growth mechanisms, including the vapour–liquid–solid (VLS) and vapour–solid (VS) mechanisms, and a novel gas-phase dual-ion exchange process. This has led to the successful growth of multi-segment heterostructure nanosheets of ZnCdSSe alloys with the appropriate compositions, cavity geometries and aspect ratios to emit red, green and blue light simultaneously. These multi-segment structures were achieved by using a highly coordinated, dynamical positioning of the substrates and switching of precursors during growth. Rather than attempt to directly grow a ZnS-rich segment next to the CdS- and CdSe-rich segments, we optimized the growth sequence and conditions such that CdSe nanosheet growth was followed by a dual-ion exchange reaction, a mechanism that has not been reported previously, to finally achieve a ZnS-rich nanosheet structure. 
By independently controlling the optical pumping power to each segment we demonstrate full-colour tunable lasing over the entire triangular colour gamut, and white colour lasing in particular. The wavelength spans 191 nm and is the largest ever reported for a monolithic structure. Growth of monolithic multi-segment heterostructure Our multi-segment heterostructure was grown by chemical vapour deposition (CVD) (see Methods ). Temperature-dependent composition control was used in the growth of the alloy semiconductors 21 , 32 , 33 , 34 . The essence of this technique is to manipulate the position of the substrate along the axial temperature gradient ( Supplementary Fig. 6 ) in the reactor to optimize the substrate temperature for the desired alloy composition. Direct growth of the desired morphologies (such as segmented nanosheets with low aspect ratios) with the desired compositions is not possible, especially for wide-gap materials. Our strategy was to decouple the realization of the desired morphology and composition by achieving them separately, indirectly. More specifically, we obtained the desired nanosheet morphologies using materials amenable to growing with the desired morphology. Subsequent growth was performed to obtain the desired alloy composition through cation or anion exchange 35 , 36 , 37 , 38 , without significant modification of the initial morphology. Through this highly non-equilibrium exchange process, the desired material composition–morphology combination could be achieved. Such an indirect route to achieve the desired morphologies and compositions separately can serve as a more general strategy for other materials. The morphological evolution of ZnCdSSe nanostructures as the alloy composition changes is described in Supplementary Fig. 7 . Given their known alloying capabilities 33 , 39 , 40 , CdSe provides an ideal template for morphology transfer to a Zn- and Se-rich ZnCdSSe alloy. It is important to note that all previous studies of substitutional reactions involved the exchange of only one of the cations or anions, but, in our case, conversion of CdSe by a ZnS source necessarily deals with the simultaneous substitution of cations and anions. Such simultaneous anion and cation exchange in the vapour phase during uninterrupted growth is crucial for the successful growth of our materials with the desired morphologies and alloy compositions (and thus bandgaps). Consistent with common understanding 41 , 42 , the catalyst-led VLS mechanism dominates growth at low levels of supersaturation, producing a wire-like morphology. At high supersaturation, the VS mechanism dominates growth, producing two-dimensional belts or ribbons and sheets. The optimal growth conditions are illustrated in Fig. 1 and Supplementary Fig. 6 , where substrate positions R3, R2 and R1 and the corresponding temperatures are best suited for the growth of alloys with red, green and blue emission, respectively. Extensive study was carried out to determine the alloy composition and morphology at a given substrate location and temperature. Such characterization is critically important to successful control of the final morphology and alloy composition. Figure 1: Growth procedure of multi-segment heterostructure nanosheets. a , Schematic of the CVD set-up with a temperature gradient of 66 °C cm –1 in the region used for positioning the substrate (see Supplementary Section 5 for more details). b , Illustration of the growth procedure. 
Samples are grown starting at position R3, then at positions R1, R2 and finally back to R3, with corresponding temperatures labelled T1, T2 and T3. The associated product samples after these steps are labelled P3, P31, P312 and P3123, respectively, where the numbers following ‘P’ represent the growth sequence at various locations. For example, P312 represents a product grown first at R3, followed by growths at R1 and then at R2. c , Photoluminescence images of individual structures after the corresponding growth sequences. Note that the images were taken after the structures were transferred onto a glass substrate from their as-grown ones using a contact printing method. Inset in rightmost panel a multi-segment nanowire structure resulting from the P123 growth sequence. Scale bars, 15 μm. d , Optical images of the samples under ambient lighting. Scale bar, 1 cm. e , Photoluminescence spectra of the samples shown in c , d . Full size image The growth procedure is shown schematically in Fig. 1 . First, Cd- and Se-rich ZnCdSSe nanosheet structures (P3) were grown at a low temperature (at R3 with T3 ≈ 640 °C). The substrate was then moved to the higher-temperature region R1 (T1 ≈ 780 °C) using a connected iron rod driven by an external magnet to further promote diffusion processes 38 . This caused the structures to transform uniformly into a Zn- and Se-rich ZnCdSSe alloy (after P31 growth) with no appreciable change in morphology ( Supplementary Section 14 ). Furthermore, the dual-ion exchange process does not change the wurtzite crystal and resulting crystal quality, as can be seen from the transmission electron microscopy (TEM) and photoluminescence characterizations in Figs 2b,c and 1e , respectively. The red photoluminescence peak from P3 is entirely converted to a blue peak in P31 ( Fig. 1e ). The defect-free photoluminescence features in P31 indicate the high crystal quality of the transformed structures, as imperfect wide-gap semiconductors typically show strong emission below bandgaps. Based on our study, this indirect route is practically the only successful means for growing low-aspect-ratio nanosheets capable of blue emission. No combination of growth parameters could be found to produce such structures directly due to the low vapour pressures of ZnS and ZnSe. All attempts to grow wide-gap materials directly have resulted in small, multi-segment nanowires (see inset in panel 4 of Fig. 1c , indicated by P123), which are unsuitable for multi-colour lasing ( Supplementary Section 1 ). The growth process of P31 was then followed by growth at position R2 (at T2 ≈ 740 °C) and the second segment was synthesized by incorporating more Cd ions to add green light emission (P312 in Fig. 1b ). The final growth step at position R3 added the red-emitting segment, resulting in the final multi-segment heterostructure capable of simultaneous RGB light emission (P3123 in Fig. 1b ). At positions R2, and later R3, the lower substrate temperatures favour the VS mechanism for two-dimensional nanosheet growth, as ion transport is dominant over the exchange processes at lower temperatures 35 . Our unique procedure relies on exploring the interplays among the VLS and VS mechanisms and the simultaneous gas-phase anion and cation exchange processes in the right sequence. By simply increasing the number of steps after the dual-ion exchange process, heterostructures can be grown with more segments, which emit more colours (any colour in the visible spectrum; see Supplementary Fig. 8 in Supplementary Section 6 ). 
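The four-step schedule just described can be captured compactly. The sketch below encodes the positions and approximate temperatures quoted in Fig. 1 and converts temperature targets into substrate displacements under the quoted 66 °C cm⁻¹ gradient; the encoding itself and the assumption of a locally linear gradient are ours.

```python
# Growth schedule from Fig. 1 (positions and approximate substrate
# temperatures quoted in the text); the data structure is our own.
schedule = [
    ("R3", 640, "Cd/Se-rich nanosheet template (P3)"),
    ("R1", 780, "dual-ion exchange to Zn-rich blue segment (P31)"),
    ("R2", 740, "VS growth of the green-emitting segment (P312)"),
    ("R3", 640, "VS growth of the red-emitting segment (P3123)"),
]

GRADIENT_C_PER_CM = 66.0  # axial gradient from Fig. 1a, assumed locally linear

def shift_cm(T_from, T_to, g=GRADIENT_C_PER_CM):
    """Substrate displacement between two target temperatures; positive
    values mean moving downstream toward the cooler end of the tube."""
    return (T_from - T_to) / g

for (p0, t0, _), (p1, t1, note) in zip(schedule, schedule[1:]):
    print(f"{p0} -> {p1}: move {shift_cm(t0, t1):+.1f} cm ({note})")
```

Under the linear-gradient assumption, the R3 to R1 step corresponds to a roughly 2 cm shift toward the hot zone, while the later steps move the substrate back downstream in smaller increments.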
More details about the growth and mechanisms are provided in Supplementary Sections 5–7 . Figure 2: Structural characterization of a multi-segment heterostructure nanosheet. a , Low-resolution TEM image of a multi-segment heterostructure nanosheet. Scale bar, 5 μm. b , HRTEM images of the regions inside the coloured squares in a , with the corresponding colour code. Scale bars, 5 nm. c , Indexed SAD patterns of the regions inside the corresponding coloured circles in a . The HRTEM images have a 30° rotation compared with the SAD patterns due to image rotation at higher magnifications. d , A 60° tilted SEM image of the structure with close-up views of the cross-section. The thickness was measured to be 70 nm after compensating for the tilt angle. e , f , Photoluminescence image ( e ) and scanning TEM image ( f ) of the structure. Scale bars ( d–f ), 5 μm. g , Calculated composition change along the width of the multi-segment heterostructure nanosheet based on the EDS line scan performed along the solid red line in f . h , Correlation of the EDS mapping inside the dashed rectangular area in f , with the atomic percentages gathered from the EDS line scan. Full size image Structural characterization Scanning electron microscopy (SEM) images show that the structures have lengths of up to 60 μm, widths of up to 45 μm and thicknesses in the range of 60–350 nm ( Fig. 2d ). The high-resolution TEM (HRTEM) images and selected area diffraction (SAD) patterns in Fig. 2b and c, and the photoluminescence spectra in Fig. 1d show that each segment of the structure is a high-quality wurtzite monocrystal, with no visible defects or strains detected. Energy dispersive spectroscopy (EDS) analysis of the representative nanosheet in Fig. 2 shows that it is composed of Zn, Cd, S and Se, and that the concentration of those elements changes along the c axis. As one moves from the blue-emitting towards the red-emitting region, the concentrations of Cd and Se increase while those of Zn and S decrease. Based on the EDS line scan, the (010) and (001) plane spacings and emission wavelengths along the structure were extrapolated and correlated with the measured values from the HRTEM images and photoluminescence spectra. These results are all in good agreement with those measured on a different nanosheet ( Supplementary Fig. 10 in Supplementary Section 8 ). An EDS line scan and elemental mapping of the structure ( Fig. 2g,h ) show that the relative abundance of anions does not change as much as that of cations during the ion exchange process, assuming that the Zn- and Se-rich segment had approximately the same composition as the Cd- and Se-rich segment due to the identical growth conditions. This implies that the cation exchange process is faster than the anion exchange process, which would be expected due to the inhibited diffusion of the larger anions 36 (see Supplementary Section 14 for a more detail comparison). More details of the structural characterization can be found in the Methods. Multi-colour lasing An extensive optical study was conducted to demonstrate lasing behaviour, including the polarization dependence of the pump lasers ( Supplementary Section 18 ). The multi-segment heterostructure nanosheets were excited by a 355 nm pulsed laser (pulse width of 9 ns). Details of the pumping and collecting configuration are provided in the Methods and in Supplementary Section 9 . 
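The correlation of composition with emission wavelength described above can be approximated with a linear, Vegard-type interpolation between the binary endpoints. This is our own back-of-envelope sketch: the binary bandgaps are rounded room-temperature literature values, the example compositions are illustrative, and bowing and strain are deliberately ignored, so the outputs are indicative only.

```python
# Vegard-type estimate linking ZnCdSSe composition to emission wavelength.
EG_EV = {"ZnS": 3.7, "ZnSe": 2.7, "CdS": 2.42, "CdSe": 1.74}  # approx. RT values

def bandgap_ev(x_zn, y_s):
    """Linear interpolation for Zn(x)Cd(1-x)S(y)Se(1-y); no bowing terms."""
    return (x_zn * y_s * EG_EV["ZnS"] + x_zn * (1 - y_s) * EG_EV["ZnSe"]
            + (1 - x_zn) * y_s * EG_EV["CdS"]
            + (1 - x_zn) * (1 - y_s) * EG_EV["CdSe"])

def emission_nm(eg_ev):
    return 1239.84 / eg_ev  # lambda (nm) = hc / E_g

for x, y, label in [(0.85, 0.20, "Zn/Se-rich (blue-ish)"),
                    (0.30, 0.35, "intermediate (green-ish)"),
                    (0.10, 0.20, "Cd/Se-rich (red-ish)")]:
    eg = bandgap_ev(x, y)
    print(f"{label}: Eg ~ {eg:.2f} eV -> ~{emission_nm(eg):.0f} nm")
```

Even this crude estimate reproduces the qualitative trend reported above: Zn- and S-rich compositions push the gap toward the blue, while Cd- and Se-rich compositions pull it toward the red.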
Figure 3 shows simultaneous multi-colour lasing behaviour from a representative sample under uniform pumping after being transferred onto an MgF 2 substrate. This sample was intentionally cleaved from a longer piece by the bend-to-fracture method 43 to create a high-quality reflecting end facet. The main part of the multi-colour lasing cavity (enclosed by the white dashed box in Fig. 3a ) had dimensions of 28.0 μm × 18.0 μm × 0.3 μm. This particular nanosheet also featured narrow protruding segments of irregular shape extending to the right side of the white box, indicating that this nanosheet was imperfectly detached from a larger nanosheet (for a detailed study of the sizes and uniformity of our material, see Supplementary Section 16 ). Figure 3b presents a photoluminescence image under single pulse pumping at 1.2 μJ. Largely uniform spontaneous emission can be seen, with strong scattering from the edges and an additional strong line in the middle of the nanosheet across the green- and light-red-emitting segments due to a bent edge created during transfer of the nanosheet. There is also weaker scattering of the emission due to surface corrugation, which is discussed in detail in Supplementary Section 13 . Figure 3c shows the emission at 4.0 µJ pulse energy. It is clear that the uniform emission has been replaced by more pronounced scattering lines and a significantly diminished background, indicating that the transition from spontaneous emission to lasing has occurred. The spectral evolution with increasing pulse energy is shown in Fig. 3d . At the lowest pumping energy of 1.3 μJ, only broadband spontaneous emission is observed. On increasing the pumping energy from 1.8 μJ to 3.3 μJ, narrow peaks at red (642 nm and 675 nm), green (530 nm) and blue (484 nm) colours appear sequentially. Both the intensity and the number of peaks of each colour increase with pumping energy, which can be attributed to well-known multimode lasing behaviour 44 . A lasing wavelength span of 191 nm, much larger than the gain bandwidth of any reported monolithic semiconductor, can be observed. The light-red lasing (642 nm) was generated from the cation-terminated growth front and appears as the red-emitting segment in Fig. 3b,c . The deep-red lasing (675 nm) was generated from the anion-terminated growth front on the opposite side of the nanosheet, which is not visible in Fig. 3b,c because of the weak response of our camera at 675 nm (see the Infinity2-1R Camera Datasheet at ). Due to the large size of the nanosheet structures, different longitudinal and transverse modes cause very close mode spacing 24 , so more modes are excited above the threshold with increasing pumping intensity. At a pumping level of 3.9 μJ, multimode lasing can be clearly observed. High-resolution photoluminescence measurements ( Supplementary Fig. 13 and Supplementary Section 10 ) reveal that the linewidth of each individual mode narrows down to ∼ 0.4 nm once pumped above the lasing threshold. Figure 3: Simultaneous multi-colour lasing from a single multi-segment heterostructure nanosheet. a , Bright-field optical microscope image. The dashed white box indicates the main part of the cavity. b , c , Real colour images under low ( b ) and high ( c , above threshold) pumping of a single pumping beam (180 μm in diameter). Scale bar, 10 µm. The three strong vertical lines indicate significant scattering from the two edges and from the bent edge in the centre. 
d , Spectra at different pumping levels (as labelled on the lower right side of the figure). The intensities of the first two spectra have been multiplied by a factor of 5 to show the details. Full size image Further evidence for multi-colour lasing behaviour can be seen for each of the four colours in Fig. 3d in the light-in–light-out curves plotted in Fig. 4a–d , together with theoretical fittings based on multimode lasing equations 45 . Typical S-like curves covering the three regimes of operation are clear in the double-log-scale plots. For all four colours, both the spontaneous emission regime, dominating at lower pumping intensity, and the stimulated emission regime, dominating at higher pumping intensity, have slopes of ∼ 1. The maximum slope of the superlinear transition regime, in which amplified spontaneous emission is the dominant process, varies by colour. These transition slopes are 2.1, 7.3, 2.7 and 2.2 for blue, green, light red and deep red, respectively. Because the maximum slope of the superlinear regime represents the most dramatic transition from spontaneous emission to lasing, this is often used to define the lasing threshold. Based on the four individual light-in–light-out plots, the thresholds for blue, green, light red and deep red lasing are 3.3, 2.9, 2.0 and 3.0 μJ, respectively, for single pulse excitation, corresponding to power densities of 1,441, 1,266, 873 and 1,310 kW cm –2 , respectively. A more detailed study of the lasing threshold, output power, lasing efficiency and stability of our multi-segment heterostructure nanosheets is provided in Supplementary Sections 17, 15 and 19 . Figure 4: Light-in–light-out curves with multimode lasing fitting. a – d , Light-in–light-out curves of the 484, 530, 642 and 675 nm lasing peaks on a log–log scale. Insets: plots on a linear scale. Circles represent direct measurements and solid lines are fits using multimode laser theory. Full size image Full-colour range tuning and white lasing To illustrate the potential of our multi-segment heterostructure nanosheet for general illumination, we studied dynamic tuning of the mixed colours in the full-colour range and white colour lasing in particular. Three beams were focused into long, narrow, parallel stripes ( Supplementary Fig. 11b , Supplementary Section 9 ) to pump one of the three segments each. The power of each pumping beam could be adjusted independently to allow for precise, independent tuning of the lasing intensity of each colour. As a result, the mixed lasing colours in the far field can be controllably varied in the full-colour range and the desired white colour can be achieved. The results are summarized in Fig. 5 , with the photoluminescence image and the structure of the nanosheet shown in Fig. 5c . By pumping one, two and all three of the segments above their lasing thresholds, we demonstrate independent lasing of each RGB colour, simultaneous two-colour lasing of any two of the three primary colours, and finally simultaneous RGB lasing (shown in Fig. 5d–j ). Figure 5a presents the emission spectra for all seven cases, while Fig. 5b shows the calculated chromaticity for these lasing spectra on a CIE1931 colour diagram (red, green, blue, magenta, yellow, cyan and white, respectively; Supplementary Section 11 ). The chromaticity of the carefully balanced white lasing is very close to that of the white point CIE standard white illuminant D65 46 (shown in Fig. 5b ). 
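Two quantities from the light-in/light-out analysis above are easy to reproduce: the conversion from per-pulse energy to peak power density, and the threshold taken as the point of maximum slope on the log-log curve. A minimal sketch of our own, using the pump geometry quoted in the text; the pump and output arrays in the second function are hypothetical measured data.

```python
import numpy as np

def peak_power_density_kw_cm2(pulse_uj, pulse_ns=9.0, spot_um=180.0):
    """Peak power density of a circular pump spot from per-pulse energy,
    using the 9 ns pulse and 180 um spot quoted above."""
    area_cm2 = np.pi * (spot_um * 1e-4 / 2.0) ** 2
    return pulse_uj * 1e-6 / (pulse_ns * 1e-9) / area_cm2 / 1e3

print(peak_power_density_kw_cm2(3.3))  # ~1441 kW/cm^2, the quoted blue threshold

def threshold_from_ll(pump, out):
    """Threshold as the pump level of maximum local slope of the
    light-in/light-out curve on a log-log scale, as defined above."""
    slope = np.gradient(np.log(out), np.log(pump))
    i = int(np.argmax(slope))
    return pump[i], slope[i]
```

Applying the first function to the 3.3, 2.9, 2.0 and 3.0 μJ thresholds recovers the quoted power densities to within rounding.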
In addition, according to Grassmann's law, all colours inside the triangle formed by the three elementary colours can be realized through appropriate mixing of three colours. One of our best samples (sample #4, Supplementary Section 12 ) can cover 70% more perceptible colours than the standard RGB 47 space after converting to a perceptually uniform colour space ( Supplementary Section 12 and Supplementary Fig.15 ). Such a large colour gamut, enabled by the flexibility of our approach, could be used in the production of high-contrast-ratio displays with better colour saturation than is available today. Figure 5: White and full-colour tunable lasing. a , Lasing spectra when the blue (B), green (G), red (R), red and green (R+G), green and blue (G+B), red and blue (R+B) and red, green and blue (R+G+B) segments are pumped above their respective thresholds. b , Chromaticity of the lasing peaks extracted from the spectra in a , shown as seven white circles (see Supplementary Section 11 for details of the chromaticity calculation.) The chromaticity of the R+G+B lasing is close to the CIE standard white illuminant D65. Dashed lines indicate the range of the achievable colour gamut for this particular heterostructure. c – j , Real colour images under low pumping ( c ) and above-threshold pumping ( d – j ) for the various colour cases corresponding to the seven spectra in a . Scale bar, 20 µm. Full size image To show the far-field mixing of colours from our multi-colour lasers, real colour photographs were taken ( Fig. 6 ) of the laser output while using dynamic differential pumping (corresponding to the cases of Fig. 5 ) to control the lasing colour. Figure 6b–d shows independent red, green and blue lasing achieved by pumping each colour segment individually. Yellow, cyan and magenta mixed lasing emissions were produced by pumping two of the segments, as shown in Fig. 6e–g . Finally, with three beams pumping all three colour segments, simultaneous RGB lasing emission from a single multi-segment heterostructure nanosheet can be mixed together to render as a white-like colour, as shown in Fig. 6h . This colour mixing demonstration provides an experimental proof-of-concept for the use of our multi-colour lasing structure in illumination and display applications. Figure 6: Colour photographs. a , Photograph of the mixed emission colour from a multi-segment heterostructure nanosheet. (Note that the blue emission visible is from the adhesive between the MgF 2 substrate and the glass slide.) b – h , Photographs of the enlarged dashed-box region in a when the different combinations of segments are pumped as indicated by the labels inside each figure, creating the mixed far-field emission colours red, green, blue, yellow, cyan, magenta and white. The top dots in each photograph are the direct image of laser emission, while the tails under these dots are the reflection from the substrate. Full size image Conclusions We have demonstrated simultaneous RGB lasing from individual monolithic ZnCdSSe nanosheets at room temperature. White and tunable lasing over the full range of visible colours has been achieved through controlled pumping of different segments in a multi-segment heterostructure nanosheet of ZnCdSSe quaternary alloys. The key to our successful demonstration lies in the development of a unique growth strategy that exploits the interplay of the VLS, VS and dual-ion exchange mechanisms. 
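The chromaticity analysis behind Fig. 5b and the Grassmann mixing argument above can be sketched as follows. This is our own minimal rendering: the CIE 1931 colour-matching functions are assumed to be supplied as a standard wavelength-indexed table (not included here), and the function names are ours.

```python
import numpy as np

def chromaticity(wl_nm, spectrum, cmf):
    """CIE 1931 chromaticity (x, y) and tristimulus sum of a spectrum.
    cmf: (N, 4) table [wavelength_nm, xbar, ybar, zbar] of colour-matching
    functions, assumed loaded from a standard CIE table."""
    bars = [np.interp(wl_nm, cmf[:, 0], cmf[:, k]) for k in (1, 2, 3)]
    X, Y, Z = (np.trapz(spectrum * b, wl_nm) for b in bars)
    s = X + Y + Z
    return X / s, Y / s, s

# Grassmann's law: tristimulus values add linearly, so a mixture's (x, y) is
# the average of the component points weighted by their tristimulus sums;
# any point inside the R/G/B triangle of Fig. 5b is therefore reachable.
def mix_chromaticity(xy_points, xyz_sums):
    p = np.asarray(xy_points, float)
    w = np.asarray(xyz_sums, float)
    return tuple(w @ p / w.sum())
```

Balancing the three pump powers then amounts to adjusting the weights until the mixture lands on the D65 white point at roughly (0.313, 0.329).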
Through a detailed characterization and understanding of this growth mechanism, we have designed an optimized growth sequence for each segment, allowing for the growth of nanosheets composed of several parallel segments with the desired alloy compositions, geometry and crystal quality. The parallel cavity arrangement significantly reduces the absorption of shorter-wavelength photons by the narrower-bandgap material when compared with composition-graded one-dimensional nanostructures 25 . As a result, lasing over a wavelength range of 191 nm in a single monolithic structure, the largest ever reported, has been observed. Finally, it is important to note that the term ‘white laser’, which associates concepts linked to broadband and monochromatic emission, might at first appear self-contradictory. Our results demonstrate that the apparently contradictory terms ‘white’ and ‘lasing’ can both be realized in a single monolithic structure. Our work greatly simplifies the process of creating monolithic laser structures with dynamically colour-controllable emissions, and is an important first step towards the realization of an electrically driven white-colour and full-colour nanolaser from a single monolithic structure. Methods Sample synthesis Multi-segment heterostructure nanosheets (MSHNs) were grown on a SiO 2 /Si substrate by a combination of VLS, VS and dual-ion exchange growth mechanisms using a single zone CVD horizontal tube (1.5 inches wide, 4 foot long) reactor. The substrate serves only as a mechanical support. MSHNs are otherwise grown as freestanding structures without epitaxial connection with the substrate, because the original growth substrates can be amorphous or non-lattice-matched crystals. The overall set-up and VLS growth mechanism are described in several earlier papers 24 , 33 , 48 and are shown schematically in Fig. 1a . CdSe and ZnS powders (Sigma Aldrich, 99.99% metal basis) were used as source materials. Specific details that are new to the present paper are described in Fig. 1 and in Supplementary Section 5 . Before growth, the SiO 2 /Si substrate was cleaned and then coated with a 10 nm layer of sputtered Au. After placing ZnS (at the centre, T = 980 °C), CdSe (12 cm upstream from the centre, T ≈ 840 °C) and the substrate (16 cm downstream from the centre, T ≈ 640 °C) inside the reactor, as shown in Fig. 1 , the system was evacuated to 30 mtorr. A 10 s.c.c.m. N 2 inert gas flow was introduced for 30 min to purge the reactor of O 2 . The system pressure was then set to 10 torr with backfilled N 2 and the furnace was turned on. After completion of the growth, the furnace was turned off and allowed to cool naturally to room temperature while continuing the 10 s.c.c.m. N 2 flow. The growth time in each step of the experiment was shorter than 12 min to prevent source depletion. Material characterization Structural characterization of the MSHNs was carried out using a JEOL 2010F TEM equipped with a Link energy-dispersive X-ray spectroscopy detector at 200 kV and an FEI XL30 environmental SEM at 15 kV. It is worth noting that the sample used for TEM characterization was deliberately chosen from a broken nanosheet piece to eliminate bending during dispersion onto the TEM grid, which would introduce difficulty in finding the zone axis. As-grown structures were dispersed onto a glass substrate via contact-printing 49 and then moved to a TEM grid using a homemade tapered fibre. 
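For reference, the sample-synthesis protocol above can be distilled into a machine-readable recipe. The numeric values are those quoted in the Methods; the structure, field names and step labels are our own hypothetical encoding.

```python
# Sample-synthesis protocol distilled from the Methods text above.
recipe = {
    "sources": {
        "ZnS":  {"position": "furnace centre", "T_C": 980},
        "CdSe": {"position": "12 cm upstream", "T_C": 840},
    },
    "substrate": {"position": "16 cm downstream", "T_C": 640,
                  "prep": "cleaned SiO2/Si + 10 nm sputtered Au"},
    "steps": [
        ("evacuate", {"pressure_mtorr": 30}),
        ("purge",    {"gas": "N2", "flow_sccm": 10, "minutes": 30}),
        ("backfill", {"gas": "N2", "pressure_torr": 10}),
        ("grow",     {"max_minutes_per_step": 12}),  # avoids source depletion
        ("cool",     {"gas": "N2", "flow_sccm": 10, "mode": "natural"}),
    ],
}
```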
Surface characterization was performed with a Bruker Dimension 3000 atomic force microscope (AFM) in tapping mode using a standard pyramidal SiN tip at a scan rate of 0.5 Hz. This set-up provides a Z -resolution of ∼ 0.07 nm. Optical measurements Samples were first cleaved from as-grown pieces by the bend-to-fracture method 43 . High-quality end facets were thus created as partially reflective mirrors to define the laser cavities. All optical measurements were performed at room temperature. After the fracture, samples were then transferred individually onto MgF 2 substrates (refractive index of ∼ 1.38) for enhanced optical confinement using a homemade tapered fibre. In the single-beam uniform pumping experiment (schematically shown in Supplementary Fig. 11a ) for the multi-colour lasing shown in Figs 3 and 4 , the third harmonic of a Q -switched Nd: YAG laser (Spectra Physics) was used to uniformly pump the individual samples ( λ = 355 nm, repetition rate of 10 Hz, pulse width of 9 ns) at an angle of 15° from the sample normal, with a spot size of 180 µm. The resulting emission was collected through a dark-field objective lens (Olympus LMPlanLF ×50, numerical aperture of 0.5) at an angle of 45° from the sample normal to enhance the collection efficiency of stimulated emission and reduce the spontaneous emission background. The collected light was then directed though a beamsplitter and gathered simultaneously by a charge-coupled device (CCD) camera (Lumenera Infinity 2-1R) and monochromator (Triax 320) equipped with a Si array detector (Jobin Yvon CCD). A long-wavelength-pass 405 filter (Semrock EdgeBasic) was used to remove the pump laser wavelength from the collected spectra. In the lasing colour tuning and mixing experiments shown in Figs 5 and 6 , the third harmonic of a Q -switched Nd:YLF laser (Spectra Physics, λ = 349 nm, repetition rate of 10 Hz, pulse width of 5 ns) was used to provide better pulse-to-pulse stability. Details of the set-up are shown in Supplementary Fig. 11b . The laser was first directed through a set of cylindrical lenses to change the length-to-width ratio of the laser spot. After splitting into three separate beams, each beam was then directed through three independent polarizer and analyser set-ups and focused to a 260 µm × 5 µm narrow stripe-like spot parallel to the cavity length direction. The same collection set-up was used for photoluminescence images and spectra recording, with only the collection angle of the objective being changed. An additional CCD camera (Nikon D60) was used to take ×1 magnification photographs to demonstrate the far-field colour mixing. The laser pump angle, objective collection angle and camera collection angle were 60°, 0° and 75°, respectively, from the sample normal (for detailed optical set-up information see Supplementary Section 9 ). | More luminous and energy efficient than LEDs, white lasers look to be the future in lighting and light-based wireless communication. While lasers were invented in 1960 and are commonly used in many applications, one characteristic of the technology has proven unattainable. No one has been able to create a laser that beams white light. Researchers at Arizona State University have solved the puzzle. They have proven that semiconductor lasers are capable of emitting over the full visible color spectrum, which is necessary to produce a white laser. 
The researchers have created a novel nanosheet – a thin layer of semiconductor that measures roughly one-fifth of the thickness of human hair in size with a thickness that is roughly one-thousandth of the thickness of human hair – with three parallel segments, each supporting laser action in one of three elementary colors. The device is capable of lasing in any visible color, completely tunable from red, green to blue, or any color in between. When the total field is collected, a white color emerges. The researchers, engineers in ASU's Ira A. Fulton Schools of Engineering, published their findings in the July 27 advance online publication of the journal Nature Nanotechnology. Cun-Zheng Ning, professor in the School of Electrical, Computer and Energy Engineering, authored the paper, "A monolithic white laser," with his doctoral students Fan Fan, Sunay Turkdogan, Zhicheng Liu and David Shelhammer. Turkdogan and Liu completed their doctorates after this research. The technological advance puts lasers one step closer to being a mainstream light source and potential replacement or alternative to light emitting diodes (LEDs). Lasers are brighter, more energy efficient, and can potentially provide more accurate and vivid colors for displays like computer screens and televisions. Ning's group has already shown that their structures could cover as much as 70 percent more colors than the current display industry standard. Another important application could be in the future of visible light communication in which the same room lighting systems could be used for both illumination and communication. The technology under development is called Li-Fi for light-based wireless communication, as opposed to the more prevailing Wi-Fi using radio waves. Li-Fi could be more than 10 times faster than current Wi-Fi, and white laser Li-Fi could be 10 to 100 times faster than LED based Li-Fi currently still under development. "The concept of white lasers first seems counterintuitive because the light from a typical laser contains exactly one color, a specific wavelength of the electromagnetic spectrum, rather than a broad-range of different wavelengths. White light is typically viewed as a complete mixture of all of the wavelengths of the visible spectrum," said Ning, who also spent extended time at Tsinghua University in China during several years of the research. In typical LED-based lighting, a blue LED is coated with phosphor materials to convert a portion of the blue light to green, yellow and red light. This mixture of colored light will be perceived by humans as white light and can therefore be used for general illumination. Sandia National Labs in 2011 produced high-quality white light from four separate large lasers. The researchers showed that the human eye is as comfortable with white light generated by diode lasers as with that produced by LEDs, inspiring others to advance the technology. This photo collage, supplied by the researchers, shows the mixed emission color from a multi-segment nanosheet in the colors of red, green, blue, yellow, cyan, magenta and white. The top dots in each photograph are the direct image of laser emission, while the tails under these dots are the reflection from the substrate. Credit: ASU/Nature Nanotechnology "While this pioneering proof-of-concept demonstration is impressive, those independent lasers cannot be used for room lighting or in displays," Ning said. "A single tiny piece of semiconductor material emitting laser light in all colors or in white is desired." 
Semiconductors, usually a solid chemical element or compound arranged into crystals, are widely used for computer chips or for light generation in telecommunication systems. They have interesting optical properties and are used to make lasers and LEDs because they can emit light of a specific color when a voltage is applied to them. The most preferred light emitting material for semiconductors is indium gallium nitride, though other materials such as cadmium sulfide and cadmium selenide also are used for emitting visible colors. The main challenge, the researchers noted, lies in the way light emitting semiconductor materials are grown and how they work to emit light of different colors. Typically a given semiconductor emits light of a single color – blue, green or red – that is determined by a unique atomic structure and energy bandgap. The "lattice constant" represents the distance between the atoms. To produce all possible wavelengths in the visible spectral range you need several semiconductors of very different lattice constants and energy bandgaps. "Our goal is to achieve a single semiconductor piece capable of laser operation in the three fundamental lasing colors. The piece should be small enough, so that people can perceive only one overall mixed color, instead of three individual colors," said Fan. "But it was not easy." "The key obstacle is an issue called lattice mismatch, or the lattice constant being too different for the various materials required," Liu said. "We have not been able to grow different semiconductor crystals together in high enough quality, using traditional techniques, if their lattice constants are too different." The most desired solution, according to Ning, would be to have a single semiconductor structure that emits all needed colors. He and his graduate students turned to nanotechnology to achieve their milestone. The key is that at nanometer scale larger mismatches can be better tolerated than in traditional growth techniques for bulk materials. High quality crystals can be grown even with large mismatch of different lattice constants. Recognizing this unique possibility early on, Ning's group started pursuing the distinctive properties of nanomaterials, such as nanowires or nanosheets, more than 10 years ago. He and his students have been researching various nanomaterials to see how far they could push the limit of advantages of nanomaterials to explore the high crystal quality growth of very dissimilar materials. Six years ago, under U.S. Army Research Office funding, they demonstrated that one could indeed grow nanowire materials in a wide range of energy bandgaps so that color tunable lasing from red to green can be achieved on a single substrate of about one centimeter long. Later on they realized simultaneous laser operation in green and red from a single semiconductor nanosheet or nanowires. These achievements triggered Ning's thought to push the envelope further to see if a single white laser is ever possible. Blue, necessary to produce white, proved to be a greater challenge with its wide energy bandgap and very different material properties. "We have struggled for almost two years to grow blue emitting materials in nanosheet form, which is required to demonstrate eventual white lasers, " said Turkdogan, who is now assistant professor at University of Yalova in Turkey. 
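To put rough numbers on the lattice-mismatch obstacle just described, the short sketch below compares approximate wurtzite a-axis lattice constants; the values are rounded literature figures assumed here for illustration only.

```python
# Approximate wurtzite a-axis lattice constants in angstroms (assumed,
# rounded literature values).
A_AXIS = {"ZnS": 3.82, "ZnSe": 3.98, "CdS": 4.14, "CdSe": 4.30}

def mismatch_pct(m1, m2):
    """Relative lattice mismatch between two materials, in percent."""
    return 100.0 * abs(A_AXIS[m1] - A_AXIS[m2]) / A_AXIS[m2]

print(f"ZnS vs CdSe: ~{mismatch_pct('ZnS', 'CdSe'):.0f}% mismatch")
# ~11%, roughly an order of magnitude beyond the ~1% that conventional
# planar epitaxy typically tolerates
```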
After exhaustive research, the group finally came up with a strategy to create the required shape first, and then convert the materials into the right alloy contents to emit the blue color. Turkdogan said, "To the best of our knowledge, our unique growth strategy is the first demonstration of an interesting growth process called dual ion exchange process that enabled the needed structure." This strategy of decoupling structural shapes and composition represents a major change of strategy and an important breakthrough that finally made it possible to grow a single piece of structure containing three segments of different semiconductors emitting all needed colors and the white lasers possible. Turkdogan said that, "this is not the case, typically, in the material growth where shapes and compositions are achieved simultaneously." While this first proof of concept is important, significant obstacles remain to make such white lasers applicable for real-life lighting or display applications. One of crucial next steps is to achieve the similar white lasers under the drive of a battery. For the present demonstration, the researchers had to use a laser light to pump electrons to emit light. This experimental effort demonstrates the key first material requirement and will lay the groundwork for the eventual white lasers under electrical operation. | 10.1038/nnano.2015.149 |
Computer | Artificial intelligence finds disease-related genes | Sanjiv K. Dwivedi et al, Deriving disease modules from the compressed transcriptional space embedded in a deep autoencoder, Nature Communications (2020). DOI: 10.1038/s41467-020-14666-6 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-14666-6 | https://techxplore.com/news/2020-02-artificial-intelligence-disease-related-genes.html | Abstract Disease modules in molecular interaction maps have been useful for characterizing diseases. Yet biological networks, that commonly define such modules are incomplete and biased toward some well-studied disease genes. Here we ask whether disease-relevant modules of genes can be discovered without prior knowledge of a biological network, instead training a deep autoencoder from large transcriptional data. We hypothesize that modules could be discovered within the autoencoder representations. We find a statistically significant enrichment of genome-wide association studies (GWAS) relevant genes in the last layer, and to a successively lesser degree in the middle and first layers respectively. In contrast, we find an opposite gradient where a modular protein–protein interaction signal is strongest in the first layer, but then vanishing smoothly deeper in the network. We conclude that a data-driven discovery approach is sufficient to discover groups of disease-related genes. Introduction A trend in systems medicine applications is to increasingly utilize the fact that disease genes are functionally related and their corresponding protein products are highly interconnected within networks, thereby forming disease modules 1 , 2 . Those modules defines systematic grouping of genes based on their interaction, which circumvents part of the previous problems using gene-set enrichment analysis 3 that require pathway derived gene-sets, which are less precise since key disease pathways are highly overlapping 1 , 4 . Several module-based studies have been performed on different diseases by us and others, defining a disease module paradigm 1 , 2 , 5 , 6 . The modules generally contain many genes and a general principle for validation has been to use genomic concordance, i.e., the module derived from gene expression and protein interactions can be validated by enrichment of disease-associated SNPs from GWAS. The genomic concordance principle was also used in a DREAM challenge that compared different module-based approaches 7 . Yet these studies require as a rule knowledge of protein–protein interaction (PPI) networks to define such modules, which by their nature are incomplete, and either biased toward some well-studied disease genes 8 , with a few exceptions 9 , 10 , or derived from simple gene–gene correlation studies. Deep artificial neural networks (DNNs) are revolutionizing areas such as computer vision, speech recognition, and natural language processing 11 , but only recently emerging to have an impact on systems and precision medicine 12 . For example, the performance of the top five error rates for the winners in the international image recognition challenge (ILSVRC) dropped from 20% in 2010 to 5% in 2015 upon the introduction of deep learning using pretrained DNNs that were refined using transfer learning 13 . DNN architectures are hierarchically organized layers including a learning rule with nonlinear transformations 14 . 
The layers in a deep learning architecture correspond to concepts or features in the learning domain, where higher-level concepts are defined or composed from lower-level ones. Variational autoencoders (VAEs) is one example of a DNN that aims to mimic the input signal using a compressed representation, where principal component analysis represents the simplest form of a shallow linear AE. Given enough data, deep AEs have the advantage of being able both to create relevant features from raw data and identify highly complex nonlinear relationships, such as the famous XOR switch, which is true if only one of its inputs is true (Fig. 1a ). Fig. 1: Schematic diagram of interpreting an autoencoder and defining the disease modules. a Training an autoencoder. b The steps of light-up method used for interpreting the hidden layer nodes in terms of PPI and pathways. c Depicts the steps of predicting the disease gene using transcriptomics signals and autoencoder. Full size image Although omics repositories have increased in size lately, they are still several orders of magnitude smaller compared to image data sets used for ILSVRC. Therefore, effective DNNs should be based on as much omics data as possible, potentially using transfer learning from the prime largest repositories and possibly also incorporating functional hidden node representation using biological knowledge. The LINCS project defined and collected microarrays measuring only ~1000 carefully selected landmark genes, which they used to impute 95% of the remaining genes 15 . Note that this compression can at best work for mild perturbations of a cell for which the DNN has been trained to fit. Hence, they may not generalize well on new knockdown experiments 16 . Although interesting and useful for prediction purposes, those representations in a DNN cannot readily be used for data integration or serve as biological interpretation. For that purpose, Tan et al. used denoising AEs derived from the transcriptomics of Pseudomonas aeruginosa and discovered a representation where each node coincided with known biological pathways 17 . Chen et al. used cancer data and showed that starting from pathways represented as a priori defined hidden nodes, allowed the investigators to explain 88% of variance, which in turn produced an interpretable representation 18 . Recently a few authors have shown that unbiased data-driven compression can learn meaningful representations from unlabeled data, which predicted labeled data of single-cells RNA-seq 19 , 20 and drug responses 21 , 22 . These results demonstrate that AEs can use predefined functional representations, and can learn such representations from input data that can be used for other purposes in transfer learning approaches. However, a systematic evaluation of how to balance between predefined features versus purely data-driven learning remains to be determined. To address this question, the interpretation of the representations within NNs is fundamental. The most commonly used tool for this purpose is the Google DeepDream 23 . Briefly, the output effect is analyzed using the light-up of an activation of one hidden node, followed by a forward-propagation of the input to the output. This allows the user to interpret the net effect of a certain node and is referred to by us as light-up (Fig. 1b ). 
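A minimal sketch of the light-up probe follows; it is our own rendering, where `decode` stands for the trained decoder half of the autoencoder (any framework callable mapping a 512-dimensional code to gene space), and the baseline subtraction isolates the net effect of a single node.

```python
import numpy as np

def light_up(decode, node_idx, width=512, top_k=50):
    """Activate one hidden node, forward-propagate through the trained
    decoder, and rank output genes by the magnitude of the induced change."""
    baseline = np.zeros((1, width), dtype=np.float32)
    probe = baseline.copy()
    probe[0, node_idx] = 1.0                         # light up a single node
    delta = np.asarray(decode(probe)) - np.asarray(decode(baseline))
    return np.argsort(-np.abs(delta.ravel()))[:top_k]
```

The returned indices can then be tested against pathway or interaction annotations, as done for PPI and pathways in Fig. 1b.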
In this work, we investigated different AE architectures searching for a minimal representation that explained gene expression, which in our hands resulted in a 512-node wide and three-layered deepAE capturing ~95% of the variance in the data. Next, we derived a novel reverse supervised training-based approach based on light-up of the top of the transferred representations of the trained AE that defined the disease module (Fig. 1c ). Using the third layer of the AE we identified disease modules for eight complex diseases and cancers which were all validated by highly significant enrichment of GWAS genes of the same disease. In order to understand the role of each of the hidden representations we tested whether they corresponded to genes that were functionally related and disease associated genes. First, unsupervised analysis of the samples in the AE space showed that disease cluster in all layers, while cell types clustered only in the third layer. Then, we decoded the meaning of the outputs from the nonlinear transformations that defined the compressed space of the autoencoder. To this end we utilized closeness and centrality of the PPI data in STRING 24 , as a first step to test if the derived gene-sets was linked to previous disease module research. Conversely, we found that genes within the same hidden AE node in the first layer were highly interconnected in the STRING network, which gradually vanished across the layers. In summary, we believe that our data-driven analysis using deepAE with a subsequent knowledge-based interpretation scheme, enables systems medicine to become sufficiently powerful to allow unbiased identification of complex novel gene-cell type interactions. Results A deepAE with 512 nodes explained 95% of variance Training neural networks requires substantial, well-controlled big data. In this study we therefore performed our analysis using the 27,887 quality-controlled and batch-effect-corrected Affymetrix HG-U133Plus2 array compendium, thus encompassing data from multiple laboratories 25 . Furthermore, the data had previously been analyzed using cluster analysis and linear projection techniques such as principal component analysis 25 . The data set and the ensuing analysis therefore constitute a solid reference point based on which we are in a good position to ask whether successive nonlinear transformations of the data would induce a biologically useful representation(s). Specifically, we investigate whether disease-relevant modules could be discovered by training an autoencoder (AE) using this data set. The underlying hypothesis being that an autoencoder compression represents a nonlinear unbiased representation of the data. Similarly to the knowledge-driven disease module hypothesis 2 , closeness within the autoencoder space suggest functional similarity and could be used to identify upstream disease factors. To this end we partitioned the data into 20,000 training and 7887 test samples. We trained AEs of different widths from 64 to 1024 hidden nodes, incremented stepwise in powers of two, and we contrasted two depths in our analysis, i.e., a single-layered coded shallow AE (shallowAE) and a deep triple layered AE. Depth refers to that the encoder and decoder both contains one extra hidden layer generating two weight sets, respectively, in contrast to the shallow AE which has only a single weight set for encoder and decoder respectively 26 . 
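A sketch of the three-layer, 512-node architecture in Keras follows. It is our own minimal rendering: the text specifies only the widths and depth, so the activations, optimizer, loss and the placeholder gene count are assumptions.

```python
from tensorflow.keras import layers, models

n_genes, width = 20000, 512   # gene count is a placeholder; width follows
                              # the 64..1024 sweep described in the text
x_in  = layers.Input(shape=(n_genes,))
h1    = layers.Dense(width, activation="relu", name="layer1")(x_in)   # encoder
code  = layers.Dense(width, activation="relu", name="layer2")(h1)     # bottleneck
h3    = layers.Dense(width, activation="relu", name="layer3")(code)   # decoder
x_out = layers.Dense(n_genes, activation="linear")(h3)

deep_ae = models.Model(x_in, x_out)
deep_ae.compile(optimizer="adam", loss="mse")  # reconstruction objective
# deep_ae.fit(X_train, X_train, validation_data=(X_test, X_test), ...)
```

The shallowAE comparison drops layer1 and layer3, leaving a single hidden layer between input and output.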
We calculated the mean squared training and test error (measured as error = 1 − R 2 , where R 2 is computed globally over all genes using the global data variance (Fig. 2a ) and locally for each gene individually using gene-wise variances (Fig. 2b, c )). Comparing the reconstruction errors of the different AEs we found, not surprisingly, that the shallowAE performed poorly (>15% error) whenever we used fewer than 1024 hidden nodes, whereas increasing the number of nodes to 1024 reduced the error threefold to ~5%. In contrast, the deepAE performed well already for 64 hidden nodes (11% error), and the error subsequently decreased following a power law up to 512 hidden nodes, best described by \(R^2 = 0.89 \times 2^{0.028(x - 64)}\) , where x is the number of hidden nodes. Next, we analyzed the gene-wise R 2 distributions (Fig. 2b, c ), which showed that the median gene-wise error was also low ( R 2 > 0.86) already for the 512-node deepAE. In summary, we found that the deepAE with 512 hidden nodes performed comparably to the shallowAE with 1024 nodes, although the latter included almost twice as many parameters. Since the purpose of our study was to discover biologically meaningful disease modules, we used the 512-node deepAE in the remaining part of the paper, as this architecture provided an effective compression of the data. Fig. 2: Deep autoencoder (deepAE) outperformed shallow autoencoder (shallowAE) up to 512 hidden nodes in terms of accuracy. 1 − coefficient of determination ( R 2 ), in the training and validation sets, using the full data set variance ( a ) and the gene-wise variances ( b , c ). The left panel shows the mean behavior of R 2 values on the full data set. The distribution of R 2 values across each gene is shown for both models, shallowAE ( b ) and three-layer deepAE ( c ), with the number of hidden nodes in each layer increasing from 64 to 1024. Full size image GWAS genes were highly enriched in the third hidden layer Our overarching aim was to assess to what extent the compressed expression representation within a deepAE could capture molecular disease-associated signatures in a data-driven manner. To this end we downloaded well-characterized genetic associations for each of the diseases in our data set 27 . From these data we found seven diseases in our gene expression compendium for which at least 100 genes were found in DisGeNET, which we reasoned provided sufficient power to perform statistical enrichment analysis. These included asthma, colon carcinoma, colorectal carcinoma, Crohn’s disease, nonsmall cell lung cancer, obesity and ulcerative colitis. In order to associate genes upstream of a disease we designed a procedure which we refer to as reverse training (“Methods”). Briefly, using our hidden node representation and the phenotype vectors (represented as binary coded diseases), we designed a training procedure to predict the gene expression, referred to as ‘reverse’ since we explicitly used the hidden node representation. This procedure was repeated three times, using one hidden layer as input at a time, and as a comparison we also included the shallowAE. As a result, we obtained a gene ranking for each disease based on our functional hidden node representation. Next, we assessed the relevance of this representation by computing the overlap of the top 1000 genes, using a hyper-geometric test for each disease against GWAS (Fig. 3a, b ), and as a complementary analysis using disease ontology (Supplementary Fig. 1 ).
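The enrichment computation described here is a standard one-sided hypergeometric (Fisher) test on the overlap between the top-ranked genes and a disease gene set. The sketch below uses scipy; all counts are made-up placeholders for illustration.

```python
from scipy.stats import hypergeom

N = 19000   # background: all genes considered in the ranking
K = 300     # disease-associated genes (e.g., DisGeNET/GWAS) in the background
n = 1000    # top-ranked genes from the reverse training
k = 45      # overlap between the two sets

p_value = hypergeom.sf(k - 1, N, K, n)  # P(overlap >= k) under the null
odds_ratio = (k * (N - K - n + k)) / ((n - k) * (K - k))  # 2x2 table odds ratio
print(p_value, odds_ratio)
```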
Interestingly, we found a highly significant disease association for at least one layer in all tested diseases (Fisher’s exact test, 10 −8 < P < 0.05), and for four cases the strongest association was found using the full model. In all cases, one of the deepAE layers showed higher enrichment than the shallowAE. In order to validate the generality of this procedure we downloaded a new data set for multiple sclerosis (MS) on the same experimental platform 28 . For this data set we also performed a similar analysis of the control samples with other neurological diseases (OND), similar to the analysis performed in ref. 28 . Reassuringly, we found significant enrichment for MS patients in MS GWAS (Fisher exact test P = 1.1 × 10 −5 , odds ratio (OR) = 2.4, n = 30). Comparing these patients with OND patients showed lower enrichments (Fisher exact test P = 8.6 × 10 −3 , OR = 1.7, n = 22) (Fig. 3b ), and a similar number of top-ranked differentially expressed genes between MS and OND showed no significant enrichment (Fisher exact test P = 0.50, OR = 1.03, n = 13). Lastly, to test that the enrichment of GWAS genes in the third hidden layer was not due to batch effects or cell-type differences within the compendium, we performed similar tests for the control samples. For each of the eight studied diseases we found higher enrichment than in the controls (binomial test P = 3.5 × 10 −2 ; Supplementary Table 1 ), supporting the relevance of our disease signatures. Moreover, we compared our unsupervised AE approach followed by reverse training with a naive supervised neural network with 512 hidden nodes (“Methods”). For six out of eight diseases the supervised network showed lower enrichment of GWAS genes, with ties for colon carcinoma and nonsmall cell lung cancer. Taken together, the high enrichment of GWAS genes for the same disease supports our claim that our unbiased nonlinear approach can indeed identify relevant upstream markers, generally with a higher accuracy than shallower and narrower neural networks. Fig. 3: Disease association enrichment of autoencoder (AE)-derived gene sets. a , b Enrichment score (−log10(P)) resulting from the hyper-geometric test of the disease gene overlap of the genes predicted by the deep neural network derived from the first (green), second (blue) and third (violet) hidden layers of the deep autoencoder (deepAE). As references, we show a method based on a vanilla supervised neural network (orange) and the hidden layer of the 512-node shallow autoencoder (shallowAE; magenta). Panel b shows the MS analysis. c Fisher’s combined p value across all eight diseases predicted by the three-layer deep autoencoder. The dotted (brown) line corresponds to the p value cut-off of 0.05. Full size image Functionally similar samples colocalized in the third layer Next, we asked why disease genes preferentially associate with the third but not the other layers in a deepAE. However, disentangling what is represented by each layer in a deepAE is not straightforward and has previously been the target of other studies 29 . In order to provide insight into what each layer represented in our case, we performed unsupervised clustering of the samples using the compressed representation. Since this was still a 512-dimensional analysis, we further visualized the deepAE representation using the first two linear principal components (PCs) of the compressed space. This representation is henceforth referred to as the deepAE-PCA.
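The deepAE-PCA projection and the Silhouette scoring used in the comparisons below can be sketched with scikit-learn as follows; the compressed matrix and group labels are random stand-ins for the encoder output and the cell-type/disease annotations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

compressed = np.random.rand(1000, 512)        # stand-in for the encoder output
labels = np.random.randint(0, 5, size=1000)   # stand-in phenotype groups

pcs = PCA(n_components=2).fit_transform(compressed)  # the "deepAE-PCA"
si = silhouette_score(pcs, labels)  # SI in [-1, 1]; 1 = perfect phenotypic grouping
print(si)
```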
Previously it has been shown that classical PCA on the full ~20,000-dimensional gene space can discriminate cell types and diseases very well, and we therefore used it as a reference in our analysis 25 . To analyze whether samples close in these spaces were biologically more similar than two random samples, we computed the Silhouette index (SI) for phenotypically defined groups, defined by their cell type and disease status, respectively (Fig. 4 ). Note that SI = 1 reflects a perfect phenotypic grouping and SI = −1 indicates completely mixed samples. Next, the samples were grouped based on the different cell types in the data ( n = 56), and we tested whether the deepAE-PCA or the PCA had the higher SI, for each of the respective hidden layers (see “Methods”). We filtered the compressed coordinates of normal cell types and found that significantly more cell types had an SI higher by at least 0.1 for the third hidden deepAE layer than for the PCA-based approach ( n = 38 out of 56, odds ratio = 2.11, binomial test P = 2.28 × 10 −3 ). Interestingly, smaller enrichments were also found for the first ( n = 30, OR = 1.15, binomial test P = 0.25) and second ( n = 31, OR = 1.24, binomial test P = 0.18) layers. Next, we repeated this analysis for the 128 diseases in our data set, and we found again that the third layer ( n = 86, OR = 2.05, binomial test P = 6.27 × 10 −5 ) exhibited the strongest association, compared with the first layer ( n = 71, OR = 1.25, binomial test P = 1.25 × 10 −1 ) and the second layer ( n = 81, OR = 1.72, binomial test P = 1.69 × 10 −3 ). These observations suggest that samples originating from similar conditions and phenotypes were automatically grouped according to the hidden layers, most significantly for the third. Fig. 4: Deep autoencoder (deepAE) representation clusters samples into cell types and diseases. a Significance score (−log10(p)) showing that the first (green), second (blue) and third (violet) deepAE layers are more coherent (measured by a high Silhouette index (SI)) with respect to cell types (lower) and diseases (upper) than the standard principal component (PC) analysis-based approach. b SI defined by the two PCs for disease and control samples on compressed signals at the third hidden layer of the deepAE, with each of the three hidden layers having 512 nodes. Full size image First layer associated genes colocalized in the interactome In order to further interpret the different layers and uncover their role in defining disease modules, we proceeded to analyze the relationship between the signature genes of each hidden node. Since cellular function is classically described in terms of biological pathways, or has lately also been abstracted to densely interconnected subregions in the interactome (so-called network communities), we analyzed the parameters in the deepAE and their connection to the global pattern of expressed genes 30 , 31 . There are different ways one could potentially interpret the parameters in a deepAE. To this end, we created a procedure to associate genes with hidden nodes, which we refer to as light-up. Briefly, a light-up input vector was defined for each hidden node by activating that node to the maximum value of the activation function while clamping all other nodes in the same layer to zero. Then we forward propagated this input vector through all subsequent layers to the output pattern response in the gene expression space (“Methods”).
This resulted in a ranked list of genes for each hidden node, identifying which genes were most influenced by the activation of that node. We repeated this procedure for all hidden nodes and layers. In order to test whether these lists corresponded to functional units, we analyzed their localization within the PPI network STRING 24 . We hypothesized that genes co-influenced by a hidden node could represent protein patterns involved in the same function. Moreover, the STRING database captures proteins that are associated with the same biological function and that are known to lie within the same neighborhood of the physical interactome. After ranking the most influenced genes, we systematically varied the cutoff determining whether a gene was considered associated with a node, in powers of two from 100 to 10,000. Next, we calculated the average shortest path distance between these genes within the STRING network, using the harmonic mean distance in order to also include disconnected genes. This analysis revealed that the top-ranked genes of the first hidden layer had a high betweenness centrality (Fig. 5a ) while exhibiting a low average graph distance to each other (Fig. 5b–d ). Thus, the highly co-localized genes formed the most central part of the PPI network. Both findings were tested using several different cut-offs (Fig. 5a–f ), and the effect was most evident for the first layer, appearing to a weaker extent in the second layer and fully vanishing at the third layer. In order to assess the robustness of these results, we investigated the effects of the choice of database and of different AE variants. Specifically, we computed the similarity to our results when using other annotation databases (BioSNAP, KEGG, REACTOME and GO; Supplementary Figs. 4 and 5 ) and compared our approach against deepAEs constructed with denoising, sparsity constraints and funneled architectures, with similar results (Supplementary Figs. 6 – 9 ). In all cases we found similar associations across the layers. This analysis suggests that our interpretable gradients in the different layers are robust across these variations. Fig. 5: Genes that co-localised in the first and second hidden layers also co-localised in the interactome. a The betweenness centrality behavior of the top-ranked genes on the basis of the first (green), second (blue) and third (violet) hidden layers of the deep autoencoder. b – d The distribution of harmonic average distances of the top-ranked genes based on each hidden node of the first, second and third hidden layers of the deep autoencoder, respectively. e , f These results are robust across 256 and 1024 hidden nodes of the deep autoencoder. Full size image Replications of interpretable gradients using RNA-seq In order to assess the generality, and to increase the domain of applicability, of the AE approach to interpret emerging large RNA-seq data sets, we identified a large publicly available body of RNA-seq material 32 . These data were divided into 50,000 training samples and 9532 validation samples for 18,999 genes and were used to train a deepAE with similar hyperparameters as for the microarrays, i.e., a three-layered AE with 512 hidden nodes in each layer. Unfortunately, these data did not contain sufficiently many complex disease samples, and we therefore searched for additional RNA-seq data sets for our previously tested complex diseases, namely asthma (GSE75011), Crohn’s disease, ulcerative colitis (GSE112057), obesity (GSE65540) and multiple sclerosis 33 .
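Before turning to the RNA-seq results, we note that the two interactome localization measures used above, betweenness centrality and the harmonic-mean shortest-path distance (which gracefully handles disconnected gene pairs), can be sketched with networkx as follows; G stands for a STRING-derived graph and genes for one light-up gene list, both placeholders.

```python
import itertools
import networkx as nx

def harmonic_mean_distance(G, genes):
    """Harmonic mean of pairwise shortest-path distances within `genes`.

    Disconnected pairs contribute a reciprocal distance of zero, so they
    lengthen the harmonic mean instead of breaking the calculation.
    """
    recips = []
    for a, b in itertools.combinations(genes, 2):
        try:
            recips.append(1.0 / nx.shortest_path_length(G, a, b))
        except nx.NetworkXNoPath:
            recips.append(0.0)
    return len(recips) / sum(recips) if sum(recips) > 0 else float("inf")

# Betweenness centrality of the same gene set:
# bc = nx.betweenness_centrality(G)
# centrality = [bc[g] for g in genes]
```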
Similar to the microarray AE, we found a highly consistent and significant overlap between GWAS genes and the associated disease genes derived from the third layer for each of the diseases (Fisher’s combined P < 10 −15 ), and to a lesser extent in the other two layers (Fig. 6 ). Next, we tested whether the hidden nodes corresponded to closely interconnected sets of protein–protein interactions by repeating the light-up procedure. Interestingly, we found that the top-ranked genes in the first, and to a lesser extent also in the second, hidden layer had a high average betweenness centrality and a low average distance. Strikingly, this association was even stronger than in the analysis using the AE of the microarrays. In order to understand the reason behind the increased interpretability of the PPI association, we trained the deepAE on 20 K samples. We found similar association levels as for the 50 K samples (Supplementary Fig. 8 ). Hence, we conclude that the discrepancies between the microarray- and RNA-seq-based AEs are not due to the training sample sizes. In summary, the replication of the relationships between disease genes and protein interactions confirms our finding that deepAEs act as unbiased estimators of functional disease associations (Fig. 7 ). Fig. 6: Generalization of the disease association enrichment results for deep autoencoder (deepAE)-derived gene sets using RNA-seq data. a Enrichment score (−log10(P)) resulting from the hyper-geometric test of the disease gene overlap of the genes predicted by the deep neural network derived from the first (green), second (blue) and third (violet) hidden layers of the deepAE. b Fisher’s combined p value across all five complex diseases predicted by the three-layer deep autoencoder. The dotted (brown) line corresponds to the p value cut-off of 0.05. Full size image Fig. 7: RNA-seq replicated the gene co-localisation pattern from microarray data. a Betweenness centrality behavior of the top-ranked genes on the basis of the first (green), second (blue) and third (violet) hidden layers of the deep autoencoder trained on the RNA-seq data. b – d Distribution of harmonic average distances of the top-ranked genes based on each hidden node of the first, second and third hidden layers of the deep autoencoder, respectively. Full size image Discussion In summary, our study aimed at using deep neural networks to identify a new, unbiased, data-driven functional representation that can explain complex diseases without relying on the PPI network, which is known to be incomplete 7 and strongly affected by the study bias of some early discovered cancer genes. We showed, to the best of our knowledge for the first time, that a deep learning analysis does find disease-relevant signals and that the different layers capture gradients of biology. This suggests that a data-driven learning approach could eventually complement the findings and techniques derived from network medicine for understanding complex diseases. In order to relate structural features of the PPI to the estimated parameters of neural networks, we began a systematic demonstration of the light-up concept 23 , motivated by the need to prioritize genes based on their contributions in the compressed space of the deepAE. Furthermore, we showed that the top genes prioritized by each node in the first and middle layers are localized and belong to the core part of the PPI.
Moreover, the third-layer nodes showed wide variability, with their top genes ranging from localized to delocalized in the PPI network compared with random genes. This kind of interpretability gradient, with respect to localization within the PPI network, suggests that each layer indeed encodes a different type of biological information. These results also suggest that the transformed signals in the compressed space first decode the modular features of the underlying interactome, which then vanish smoothly layer by layer as a deeper representation is encoded. Concurrently with such a decreasing protein-defined modular gradient, an increase in disease-relevant genes, and modules thereof, is progressively discovered in the deepest layers of the AE. Next, we presented a novel method that uses a supervised neural network to determine a disease-specific feature vector in the compressed space of the deepAE. The disease-specific feature vector of the compressed space was then transformed into gene space, which defined the disease module. We compared different AEs and found that deepAEs compressed ~20,000 genes into 512 state variables at ~95% R 2 for microarrays and 80–85% for RNA-seq. Interestingly, AE depth reduced the number of learnable parameters, and the number of required hidden units, approximately twofold relative to shallow networks, similar to what was reported for a microbial system 17 , and also represented a twofold compression in the number of latent variables compared with the number of variables in the LINCS project. The high degree of compression for the deepAE, with fewer state variables, suggests that this representation is indeed preferable to shallow representations. One reason for the need for depth is that such AEs are theoretically capable of capturing more complex relations between genes, such as XOR relations 26 , 34 , 35 , which shallow AEs cannot. Importantly, our biological and disease interpretations of the layers were robust with respect to using microarray or RNA-seq data, to different databases for interpretation, as well as to different versions of AEs. Our findings suggest the usefulness of deep learning analysis for decomposing the different hierarchy levels hidden within the relations between genes. For example, the first layer encodes the modular features belonging to the central part of the interactome. Analogous features are selected by interactome-based approaches to find the components that have control over the entire system 36 . In contrast, these features do not necessarily transfer to cell type-specific transcriptomic signals. Next, the third layer is close to the middle as well as the output layer; hence, it is well placed to capture the true cell-type-specific as well as disease-specific signals that are encoded in terms of the interactome. More interestingly, we showed that the third layer efficiently encodes cell type-specific functional features; this might increase the likelihood of mapping disease-specific functional genes through disease-related cell-type signals in the light-up. Also, the presented approach could play a crucial role in utilizing the resolution of single-cell transcriptomic signals for prioritizing genes that are enriched for upstream dysregulated genes and for relating them to causal genetic variants 37 . Another important application of our approach is to provide new insights into the multiscale organization of disease–disease, disease–pathway and disease–gene associations 38 .
An alternative approach to the AE would be to use NNs for the particular disease of interest and thereby find the best representation of that disease. However, although potentially feasible for some diseases, such an approach would in our opinion not make the best use of the existing compendia for forming latent variables, suggesting that such a representation could lack generality. Instead, using transfer learning, our AEs could help stratify disease groups with limited samples, as the number of input variables decreases by about 40-fold (from ~20,000 to 512), which decreases the analysis complexity. Transfer learning could therefore be applied by other clinically interested researchers starting from our derived representation, which could lead to increased power for building classification systems. We think that the approach is applicable to other omics, and that using our derived single-omics representations together with others opens the door to multi-omics neural networks built by transfer learning, similar to what is currently routinely done within the field of image recognition. Methods Data preparation and normalization The microarray data available from ref. 25 represent normalized, log-transformed values. Similarly, we normalized the RNA-seq data by the upper quantile method, using the function uqua of the R package NOISeq, and log-transformed the normalized gene expression values by log 2 (1 + normalized expression value) 39 . We also discarded noisy log-transformed expression values below 3.0. Next, we renormalized both the microarray and RNA-seq data such that the mRNA expression level E i,j of the i th gene in the j th sample lies, across samples, in the unit interval, i.e., \(e_{i,j} = \frac{{E_{i,j} - {\rm{min}}\left( E \right)}}{{{\rm{max}}\left( E \right) - {\rm{min}}\left( E \right)}}\) . Parameter optimization The normalized expression matrix [ e i,j ] is used both as input and output for training the AE with sigmoid activation functions. We chose dense layers so that the optimizer starts from an initial point with unbiased dependencies among the data features. We trained the model using the ADAM optimizer, with learning rate = 1.0 × 10 −4 , β 1 = 9.0 × 10 −1 , β 2 = 9.99 × 10 −1 , ε = 1.0 × 10 −8 and decay = 1.0 × 10 −6 , which we observed to be an optimal choice for achieving high accuracy on both the training and validation data sets 40 . The batch size was 256 for the training. In order to systematically investigate the impact of the number of hidden nodes on the prediction accuracy, we fixed the number of hidden nodes to be equal in all three hidden layers of the deepAE, termed the three-layer model (Fig. 1a ). In our case, the three-layer model with 512 hidden nodes was most suitable for capturing the biological features. This model has a lower reconstruction error than the one-layer model (shallowAE) with a similar number of hidden nodes. For the denoising deepAE, we corrupted the input transcriptomic signals by adding Gaussian-distributed numbers with mean zero and standard deviation 0.5, and then replaced values less than zero by zero and values greater than 1 by 1. In order to sparsify the deepAE, we used a weight of 1.0 × 10 −8 in the L1 constraint of the kernel regularizer in Keras. We implemented our methods using the TensorFlow backend and the Keras neural network Python library. Interpreting the trained AE with PPI The biology preserved in the compressed space is confined within each hidden layer.
Therefore, our objective was to understand the meaning of all the nodes in each hidden layer. For this objective, we computed the activations at the output layer for each node of a hidden layer. We recursively forward-propagated the maximum activation value of each node, while keeping the other nodes neutral with zero input, through the remaining portion of the network. Finally, we prioritized the genes on the basis of the last-layer activations. For simplicity, we mathematically formulated these steps as follows (Fig. 1b ). Suppose the k th layer of an L -layer AE has N k nodes. Here, N 1 and N L are the same as the number of genes in the expression profile matrix. Also, the number of nodes in each hidden layer is H , i.e., N k = H for k ∊ {2, 3, 4, …, L − 1}. The following equation recursively defines the activations, x k , of the k th layer from the activations, \(x^{k - 1}\) , at the ( k − 1)th layer, with the initial activation vector x p (which consists of the maximum activation value at the position of the chosen hidden node, with all other elements zero) corresponding to the node in the p th hidden layer, $$x^k = \left\{ \begin{array}{ll} f^k\left( W^{k}x^{k - 1} + b^{k} \right) & {\mathrm{if}}\,p < k \le L \\ \qquad x^{p} &{\mathrm{if}}\,p = k \end{array} \right.,$$ (1) where f k , b k and W k are the activation function, bias term and weight matrix associated with the k th layer, respectively. Note that the first (input) layer does not have an activation function, bias term or weight matrix, so k ∊ {2, 3, 4, …, L }. Equation ( 1 ) defines the activations at the output layer, x L , whose dimension equals the number of genes. We prioritized the genes based on the vector x L to show the associations with the PPI module. Predicting disease genes We derived a new approach for predicting disease genes that is explained in the following four steps (Fig. 1c ), which were performed three times in order to estimate mean values and standard deviations: (1) Compressing the expression profile at the hidden layers using the trained deepAE. (2) Training a supervised neural network on the compressed representations in the reverse direction: we trained a one-hidden-layer supervised neural network, having the same number of nodes in the second and third layers, with sigmoid and linear activation functions, respectively. The input matrix [ c i,j ], with i ∊ {1, 2, 3, …, P } and j ∊ {1, 2, 3, …, S }, has dimension P × S , where P and S are the total numbers of phenotypes and samples, respectively. The matrix [ c i,j ] is defined via the Kronecker delta [ δ i,p ] as follows: \(c_{i,j} = \delta _{ip}\) if the j th sample is associated with the p th phenotype. The output matrix [ s k,j ] is a profile matrix of compressed signals at a hidden layer, of dimension H × S , where H is the number of nodes in the hidden layer. (3) Stacking the supervised neural network with the remaining part of the deepAE, in the feed-forward direction, from the layer at which the supervised neural network was trained. We scaled the mean and the variance of the weight matrices and biases in the consecutive layers where the two networks are stacked.
(4) Finding the disease scores from the expression: the absolute values of the scores s p , used for prioritizing the genes related to the p th phenotype, are computed from the parameters of the stacked neural network using: $$ x^k = \left\{ \begin{array}{ll} f^k\left( {W^kx^{k - 1} + b^k} \right) & {\mathrm{if}}\,1 < k \le L - 1 \\ \qquad r^p & {\mathrm{if}}\,k = 1 \end{array} \right.,$$ $$ s^p = W^Lx^{L - 1}$$ where \(r^p = \left[ {\delta _{ip}} \right]\) is a one-hot column vector for the p th phenotype, with i ∊ {1, 2, 3, …, P }. We compared our approach with naive training, i.e., training a neural network with 512 hidden nodes using the gene expression profiles as input instead of the compressed representations, and performing the disease association as above. Validation of predicted genes We downloaded the curated disease SNPs from the DisGeNET database and the human Genome Reference Consortium assembly, build version 37 (GRCh37, hg19), from the UCSC database. We computed the closest gene to each disease-associated SNP using Bedtools with the default options. In this way, we defined disease-associated gene sets for validating the neural network-based predicted genes. The performance of the predicted genes was assessed in terms of the Fisher p value from the hyper-geometric test. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The trained models and the normalized gene expression data used for defining the disease modules are available online. The microarray and RNA-seq transcriptomics data are taken from the ArrayExpress database (accession number E-MTAB-3732) and from the publicly available RNA-seq compendium 32 , respectively. Code availability The code and a tutorial for using it are available on the GitLab page. | An artificial neural network can reveal patterns in huge amounts of gene expression data and discover groups of disease-related genes. This has been shown by a new study led by researchers at Linköping University, published in Nature Communications. The scientists hope that the method can eventually be applied within precision medicine and individualized treatment. It's common when using social media that the platform suggests people whom you may want to add as friends. The suggestion is based on you and the other person having common contacts, which indicates that you may know each other. In a similar manner, scientists are creating maps of biological networks based on how different proteins or genes interact with each other. The researchers behind a new study have used artificial intelligence, AI, to investigate whether it is possible to discover biological networks using deep learning, in which entities known as "artificial neural networks" are trained by experimental data. Since artificial neural networks are excellent at learning how to find patterns in enormous amounts of complex data, they are used in applications such as image recognition. However, this machine learning method has until now seldom been used in biological research. "We have for the first time used deep learning to find disease-related genes. This is a very powerful method in the analysis of huge amounts of biological information, or Big Data," says Sanjiv Dwivedi, postdoc in the Department of Physics, Chemistry and Biology (IFM) at Linköping University. The scientists used a large database with information about the expression patterns of 20,000 genes in a large number of people.
The information was "unsorted," in the sense that the researchers did not give the artificial neural network information about which gene expression patterns were from people with diseases, and which were from healthy people. The AI model was then trained to find patterns of gene expression. One of the challenges of machine learning is that it is not possible to see exactly how an artificial neural network solves a task. AI is sometimes described as a "black box"—we see only the information that we put into the box and the result that it produces. We cannot see the steps between. Artificial neural networks consist of several layers in which information is mathematically processed. The network comprises an input layer and an output layer that delivers the result of the information processing carried out by the system. Between these two layers are several hidden layers in which calculations are carried out. When the scientists had trained the artificial neural network, they wondered whether it was possible to lift the lid of the black box, in a manner of speaking, and understand how it works. Are the designs of the neural network and the familiar biological networks similar? "When we analysed our neural network, it turned out that the first hidden layer represented to a large extent interactions between various proteins. Deeper in the model, in contrast, on the third level, we found groups of different cell types. It's extremely interesting that this type of biologically relevant grouping is automatically produced, given that our network has started from unclassified gene expression data," says Mika Gustafsson, senior lecturer at IFM and leader of the study. The scientists then investigated whether their model of gene expression could be used to determine which gene expression patterns are associated with disease and which are normal. They confirmed that the model finds relevant patterns that agree well with biological mechanisms in the body. Since the model was trained using unclassified data, it is possible that the artificial neural network has found totally new patterns. The researchers now plan to investigate whether such previously unknown patterns are relevant from a biological perspective. "We believe that the key to progress in the field is to understand the neural network. This can teach us new things about biological contexts, such as diseases in which many factors interact. And we believe that our method gives models that are easier to generalize and that can be used for many different types of biological information," says Mika Gustafsson. Mika Gustafsson hopes that close collaboration with medical researchers will enable him to apply the method developed in the study in precision medicine. It may be possible, for example, to determine which groups of patients should receive a certain type of medicine, or identify the patients who are most severely affected. | 10.1038/s41467-020-14666-6
Chemistry | Comprehensive analysis of single plant cells provides new insights into natural product biosynthesis | Lorenzo Caputi, Single-cell multi-omics in the medicinal plant Catharanthus roseus, Nature Chemical Biology (2023). DOI: 10.1038/s41589-023-01327-0. www.nature.com/articles/s41589-023-01327-0 Journal information: Nature Chemical Biology | https://dx.doi.org/10.1038/s41589-023-01327-0 | https://phys.org/news/2023-05-comprehensive-analysis-cells-insights-natural.html | Abstract Advances in omics technologies now permit the generation of highly contiguous genome assemblies, detection of transcripts and metabolites at the level of single cells and high-resolution determination of gene regulatory features. Here, using a complementary, multi-omics approach, we interrogated the monoterpene indole alkaloid (MIA) biosynthetic pathway in Catharanthus roseus , a source of leading anticancer drugs. We identified clusters of genes involved in MIA biosynthesis on the eight C. roseus chromosomes and extensive gene duplication of MIA pathway genes. Clustering was not limited to the linear genome, and through chromatin interaction data, MIA pathway genes were present within the same topologically associated domain, permitting the identification of a secologanin transporter. Single-cell RNA-sequencing revealed sequential cell-type-specific partitioning of the leaf MIA biosynthetic pathway that, when coupled with a single-cell metabolomics approach, permitted the identification of a reductase that yields the bis-indole alkaloid anhydrovinblastine. We also revealed cell-type-specific expression in the root MIA pathway. Main Gene discovery for metabolic pathways in plants has relied on whole-tissue-derived datasets 1 . Discovery entails correlating the expression of genes with the presence of the molecule of interest. Occasionally, high-quality genome assemblies further facilitate pathway gene discovery by allowing the identification of biosynthetic gene clusters, but such clusters occur in limited numbers of plant pathways. Overall, identifying all genes in a complex metabolic pathway typically requires functional screening of large numbers of candidate genes; these mining approaches are further limited when genes are not coregulated. In plants, biosynthetic pathways of these complex specialized metabolites (natural products) are localized not only to distinct organs but also to distinct cell types within organs. The advent of single-cell omics has enormous potential to revolutionize metabolic pathway gene discovery in plants 2 , 3 , 4 . Furthermore, single-cell omics reveal how metabolic pathways are partitioned across cell types. The medicinal plant species Catharanthus roseus (L.) G. Don produces monoterpene indole alkaloids (MIAs), a natural product family with a wide variety of chemical scaffolds and biological activities 5 . These include the dimeric MIAs that demonstrate anticancer activity (vinblastine and vincristine) or are used as a precursor (anhydrovinblastine) for alkaloids with anticancer activity (for example, vinorelbine; Fig. 1 and Supplementary Fig. 1 ). MIA biosynthesis in C. roseus shows distinct metabolite profiles across organs, with leaves producing vinblastine and vincristine and roots producing hörhammercine (Fig. 1 and Supplementary Fig 2 ). 
Over the last 30 years, 38 dedicated MIA pathway genes and several transcription factors involved in the jasmonate-induction of the MIA biosynthetic pathway have been discovered using traditional biochemical and coexpression analysis of whole-tissue-derived omics datasets. Not only does C. roseus have enormous economic importance as a producer of anticancer drugs, but it has also emerged as a model species to probe the mechanistic basis of localization, transport and regulation of specialized metabolic pathways. Fig. 1: Abbreviated biosynthetic pathway of monoterpene indole alkaloids in Catharanthus roseus . For a more extended biosynthetic scheme, see Supplementary Fig. 1 (leaves) and Supplementary Fig. 2 (roots). Full size image Here we show how a state-of-the-art genome assembly, Hi-C chromosome conformation capture and single-cell transcriptomics datasets empowered discoveries in the C. roseus MIA biosynthetic pathway. We show that a 38-step MIA pathway is sequentially expressed in three distinct cell types in leaves, and also show differences in cell-type-specific gene expression of the MIA biosynthetic pathway in C. roseus roots. We used long-range chromosome interaction maps to reveal the three-dimensional (3D) organization of MIA biosynthetic gene clusters that contribute to coordinated gene expression. To complement our genomic and transcriptomic datasets, we developed a high-throughput, high-resolution and semi-quantitative single-cell metabolomics (scMet) profiling method for C. roseus leaf cells. Finally, using these omics data, we identified an intracellular transporter and the missing reductase that generates anhydrovinblastine. Results A chromosome-scale genome assembly for C. roseus Because some specialized metabolic pathways have been demonstrated to be physically clustered in plant genomes, the availability of a scaffolded genome is essential to accelerate gene discovery 6 . Earlier versions of the C. roseus genome assembly were fragmented 7 . Here using Oxford Nanopore Technologies (ONT) long reads, we generated a draft assembly for C. roseus ‘Sunstorm Apricot’ and performed error correction using ONT and Illumina whole-genome shotgun reads yielding a 575.2 Mb assembly composed of 1,411 contigs with an N50 contig length of 11.3 Mb. Proximity-by-ligation Hi-C sequences were used to scaffold the contigs, resulting in eight pseudochromosomes (Fig. 2a ), consistent with the chromosome number of C. roseus . To fill gaps within the pseudochromosomes, we capitalized on the ability to redirect in real-time the sequencing of each nanopore using adaptive sampling 8 by targeting the physical ends of each contig for sequencing (Supplementary Fig. 3 ; adaptive finishing). We observed 5.5- to 14-fold enrichment of sequence coverage, depending on the length of physical ends that we targeted (Supplementary Fig. 3a ). Using the adaptive finishing reads along with the bulk ONT genomic reads, we closed 14 gaps ranging in size from 8-bp to 20.2-kbp (Supplementary Fig. 3b and Supplementary Table 1 ). The final (v3) C. roseus genome assembly is 572.1 Mb, of which 556.4 Mb is anchored to the eight chromosomes, with an N50 scaffold size of 71.2 Mb (Fig. 2b ) representing a substantial improvement in contiguity (27.6-fold increase in N50 scaffold length) and 31 Mb genome size increase over v2 of the C. roseus genome 7 . Assessment of the v3 C. 
roseus genome using Benchmarking Universal Single-Copy Ortholog (BUSCO) analysis revealed 98.5% BUSCOs indicating a high-quality genome assembly (Supplementary Table 2 ). Fig. 2: Chromosome-scale assembly and annotation of Catharanthus roseus . a , Contact map of Hi-C reads revealing the eight chromosomes of C. roseus . Blue boxes represent pseudomolecules, green boxes represent contigs of the primary assembly and red color indicates Hi-C contacts. b , Assembly metrics of the v3 versus v2 C. roseus genome assembly. For the number of gaps and scaffolds, the v3 numbers represent the pseudochromosomes and the complete assembly (pseudochromosomes plus unanchored scaffolds). c , Gene and repetitive sequence density along the eight C. roseus chromosomes. Y axis values and color scales represent the number of representative gene models (first row) or repetitive sequences (>1-kb, second row) in 1-Mb resolution. Source data Full size image To annotate the genome, we performed de novo repeat identification, revealing 70.25% of the genome was repetitive with retroelements being the largest class of transposable elements (Supplementary Table 3 ). Genome-guided transcript assemblies from paired-end mRNA-sequencing (mRNA-seq) reads from diverse tissues (leaf, root, flower, shoots, methyl jasmonate treatment; Supplementary Table 4 ) were used to train an ab initio gene finder and generate primary gene models. We generated 62 million ONT full-length cDNA (FL-cDNA) reads (Supplementary Table 4 ) and used these data, along with the mRNA-seq data, to refine our gene model structures. The final annotated gene set encompasses 26,347 genes encoding 66,262 gene models (Supplementary Table 5 ) with an average of three alternative splice forms per locus, attributable to the deep transcript data used in the annotation. BUSCO analysis of the annotated gene models revealed 96.1% complete BUSCOs (Supplementary Table 2 ), suggestive of high-quality annotation that was confirmed by manual inspection and curation of known MIA pathway genes (Supplementary Table 6 ). The highly contiguous v3 assembly revealed clusters of MIA pathway genes (Supplementary Tables 6 and 7 ). Two clusters were identified, a paralog array containing tetrahydroalstonine synthase 1 ( THAS1) , THAS3 , THAS4 and the THAS homolog, heteroyohimbine synthase, as well as a cluster containing serpentine synthase ( SS ) 9 , an SS paralog with near identical protein sequence, strictosidine glucosidase ( SGD ) and SGD2 . Gene duplications are major drivers of chemical diversity 10 , and a total of 207 paralogs of 69 genes previously implicated in MIA biosynthesis were identified in the v3 genome, of which, a substantial number were locally duplicated (Supplementary Table 7 ). Interestingly, paralogs of MIA biosynthetic genes were identified within the SS–SGD–SGD2 cluster, the tabersonine 16-hydroxylase ( T16H2 ) - 16-hydroxytabersonine O-methyltransferase ( 16OMT ) cluster and the strictosidine synthase (STR ) - tryptophan decarboxylase ( TDC ) gene clusters. Notably, a multidrug and toxic compound efflux transporter ( MATE ) was also located in the STR–TDC cluster (see also Fig. 3a,b ). In summary, the high-quality C. roseus v3 genome assembly and annotation provide the foundation for accelerated discovery of the final MIA biosynthetic pathway genes and the mechanisms underlying the complex organ and cell-type-specific gene regulation of this pathway. Fig. 3: Biosynthetic gene clusters and associated 3D chromosome features. 
a , Hi-C contact map generated from mature leaves for a gene cluster consisting of STR , TDC and SLTr (Supplementary Table 6 ). b , Chemical scheme showing tryptamine, secologanin, strictosidine and secologanol, and VIGS results for SLTr . Bar heights represent means; error bars represent s.e.m. Each dot represents a sample. P values are based on two-tail Tukey tests and are shown on the graphs. EV, empty vector control. EV, n = 8. VIGS, n = 6. c , Hi-C contact map for a gene cluster consisting of GS1 , GS2 , THAS2 and PAS . The curve represents the chromosome loop. d , Hi-C contact map for a gene cluster consisting of an array of acetyltransferases. 1, 2 and 3 represent three TADs. Three previously studied biosynthetic genes ( MAT , TAT and DAT ) are indicated by asterisks. e , Gene expression profiles for the acetyltransferases highlighted in d . FPKM: fragments per kilobase exon mapped per million reads. Heatmap represents Hi-C contacts at 10-kb resolution, where a darker color represents more Hi-C contacts ( a , c , d ). Color scales are maxed out at 100 Hi-C contacts per 10 kb. Solid lines and P values represent TAD boundaries detected by HiCKey 12 . P values are derived from generalized likelihood ratio tests, part of the HiCKey workflow, and are shown on the graphs. See Supplementary Fig. 1 and Supplementary Table 6 for abbreviation definitions. Source data Full size image Gene clusters and chromosome conformation features Unlike biosynthetic gene clusters found in prokaryotic genomes, biosynthetic gene clusters in plant genomes have a loose organization, with unrelated genes or long intergenic spaces separating the biosynthetic genes. Nevertheless, these gene clusters both facilitate gene identification and are believed to have a role in transcriptional regulation 11 . With the recent advent of mapping chromosome conformation features, we now have the ability to probe the location of genes in 3D space. Thus, in addition to searching for biosynthetic gene clusters linearly organized on the chromosome, we can also search for biosynthetic genes that are confined within the same 3D space. Using the v3 C. roseus genome assembly, we probed chromatin interactions between biosynthetic genes in 3D space using Hi-C data from mature leaves. HiCKey 12 was used to detect topologically associated domains (TADs) and HiCCUPS to detect chromosome loops 13 , revealing distinct chromosomal organizations associated with biosynthetic gene clusters. For example, STR and TDC are physically clustered on chromosome 3 (Fig. 3a ). STR and TDC catalyze consecutive steps along the pathway, where TDC catalyzes the formation of tryptamine from tryptophan and STR catalyzes the condensation between secologanin and tryptamine to form strictosidine. The MATE transporter that is located adjacent to TDC in the linear genome was also colocated within a TAD with STR and TDC , as indicated by a high level of long-distance contacts detected by Hi-C (Fig. 3a ).
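Linear clusters such as STR–TDC can be flagged computationally by scanning an annotation for pathway genes that lie close together on a chromosome. The sketch below is illustrative only (it is not the authors' pipeline); the (chromosome, start, name) gene table, the pathway list and the 100-kb window are hypothetical.

```python
from itertools import groupby

def find_clusters(genes, pathway_genes, window=100_000):
    """Group pathway genes on the same chromosome whose consecutive start
    coordinates lie within `window` bp; return groups with >= 2 members."""
    hits = sorted(g for g in genes if g[2] in pathway_genes)  # (chrom, start, name)
    clusters = []
    for chrom, group in groupby(hits, key=lambda g: g[0]):    # scan per chromosome
        current = []
        for g in group:
            if current and g[1] - current[-1][1] > window:
                if len(current) > 1:
                    clusters.append(current)
                current = []
            current.append(g)
        if len(current) > 1:
            clusters.append(current)
    return clusters

# Example with made-up coordinates:
# genes = [("chr3", 1_050_000, "STR"), ("chr3", 1_120_000, "TDC"),
#          ("chr3", 1_160_000, "SLTr"), ("chr5", 9_000_000, "OTHER")]
# find_clusters(genes, {"STR", "TDC", "SLTr"})
```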
To test whether this transporter is involved in MIA transport, we performed virus-induced gene silencing (VIGS) of this gene in young C. roseus leaves (Supplementary Fig. 4a ). Although the levels of secologanin and downstream alkaloids did not change substantially in response to silencing, we detected a build-up of a compound with m/z 391 (M + H) + and m/z 413 (M + Na) + in the MATE -silenced tissue but not in empty vector controls (Fig. 3b ; P = 1.3 × 10 −9 , Tukey’s tests). This compound was assigned as secologanol based on co-elution with a standard obtained by chemical reduction of secologanin (Supplementary Fig. 4b ). The most likely explanation of the VIGS chemotype is that this MATE transporter transports secologanin from the cytosol into the vacuole, where STR is localized. The lack of secologanin transport would result in a build-up of secologanin in the cytosol, where the reactive aldehyde would be reduced to the less toxic secologanol. Thus, we named this MATE transporter SLTr . Unfortunately, all our attempts to heterologously express SLTr for in vitro transport assays were unsuccessful. This gene cluster has been observed in other MIA producers, Gelsemium sempervirens and Rhazya stricta , suggesting that it is conserved across strictosidine-producing plants 7 . We also observed that biosynthetic genes interact in 3D space via chromosome loops, as shown with THAS2 and precondylocarpine acetate synthase ( PAS ) 14 , 15 , which are separated by ~50 kb in linear distance (Fig. 3c ). Not all genes in a biosynthetic gene cluster are in the same TAD. For example, an array of locally duplicated acetyltransferases was found on chromosome 2, including three that were previously characterized (minovincinine-19-hydroxy-O-acetyltransferase ( MAT ), tabersonine derivative 19-O-acetyltransferase ( TAT ) and deacetylvindoline O-acetyltransferase ( DAT )). This array of acetyltransferases is separated into three TADs, with MAT and TAT within TAD 2 and DAT within TAD 3 (Fig. 3d ) 16 , 17 . This segregation of acetyltransferases within TADs coincides with organ-level expression patterns; MAT and TAT are expressed in roots but not in leaves, and DAT is expressed in leaves but not in roots (Fig. 3e ). These observations suggest that chromosome conformation may have regulatory roles in controlling organ-specific biosynthetic gene expression and, consequently, the localization pattern of specialized metabolite production. Gene expression at cell type resolution in leaves In situ hybridization experiments have established the expression specificity of a subset of the 38 known biosynthetic genes involved in bis-indole alkaloid biosynthesis, where the initial steps are located in internal phloem-associated parenchyma (IPAP) cells, downstream enzymes in the epidermis and late enzymes in idioblast cells 18 , 19 , 20 , 21 , 22 . We performed single-cell RNA-sequencing (scRNA-seq) on ~13- to 14-week-old C. roseus leaves (Supplementary Fig. 5 ) and obtained gene expression profiles of 15,437 cells and 19,337 genes (Fig. 4a ). We integrated three independent biological replicates using Seurat 23 (Supplementary Fig. 6 ), with similar clustering patterns across the three replicates. Cell types were assigned using Arabidopsis marker gene orthologs (Supplementary Table 8 , Fig. 4a and Supplementary Fig. 6 ). Two cell types, IPAP and idioblasts, were inferred using previously studied C. roseus biosynthetic genes that show cell-type-specific expression 18 , 19 . In an independent experiment, we profiled gene expression at the single-cell level across 1,379 cells using the Drop-seq platform 24 (Supplementary Fig. 7a ). Although fewer cell types were detected using the Drop-seq platform, expression profiles across the top 3,000 variable genes were highly concordant between the cell types detected by the two platforms (Supplementary Fig. 7b ). Taken together, we inferred that the single-cell expression profiles are robust and reproducible across the two experimental platforms.
Fig. 4: MIA biosynthetic genes are partitioned into three discrete cell types in the C. roseus leaf. a , UMAP of gene expression in C. roseus leaves ( n = 15,437 cells). b , Gene expression heatmap of the MIA biosynthetic pathway for bulk and single-cell transcriptomes. Genes are arranged from upstream to downstream. Previously reported cell-type-specific expression patterns 18 , 19 , 20 , 21 , 22 are confirmed and marked with asterisks. For the single-cell gene expression heatmap, the color scale shows the average scaled expression of each gene in each cell type ( Methods ). Dot sizes indicate the fraction of cells in which a given gene is expressed in a given cell type. c , Gene coexpression network for MIA biosynthetic genes using leaf scRNA-seq data. Each node is a gene. Larger nodes represent previously characterized genes. Edges represent coexpression (FDR < 0.01; Methods ). See Supplementary Fig. 1 and Supplementary Table 6 for a list of gene name abbreviations. There are 38 biosynthetic genes and two transporters, of which SLTr and STRTr are the transporters. See Supplementary Table 9 for the membership of genes in each module. E, epidermis; I, idioblast; M, mesophyll; V, vasculature; Un, unassigned. Source data Full size image Coexpression analyses using whole-tissue or organ-derived gene expression datasets, that is, bulk mRNA-seq, have enabled the discovery of MIA biosynthetic pathway genes (Fig. 4b ). However, whole organ/tissue-derived expression abundances provide an expression estimate that is averaged over all cells in the organ. Thus, if a gene is expressed in a rare cell type, such as the idioblast, the power of coexpression analyses will be limited at best. Therefore, we examined scRNA-seq data for biosynthetic gene expression across cell types and found that the pathway is clearly expressed in three specific leaf cell types (Fig. 4b ). The improved resolution of cell-type-specific expression profiles compared with tissue-specific profiles is stark (Fig. 4b and Supplementary Fig. 7c ). The cell-type-specific expression patterns of 17 biosynthetic genes previously characterized by mRNA in situ hybridization 18 , 19 , 20 , 21 , 22 (Fig. 4b , marked with *) confirmed our cell type resolution of MIA biosynthesis. The MEP and iridoid stages of the pathway are expressed in IPAP, whereas the alkaloid segment of the pathway is primarily expressed in the epidermis, and the final known steps of the pathway are exclusively expressed in the idioblasts. Previous work suggested that a heterodimeric GPPS, composed of a large subunit (LSU) and a small subunit (SSU), is responsible for the geranyl pyrophosphate used in MIA biosynthesis 25 . Interestingly, although GPPS LSU is specifically expressed in IPAP, GPPS SSU is expressed in other cell types as well (Supplementary Fig. 7d ). Finally, we found that the secologanin transporter ( SLTr ), which is physically clustered with TDC and STR and colocated within the same TAD as these two biosynthetic pathway genes, is specifically expressed in the epidermis (Figs. 3a,b and 4b ), further supporting its involvement in transporting the MIA intermediate secologanin. We performed gene coexpression analyses, producing a network graph for the biosynthetic genes as well as previously reported transcription factors. The network self-organized into three main modules, corresponding to IPAP, epidermis and idioblast (Fig. 4c ).
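An edge-thresholded coexpression network of this kind can be sketched as follows. This is illustrative only, not the authors' exact pipeline; the expression matrix, the gene names and the use of Spearman correlation with Benjamini-Hochberg FDR control are all assumptions for the example.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

X = np.random.rand(500, 40)                      # stand-in cells x genes matrix
genes = [f"gene{i}" for i in range(X.shape[1])]  # hypothetical gene names

rho, p = spearmanr(X)                            # gene-by-gene correlations
iu = np.triu_indices_from(p, k=1)                # each gene pair once
reject, *_ = multipletests(p[iu], alpha=0.01, method="fdr_bh")

G = nx.Graph()
for (i, j), keep in zip(zip(*iu), reject):
    if keep and rho[i, j] > 0:                   # keep positive coexpression edges
        G.add_edge(genes[i], genes[j])

# Modules then emerge as densely connected components/communities:
# modules = sorted(nx.connected_components(G), key=len, reverse=True)
```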
We also note that SS 9 is a member of the idioblast coexpression module. SS catalyzes the formation of serpentine, which has a strong blue autofluorescence and has previously been used as a visual marker for idioblasts 9 . The recovery of SS in the idioblast module confirms the robustness of this coexpression analysis. Upon jasmonic acid (JA) elicitation, the MYC2 and ORCA3 transcription factors are activated, which in turn activate MIA biosynthetic genes 26 . However, MYC2 and ORCA3 were not part of any modules containing biosynthetic genes (Fig. 4c ), suggesting that the regulatory mechanisms in response to JA are distinct from those controlling cell-type-specific expression. Finally, a paralog of ORCA3 , ORCA4 (ref. 27 ), was detected within the epidermis module, suggestive of regulatory roles beyond JA-responsiveness. Gene identifiers of all genes within the coexpression modules shown in Fig. 4c can be found in Supplementary Table 9 . Taken together, the leaf scRNA-seq dataset is consistent with data obtained from previously established localization methods and provides accurate and high-resolution data for gene discovery. Moreover, the cell type resolution expression patterns produced coexpression networks that clarified and expanded regulatory relationships. High-throughput scMet Single-cell mass spectrometry (scMS) has lagged behind scRNA-seq due to intrinsic limitations related to the abundance of the analytes, which is exacerbated by the fact that metabolites cannot be amplified like RNA or DNA. Because the volume of single cells is low (fL to nL range), even when intracellular analytes are present at millimolar concentrations, mass spectrometry detection methods require extreme sensitivity. Although progress has been made in the development of scMS approaches 28 , few methods have been successfully applied to plant cells. To date, mass spectrometry analyses of individual plant cells have relied either on MS imaging, which is hindered by low spatial resolution, complex sample preparation protocols and low throughput, or on the live single-cell mass spectrometry (LSC-MS) method 29 , 30 , which is highly labor-intensive and not high throughput. None of these methods uses chromatographic separation before mass spectrometry analysis, greatly limiting accurate structural assignment and quantification of metabolites. To address these limitations, we designed a process in which a high-precision microfluidic cell-picking robot was used to collect protoplasts, prepared from C. roseus leaves, from a Sievewell device (Supplementary Fig. 8 ). Protoplasts were then transferred to 96-well plates compatible with an ultra-high-performance liquid chromatography–mass spectrometry (UPLC–MS) autosampler. The UPLC–MS method was optimized using available MIA standards (Supplementary Table 10 ). For this study, we collected a total of 672 single cells in seven 96-well plates that were each subjected to UPLC–MS, allowing simultaneous untargeted and targeted metabolomic analysis. As the analysis of all cells was performed over several days, we treated each 96-well plate as an independent experiment to control for batch-to-batch variation due to experimental and instrumental variables. After close inspection of the selected cells, 86 samples were removed as they contained either two cells, no cell or debris, reducing the total number of cells included in the analysis to 586. Representative examples of different cells collected in the experiment are shown in Fig. 5a . Fig. 5: Single-cell metabolomic analysis of leaves. a , Photos of some isolated protoplasts. Scale bar = 50 µm.
b, Principal component plots colored by Z scores of secologanin and serpentine (n = 586 cells). c, UPLC–MS traces of selected metabolites for idioblast and epidermal cells. BPCs of representative epidermal and idioblast cells. BPC, base peak chromatogram. d, Concentration estimates of selected compounds in the single cells in which they were detected. Each dot is a single cell. Catharanthine, n = 47; vindoline, n = 47; serpentine, n = 47; AHVB, n = 37 and vinblastine, n = 5. Source data Full size image When using an intensity threshold of 5 × 10^4 counts, 34,729 peaks were detected by the XCMS software package. We used CAMERA to group redundant signals (isotopes, adducts and so on) and kept only the most widely detected peak of each group, yielding 8,268 representative features. Finally, we excluded peaks that were not detected in all batches, for a total of 933 features. Raw areas were corrected by the intensity of the internal standard (ajmaline), log-transformed and center-scaled by batch to minimize batch-to-batch artifacts. Principal component analysis (PCA) unambiguously separated a group of cells that could be assigned as idioblasts based on the occurrence of serpentine (m/z 349.1547 (M)+, C21H20N2O3) and vindoline (m/z 457.2333 (M + H)+, C25H32N2O6), which have previously been reported to localize to idioblast cells 29,30,31 (Fig. 5b). Another group of cells, the epidermal cells, could be identified based on the occurrence of the secoiridoid secologanin (m/z 389.1442 (M + H)+, C17H24O10), known to be synthesized in epidermal cells 32 (Fig. 5b). Strictosidine, which is formed from secologanin and is also synthesized in the epidermis, was observed at low levels in only a few cells, because this compound accumulates in leaves younger than those used here. No iridoid intermediates were detected under these conditions, but iridoid intermediates do not accumulate at substantial levels and do not ionize efficiently in ESI+. Representative chromatograms of an idioblast and of an epidermal cell are shown in Fig. 5c and Supplementary Fig. 9. MS/MS fragmentation experiments, conducted on pooled-cell quality control (QC) samples, allowed unambiguous identification of key MIAs, such as catharanthine (m/z 337.1911 (M + H)+, C21H24N2O2), vindorosine (m/z 427.2227 (M + H)+, C24H30N2O5), vindoline and serpentine (Supplementary Fig. 10). Comparison between the methanolic extract of the leaf tissue used to generate the protoplasts and the protoplasts used for cell picking showed that the metabolite profile is not altered during protoplast preparation (Supplementary Fig. 11). External calibration using authentic standards allowed the quantification of selected metabolites within the cells (Supplementary Table 11). The concentrations of the analytes were corrected for the volume of the cells, calculated from the cell dimensions measured during the picking process. Surprisingly, the concentrations of the major metabolites accumulating in idioblasts were in the millimolar range. Although large differences in concentration were observed between individual cells, the average catharanthine concentration was 100 mM (Fig. 5d), which is unexpected because the enzyme that produces this metabolite (catharanthine synthase (CS)) is located in the epidermis. We detected only low amounts of catharanthine in a few epidermal cells, and the live-cell mass spectrometry studies discussed in refs.
29,30 also demonstrated that the majority of this MIA accumulates in the idioblast cells. Catharanthine was proposed to be exported from the epidermal cells to the cuticle through the action of the ABC transporter CrTPT2 (ref. 33). Although our mass spectrometry method could not measure metabolites on the leaf cuticle, our data clearly show that substantial amounts of catharanthine are sequestered inside leaf idioblast cells. Thus, we hypothesize that catharanthine is rapidly transported from the epidermis to the idioblasts. In contrast, the location of most detected MIAs, including vindoline, vindorosine and serpentine, corresponded perfectly with the cell-type expression of their biosynthetic enzymes 9,34. Intracellular quantification of metabolite levels will allow a better understanding of enzyme kinetic properties in vivo and of the rates of metabolic reactions, although subcellular compartmentalization and transport will also have to be taken into account. For instance, some metabolites, such as secologanin and vindoline, are synthesized in the cytoplasm and stored in the vacuole, making it difficult to relate their concentrations to steady-state enzyme kinetic parameters. Nevertheless, we believe this methodology has great potential to determine the extent of substrate saturation of metabolic enzymes and to study the free energy of metabolic reactions. Our analysis also targeted the bis-indole alkaloids anhydrovinblastine, vinblastine and vincristine, which had also been detected in idioblast cells using LSC-MS 29. Anhydrovinblastine (AHVB; m/z 397.2122 (M + 2H)2+, C46H56N4O8) was detected in the micromolar range in almost all of the idioblast cells analyzed, and its MS/MS fragmentation confirmed its identity (Supplementary Fig. 12). However, vinblastine (m/z 406.2175 (M + 2H)2+, C46H58N4O9) was found in only five cells, and vincristine was not detected at all (Supplementary Fig. 13). This reflects the low levels at which vinblastine and vincristine accumulate. However, catharanthine and vindoline, the proposed precursors of the bis-indole alkaloids, co-occur in the same cell type in which AHVB and vinblastine are present, suggesting that the enzymes involved in bis-indole biosynthesis should also be present in the idioblasts. Because the concentrations of catharanthine and vindoline are two orders of magnitude higher than those of AHVB and vinblastine, the coupling reaction leading to the bis-indole alkaloids is likely a rate-limiting step, which could be due to low expression or low specific activity of the coupling enzyme, or to intracellular compartmentalization of the two monomers hindering their coupling. The fact that AHVB and vinblastine levels do not correlate with upstream pathway intermediates is likely a major reason why the late-stage enzymes that convert vindoline and catharanthine to AHVB and vinblastine have been particularly challenging to elucidate (Fig. 6a). In this coupling reaction, an oxidase activates catharanthine, which then reacts with vindoline to form an iminium dimer. A reductase is then required to reduce this iminium species to form AHVB. Our scMet analysis revealed the presence of a chemical species with m/z 396.2044 ((M+ + H)2+, C46H55N4O8+), consistent with this iminium dimer, in idioblast cells, along with the monomers and AHVB (Supplementary Fig. 14).
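The reported ion masses can be checked directly from the molecular formulas. The following is a minimal R sketch (our illustration, not code from this study) that computes a theoretical m/z from monoisotopic masses; note that the electron removed from a pre-formed cation such as the iminium matters at this mass accuracy:

# Monoisotopic masses of the relevant elements (Da).
mono <- c(C = 12, H = 1.00782503, N = 14.00307401, O = 15.99491462)

# m/z of an ion observed at charge z. `counts` gives the atom counts of the
# species as written (including any pre-formed positive charge, net_charge);
# the remaining charges are supplied by added protons.
theoretical_mz <- function(counts, z, net_charge = 0) {
  m_e <- 0.00054858  # electron mass
  m_p <- 1.00727646  # proton mass
  m <- sum(mono[names(counts)] * counts) - net_charge * m_e
  (m + (z - net_charge) * m_p) / z
}

theoretical_mz(c(C = 46, H = 56, N = 4, O = 8), z = 2)                  # AHVB (M + 2H)2+
theoretical_mz(c(C = 46, H = 55, N = 4, O = 8), z = 2, net_charge = 1)  # iminium (M+ + H)2+

Running the two calls reproduces the values quoted above for AHVB (397.2122) and the iminium dimer (396.2044).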
The mechanism of the enzymatic reduction of the iminium intermediate to AHVB is not known, but recent work in our group has identified a number of medium-chain alcohol dehydrogenases in C. roseus that can reduce iminium moieties (GS, Redox1 and T3R; Supplementary Figs. 1 and 2). We therefore hypothesized that this reduction of the iminium dimer is carried out by a medium-chain alcohol dehydrogenase localized to idioblast cells. From the coexpression modules across cell types (Fig. 4c), we identified five idioblast-specific medium-chain alcohol dehydrogenases (Fig. 6b). These enzymes were heterologously expressed in Escherichia coli and assayed with the iminium dimer, which can be generated in vitro by incubating catharanthine and vindoline with commercial horseradish peroxidase, an enzyme known to catalyze the initial oxidative coupling 35. Two enzymes, THAS1 and THAS2, formed AHVB when incubated with the iminium dimer, with specific activities of 10.38 ± 0.27 μmol min−1 mg−1 and 6.26 ± 0.62 μmol min−1 mg−1, respectively (Fig. 6c). We attempted to demonstrate the function of THAS1 and THAS2 in planta by silencing them using VIGS. However, silencing THAS1 or THAS2, individually or simultaneously, did not substantially affect the levels of vinblastine, AHVB or the iminium dimer (Supplementary Figs. 15 and 16). Therefore, although the in vitro function of these enzymes is clear, we cannot definitively assign them a physiological function. It is important to note, however, that a number of medium-chain alcohol dehydrogenases from the MIA pathway (GS and T3R) have also failed to show strong chemical phenotypes when subjected to VIGS 36,37. Both THAS1 and THAS2 have previously been biochemically characterized as tetrahydroalstonine synthases, enzymes that generate tetrahydroalstonine by reduction of strictosidine aglycone. Notably, however, tetrahydroalstonine levels in the leaf are low, which is consistent with an alternative catalytic function for these enzymes in the leaf. Fig. 6: Reduction of an iminium dimer to form anhydrovinblastine. Discovery of ADH20 and comparison of kinetic parameters against THAS2 and other ADHs. a, A short chemical scheme showing the coupling and reduction steps. b, Expression heatmap at single-cell-type resolution. The color scale shows the average scaled expression of each gene at each cell type (Methods). Dot sizes indicate the fraction of cells in which a given gene is expressed at a given cell type. c, Specific activity of the five idioblast-localized ADH enzymes recombinantly expressed in E. coli and tested in vitro for activity toward the AHVB iminium. n = 3 for all assayed enzymes. d, Heatmap showing cells as columns and compounds as rows from the scMet experiment (n = 586 cells). Source data Full size image A peroxidase, CrPRX1, that can activate catharanthine to form the iminium intermediate has previously been reported 38. Surprisingly, this enzyme is selectively expressed in the epidermis (Fig. 6b), in contrast to the localization of vindoline, the iminium dimer and AHVB (Fig. 6d). Notably, the dimerization reaction can be catalyzed by nonspecific peroxidases, such as horseradish peroxidase, so we hypothesize that CrPRX1 is also a nonselective enzyme that has another function in planta. We did not identify any idioblast-specific peroxidase in the leaf scRNA-seq dataset.
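For reference, the specific activities quoted above (Fig. 6c) are simply product formed per unit time per milligram of enzyme. Assuming an initial-rate measurement, the arithmetic is a short R sketch (argument names are illustrative, not the authors' code), using the identity that 1 µM in 1 µl corresponds to 1 pmol:

# Specific activity (µmol min^-1 mg^-1) from the AHVB concentration formed in
# an in vitro assay of known volume, reaction time and enzyme amount.
specific_activity <- function(ahvb_uM, assay_vol_uL, time_min, enzyme_mg) {
  product_pmol <- ahvb_uM * assay_vol_uL  # 1 µM in 1 µl is 1 pmol
  product_umol <- product_pmol * 1e-6     # 1 µmol is 1e6 pmol
  product_umol / (time_min * enzyme_mg)
}

In practice the rate would be taken from the linear, early part of the time course monitored at the sampled time points.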
These omics datasets set the stage for future work, which will focus on functionally characterizing additional classes of oxidases (which are challenging to express functionally in heterologous systems) that are specifically localized to the idioblast and may act in the coupling and oxidation steps leading to vinblastine. Root single-cell transcriptome We also performed scRNA-seq on C. roseus roots to compare cell-specific expression in two distinct organs. Although catharanthine and tabersonine are present in both organs, the derivatization of tabersonine diverges in root and leaf (Supplementary Figs. 1 and 2). The tabersonine-derived product vindoline, which goes on to form AHVB, is found in leaves, whereas the tabersonine-derived hörhammericine is found in roots (Supplementary Figs. 1 and 2). The root scRNA-seq dataset captured the expression profiles of 2,636 cells and 18,190 genes from two biological replicates, which grouped into 11 clusters and six major tissue classes (Extended Data Fig. 1a). The clustering patterns are highly similar between the two replicates (Supplementary Fig. 17a). MAT was previously reported, by in situ hybridization 17, to be expressed in the epidermis and cortex. In the root scRNA-seq data, MAT was also found to have dual localization (clusters 4 and 8; Supplementary Fig. 17b). Cluster 4 contained marker genes for both endodermis and cortex (PBL15 and AED3) 39, whereas cluster 8 contained marker genes for atrichoblast epidermis (TTG2 and GL2) 3,39,40 (Supplementary Fig. 17c and Supplementary Table 8). Collectively, these results recapitulate the dual expression of a biosynthetic gene previously characterized by in situ hybridization. The spatial organization of the core MIA genes differed between leaf and root, highlighting the plasticity of cell-specific regulation in these two organs. In leaves, the MIA pathway switched from IPAP to epidermal cells at loganic acid methyltransferase (LAMT) and from epidermal to idioblast cells at 3-hydroxy-16-methoxy-2,3-dihydrotabersonine N-methyltransferase (NMT) (Fig. 3b), but in roots, the pathway was not partitioned into three discrete cell types (Extended Data Fig. 1b). Instead, the MEP and iridoid stages are specifically expressed in the ground tissue composed of cortex and endodermis (Extended Data Fig. 1b and Supplementary Fig. 18a), whereas the alkaloid stage, although also expressed in ground tissues, exhibits a more diffuse expression pattern (Extended Data Fig. 1b and Supplementary Fig. 18a). Parallel to vindoline biosynthesis in leaves, the late-stage derivatization enzymes that modify tabersonine to hörhammericine are found in a different cell type from the rest of the pathway genes. Tabersonine 6,7-epoxidase (TEX) 16, tabersonine 19-hydroxylase 41 and TAT 42 all show detectable expression in the epidermis, along with MAT 17 (Supplementary Fig. 18a). Our root scRNA-seq experiment used highly developed 8-week-old whole root systems that are much more complex than the root tips of Arabidopsis seedlings from which the root marker genes for dicots are derived. For example, secondary growth has not occurred in Arabidopsis seedling root tips, whereas in 8-week-old C. roseus roots we clearly observe secondary growth, which may explain the presence of unassigned cell types in our root dataset (Extended Data Fig. 1a).
Despite the highly developed state of the root tissue, we nonetheless observed cell-type-specific expression of the MIA pathway, primarily in the ground tissue, with the final reaction(s) present in the root epidermis (Extended Data Fig. 1b and Supplementary Fig. 18a). The v3 assembly resolved multiple tandemly duplicated paralogs, some of which were collapsed or fused in the v2 assembly. To clarify the potential functions of these paralogs, we compared their expression patterns across cell types in both leaf and root. We noticed examples in which a single paralog was preferentially expressed in a given cell type compared to the other paralog(s). For example, a single, collapsed 7DLGT locus in v2 was resolved into four separate loci in v3 that displayed cell-type-specific expression patterns in leaf and root, revealing neo- or subfunctionalization at the expression level (Extended Data Fig. 1c and Supplementary Fig. 18b). We also observed retention of expression patterns among paralogs. Iridoid synthase (ISY), well characterized by silencing in the leaf, has a tandemly duplicated paralog. Although the ISY paralog is expressed in leaf tissue, silencing showed no obvious changes 43, which can be explained by the redundant expression of both paralogs. In the root, both paralogs are expressed in the ground tissue along with the rest of the pathway; however, one of the paralogs is expressed in a higher percentage of cells (Extended Data Fig. 1c). Finally, both TEX2 and THAS3 (ref. 14) have a tandemly duplicated paralog for which cell-type-resolution expression patterns highlighted which paralog is likely involved in MIA biosynthesis (marked by an asterisk in Extended Data Fig. 1c). Discussion Over the last 15 years, next-generation sequencing technologies have allowed the rapid generation of transcriptomic and genomic datasets that facilitated gene discovery in many plant species. Here we report how state-of-the-art omics methods not only accelerate gene discovery through high-resolution spatial mapping of gene expression but also how complementary omics data facilitate the construction of a more holistic view of the genome, encapsulating genes, gene regulation in 2D (linear) and 3D (chromatin) space, and genic end products, that is, metabolites. Generation of a highly contiguous, chromosome-scale genome assembly revealed substantial duplication of MIA biosynthetic pathway genes, some of which are clustered in the linear genome. Chromatin interaction maps revealed 3D clustering of a subset of MIA pathway genes, some exhibiting organ-level specificity and interactions via chromatin loops, suggesting that TADs, in addition to physical colocalization, can be used to identify genes involved in specialized metabolism. The detection of TADs and chromatin loops of physically clustered genes is consistent with the coregulation hypothesis on the origins of biosynthetic pathway gene clustering, as 3D chromatin interactions serve key roles in gene regulation 44. The ability to detect cell-type-specific gene expression revealed that the MIA pathway is spatially and sequentially partitioned across discrete cell types in the C. roseus leaf, permitting the construction of cell-type-specific coexpression modules for IPAP, epidermis and idioblast cells.
Notably, the cell-type expression profiles of MIA biosynthesis genes in leaf and root are different, highlighting the plasticity of gene expression networks between organs as well as the neo- and subfunctionalization, at the gene expression level, of MIA biosynthetic pathway paralogs. Aside from the MIA pathway, the multicellular localization patterns of only a few specialized metabolite pathways have been investigated in full. In the morphine biosynthetic pathway, biosynthetic enzymes are synthesized in companion cells and then delivered to sieve elements, where the early steps of the pathway take place. Later-stage intermediates are then transported from the sieve elements to laticifers, where the enzymes involved in the late steps of the morphine pathway are localized and where morphine and other alkaloids accumulate 45. Additionally, the localization of the glucosinolate pathway in Arabidopsis thaliana has been established 4. Biosynthetic enzymes for aliphatic glucosinolate biosynthesis appear to be located in xylem parenchyma cells and phloem cells, while indole glucosinolate biosynthetic enzymes are localized to sieve-element-neighboring parenchyma cells of the phloem, and glucosinolates are transported to and stored in S-cells. The reasons for the distinct localization of the MIA pathway are not known. The spatial organization of natural products could have an important role in how these metabolites function in defense or signaling. Notably, strictosidine and strictosidine glucosidase, which likely serve an antifeedant role 46, are located in the epidermis. More derivatized alkaloids, which do not yet have a known ecological role, appear to be derivatized and then stored in the idioblasts, comparable to the role laticifers have in benzylisoquinoline alkaloid biosynthesis. Alternatively, the localization pattern may simply be an accident of evolution, in which the biosynthetic enzymes evolved from pre-existing enzymes located in these cell types. The partitioning observed in MIA biosynthesis does not appear to serve any obvious chemical function, such as separating intermediates that may cross-react. The high-throughput mass spectrometry method developed here not only showed which metabolites co-occur in distinct cell types but also allowed us to measure the concentrations of metabolites across a cell population. Notably, although catharanthine is synthesized in the epidermis (as evidenced by the localization of the biosynthetic enzyme CS), this alkaloid colocalizes with vindoline, which is synthesized in idioblasts (as evidenced by the localization of the biosynthetic enzymes D4H and DAT). Therefore, catharanthine must be intercellularly transported from the epidermis to the idioblast. Notably, the concentrations of catharanthine and vindoline, which dimerize to form AHVB, were in the high millimolar range. In contrast, the dimerization product AHVB was in the micromolar range, indicating that the coupling step is rate-limiting. This discovery serves as a starting point for designing strategies to genetically engineer C. roseus plants with higher levels of AHVB. These state-of-the-art omics datasets provide a foundation for the discovery of the remaining genes and regulatory sequences involved in cell-type-specific MIA biosynthesis and transport in C. roseus. One-quarter of all pharmaceuticals are derived from plants 47; here we show the power of single-cell multi-omics for natural product gene discovery in plants.
We anticipate that the application of complementary single-cell omics methods will be essential in tapping the wealth of chemistry present across the plant kingdom. Methods Genome sequencing and assembly C. roseus cv ‘Sunstorm Apricot’ was grown under a 15 h photoperiod at 22 °C and dark-treated for 36 h before harvesting leaves from 17-week-old plants. High-molecular-weight DNA was isolated using a QIAGEN Genomic-tip 500/G after a crude cetyltrimethylammonium bromide extraction 48. ONT libraries were prepared using the ONT SQK-LSK110 kit and sequenced on R9 FLO-MIN106 Rev D flow cells; the latest software available at the run date for each library was used (Supplementary Table 4). ONT whole-genome shotgun libraries were base-called using Guppy (5.0.7 + 2332e8d) with the high-accuracy model (dna_r9.4.1_450bps_hac). Reads less than 10 kb were filtered out using seqtk (v1.3), and the remaining reads greater than 10 kb were assembled with Flye (v2.8.3-b1695) 49 using the parameters -i 0 and --nano-raw. The assembly was polished with two rounds of Racon (v1.4.20) 50, followed by two rounds of Medaka (v1.4.3) using the ‘r941_min_hac_g507’ model and, finally, three rounds of Pilon (v1.23) 51 using Illumina whole-genome shotgun reads (Supplementary Table 4). Hi-C libraries were constructed from immature leaf tissue grown under a 15 h photoperiod at a constant 22 °C, following manufacturer recommendations, using the Arima Hi-C 2.0 Kit (Arima Genomics; CRO_AN, CRO_AO; Supplementary Table 4). Hi-C libraries were sequenced on an S4 flow cell in paired-end mode, generating 151-nucleotide (nt) reads, on an Illumina NovaSeq 6000 (Illumina). Contigs less than 10 kb were removed from the assembly using seqtk (v1.3). Pseudochromosomes were constructed using the Juicer (v1.6) 52 and 3D-DNA (git commit 429ccf4) 53 pipelines with the Illumina Hi-C sequencing data and default parameters. To produce the target file for adaptive finishing, 5-kb ends of contigs from the primary assembly were used (full sequences for contigs <10 kb in size) in the first run, while in the second run, 30-kb ends of contigs were used (full sequences for contigs <60 kb in size). In the second run, half of the channels in the flow cell were set to adaptively sample. Base-calling was performed with Guppy v5.0.16 (nanoporetech.com/community) with the following parameters: --config dna_r9.4.1_450bps_hac.cfg --trim_strategy dna --calib_detect. seqtk v1.3 (github.com/lh3/seqtk) was used to filter reads that were adaptively sampled (not rejected) by the pore. Adaptive finishing reads and the bulk ONT genomic reads were used to fill gaps in the pseudomolecules and unanchored scaffolds using the DENTIST pipeline (v3.0.0) 54 with the following parameters: read-coverage: 90.0, ploidy: 2, max-coverage-self: 3 and join-policy: scaffoldGaps. Genome annotation A custom repeat library was created using RepeatModeler (v2.0.3) 55. Putative protein-coding genes were removed using ProtExcluder (v1.2) 56, and Viridiplantae repeats from RepBase 57 v20150807 were added to create the final custom repeat library. The final genome assembly was hard-masked and soft-masked using the custom repeat library and RepeatMasker (v4.1.2) 58. To provide transcript evidence for genome annotation and gene expression abundance estimations, publicly available mRNA-seq libraries were downloaded from the National Center for Biotechnology Information (Supplementary Table 4).
RNA-seq libraries were processed with Cutadapt (v2.10) 59 with the following parameters: --minimum-length 75 and --quality-cutoff 10. Cleaned reads were aligned to the assembly with HISAT2 (v2.1.0) 60 with the following parameters: --max-intronlen 5000 --rna-strandness ‘RF’ --no-unal --dta; transcript assemblies were generated from the alignments using StringTie2 (v2.2.1) 61. FL-cDNA sequences were generated from pooled replicates of young leaf, mature leaf, stem, flower and root tissue from 16-week-old C. roseus cv ‘Sunstorm Apricot’ plants grown in the greenhouse (Supplementary Table 4). RNA was isolated using the Qiagen RNeasy Plant Mini Kit, followed by mRNA isolation using the Dynabeads mRNA Purification Kit (Thermo Fisher Scientific, 61011). cDNA libraries were constructed using the ONT SQK-PCB109 kit and sequenced on R9 FLO-MIN106 Rev D flow cells; one library per tissue was constructed and sequenced on a single flow cell. ONT cDNA libraries were base-called using Guppy (v6.0.6 + 8a98bbc, nanoporetech.com/community) using the SUP model (dna_r9.4.1_450bps_sup.cfg) and the following parameters: --trim_strategy none --calib_detect. Base-called reads were processed with Pychopper (v2.5.0, github.com/nanoporetech/pychopper) to identify putative FL-cDNA reads, which were then aligned to the genome assembly using Minimap2 (ref. 62; v2.17-r941) with the following parameters: -a -x splice -uf -G 5000. Transcript assemblies were generated from the alignments using StringTie2 (ref. 61; v2.2.1). Initial gene predictions were generated using the BRAKER 63 (v2.1.5) pipeline with the RNA-seq StringTie genome-guided alignments as transcript evidence. Gene predictions were refined using the RNA-seq and ONT cDNA transcript assemblies with two rounds of PASA2 (ref. 64; v2.4.1). MIA biosynthetic pathway genes were manually curated using WebApollo 65 (v2.6.5). Functional annotation of the gene models was generated by searching against the Arabidopsis proteome 66 (TAIR10), Swiss-Prot plant proteins and PFAM 67 (v35) and assigning the function from the first informative match. Detection of paralogous MIA biosynthetic genes was performed using OrthoFinder 68. Gene expression abundance estimations with bulk mRNA-seq samples Publicly available C. roseus mRNA-seq datasets were downloaded from the National Center for Biotechnology Information SRA (Supplementary Table 4). Reads were cleaned using Cutadapt 59 (v4.0) to trim adapters and remove low-quality sequences. Cleaned reads were aligned to the C. roseus genome (v3.0) using HISAT2 (ref. 60; v2.2.1) with a maximum intron size of 5 kb. Fragments per kilobase of transcript per million mapped reads (FPKM) values were generated using Cufflinks 69 (v2.2.1) with the following parameters: --multi-read-correct, --max-bundle-frags 999999999, --min-intron-length 20 and --max-intron-length 5000. Chromosome conformation capture and analysis methods C. roseus cv ‘Sunstorm Apricot’ leaf tissue, collected at the same time as one replicate of the 10x scRNA-seq experiments, was used with the Proximo Hi-C kit (Phase Genomics; CRO_AR) to generate Hi-C reads, which were sequenced on the Illumina NovaSeq 6000, generating paired-end 150 nt reads. Fastq files were processed with Juicer 52 and Juicer Tools v2.13.07 (ref. 52) to produce the .hic file (github.com/aidenlab/juicer/wiki). The inter30.hic output, which contains chromosomal interactions supported by reads with mapping quality ≥30, was used for all downstream analyses. For loops, HiCCUPS (github.com/aidenlab/juicer/wiki/HiCCUPS) was used to detect chromatin loops at 5-kb resolution.
For TAD domains, straw (github.com/aidenlab/straw) was used to access the data and write .txt files for each chromosome at 10-kb resolution. HiCkey (github.com/YingruWuGit/HiCKey) was used to detect TAD boundaries; all P values were corrected for multiple testing using the false discovery rate method. Protoplast isolation and scRNA-seq library generation Protoplasts were isolated from young leaf tissue of 13–14-week-old C. roseus cv ‘Sunstorm Apricot’ plants and used to generate scRNA-seq libraries using the 10x Chromium Controller (10x Genomics) and the Drop-Seq platform. Approximately 2 g of leaf tissue was used for protoplasting. For the 10x Genomics scRNA-seq, leaves were collected and vacuum infiltrated with enzyme solution (0.4 M mannitol, 20 mM MES, 1.5% (wt/vol) cellulase (‘Cellulysin’; Sigma Aldrich, 219466), 0.3% (wt/vol) macerozyme R-10 (RPI, M22010), 1 mM calcium chloride, 0.1% BSA, pH 5.7) for 10 min at 400 mbar before being placed into a Petri dish and shaken for 1 h and 45 min at 50 r.p.m. The plates were then shaken at 80–100 r.p.m. for 5 min to increase cell recovery. The resulting protoplast solution was filtered through a 40-µm mesh filter into a 50 ml tube, with 5 ml of storage solution (0.4 M mannitol, 20 mM MES, 1 mM calcium chloride, 0.1% bovine serum albumin, pH 5.7) used to rinse the plate and increase cell recovery. Protoplasts were gently pelleted at 150–200 g for 3 min at 4 °C, and the supernatant was removed. The protoplasts were then gently resuspended in storage solution to be counted and used for scRNA-seq library preparation, with additional filtering performed as needed. To generate a root single-cell expression dataset, ~5 g of roots from ~8-week-old plants was used and processed similarly to leaves. Totals of ~10,700 (leaf), ~2,600 (leaf), ~5,800 (leaf), ~3,400 (root) and ~4,000 (root) cells were used to generate 10x scRNA-seq libraries (Supplementary Table 12). In brief, the protoplast suspensions were loaded into a Chromium microfluidic chip, and gel bead-in-emulsions (GEMs) were generated using the 10x Chromium Controller (10x Genomics); libraries were constructed using the Single Cell 3′ v3.1 Kit (10x Genomics) according to the manufacturer’s instructions. For the Drop-Seq library, protoplasts were isolated from young leaf tissue of C. roseus cv ‘Sunstorm Apricot’ and used to generate scRNA-seq libraries following the Drop-Seq method 24 (mccarrolllab.org/download/905/). In brief, leaves were collected and protoplasts were generated as detailed above with the following modifications to the solutions: enzyme solution (0.6 M mannitol, 10 mM MES, 1.5% (wt/vol) cellulase (‘Cellulysin’; Sigma Aldrich, 219466), 0.3% (wt/vol) macerozyme R-10 (RPI, M22010), 1 mM calcium chloride, 0.1% BSA, pH 5.7) and storage solution (0.6 M mannitol, 10 mM MES, 1 mM calcium chloride, 0.1% bovine serum albumin, pH 5.7). In total, ~115,000 protoplasts were run through the Drop-Seq protocol to generate the single-cell libraries. The 10x Genomics library SCP_AH was sequenced on a NextSeq 500 mid-output flow cell, and CRO_AS, CRO_AT, CRO_AW and CRO_AX were sequenced on a NextSeq 2000 P3 flow cell, with all runs sequenced with Read 1 at 28 nt, Read 2 at 91 nt and the index at 8 nt, in accordance with manufacturer recommendations. Drop-Seq libraries CRO_AA and CRO_AB were sequenced on three lanes of an Illumina MiSeq v3, with Read 1 being 25 nt and Read 2 being 100 nt.
Single-cell transcriptome analysis For 10x Genomics reads, Read 2 was cleaned using Cutadapt 59 (v4.0) to remove adapters and poly-A tails; cleaned reads were then re-paired with Read 1. Cleaned reads were then aligned to a merged C. roseus genome (v3.0) and C. roseus chloroplast genome (NC_021423.1) using the STARsolo pipeline of STAR (v2.7.10) 70 with the following parameters: --alignIntronMax 5000, --soloUMIlen 12, --soloCellFilter EmptyDrops_CR, --soloFeatures GeneFull, --soloMultiMappers EM, --soloType CB_UMI_Simple and --soloCBwhitelist set to the latest 10x Genomics barcode whitelist. Drop-Seq reads were processed in accordance with established Drop-Seq processing methods (github.com/broadinstitute/Drop-seq/blob/master/doc/Drop-seq_Alignment_Cookbook.pdf) using DropSeqTools (v2.5.1). Reads were trimmed using Cutadapt 59 (v4.0) to remove adapters and poly-A tails and aligned to a merged C. roseus genome (v3.0) and C. roseus chloroplast genome (NC_021423.1) using HISAT2 (v2.2.1) 60 with the parameters --dta-cufflinks, --max-intronlen 5000 and --rna-strandness R. The Drop-Seq processing pipeline allows various filtering approaches for data output; a minimum cutoff of 100 reads per cell was used to output our digital expression matrix. Removal of ambient RNA read counts Four 10x libraries (CRO_AS, CRO_AT, CRO_AW and CRO_AX; Supplementary Table 12) were sequenced far deeper than the recommended 25,000 reads per cell, which led to the detection of higher background (that is, ambient) expression of cell-type-specific genes (Supplementary Fig. 19a). We used the R package DecontX 71 to estimate and remove ambient RNA reads from the feature-count matrices of these four libraries. Median DecontX-removed ambient reads per cell are reported in Supplementary Table 12; the ambient-read-removed matrices for the abovementioned libraries were used for downstream analyses (Supplementary Fig. 19b). Cell type clustering Drop-Seq and 10x Genomics expression matrices were loaded as Seurat objects 23. Observations were filtered for between 200 and 3,000 RNA features and log-normalized. The top 3,000 variable genes were selected for all runs and integrated using the ‘IntegrateData()’ function from Seurat. Uniform manifold approximation and projection (UMAP) embeddings were calculated using the first 30 principal components with the following parameters: ‘dims = 1:30, min.dist = 0.001, repulsion.strength = 1, n.neighbors = 30 and spread = 1’. We curated a set of epidermis, mesophyll and vasculature marker genes for C. roseus (Supplementary Table 8) using orthologs 68 of known markers from Arabidopsis 72,73. For root cell markers, we curated markers from Arabidopsis 3,40,73,74 and C. roseus 17 (Supplementary Table 8). For single-cell gene expression heat maps (Figs. 4b and 6b and Extended Data Fig. 1b,c), the average expression of each gene at each cell type is computed as the averaged Z score of log-transformed normalized expression values, such that the color scale for each gene is relative to the mean and standard deviation of that gene across all cells. Dot sizes indicate the fraction of cells in which a given gene is expressed (>0 reads) at a given cell type. Gene coexpression analyses Pairwise correlation coefficients between the top 3,000 variable genes were computed using the ‘cor()’ function in R. A network edge table was produced from the correlation matrix, where each row is a gene pair (see the sketch below).
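As a concrete illustration of the edge table and of the significance filtering described in the following sentences, a hedged base-R sketch (assumed inputs, not the authors' code; `expr_top` is a genes × cells matrix restricted to the top variable genes):

# Correlate every pair of the top variable genes across cells, reshape the
# upper triangle into an edge table, then attach t-distribution P values and
# FDR-adjusted values (pairs with FDR < 0.01 are kept downstream).
build_edge_table <- function(expr_top) {
  cmat <- cor(t(expr_top))                      # genes x genes Pearson correlations
  n <- ncol(expr_top)                           # number of cells
  ut <- upper.tri(cmat)
  idx <- which(ut, arr.ind = TRUE)
  edges <- data.frame(
    from = rownames(cmat)[idx[, 1]],
    to   = colnames(cmat)[idx[, 2]],
    r    = cmat[ut]
  )
  tstat <- edges$r * sqrt(n - 2) / sqrt(1 - edges$r^2)
  edges$p   <- 2 * pt(-abs(tstat), df = n - 2)  # t-distribution approximation
  edges$fdr <- p.adjust(edges$p, method = "fdr")
  edges
}

The rows with FDR < 0.01 can then be passed to igraph's ‘graph_from_data_frame()’, as described next.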
Statistical significance was calculated using a t-distribution approximation and adjusted for multiple testing using FDR. Only pairs with FDR < 0.01 were selected for downstream analysis. The network node table was constrained to known MIA biosynthetic enzymes and their first-degree network neighbors. A graphical representation of the network was produced using the ‘graph_from_data_frame()’ function from igraph 75, using the filtered edge table and constrained node table as input. Network visualization was done using the ggraph package (ggraph.data-imaginist.com/). Leaf protoplast isolation for scMet analysis Three leaves, around 3.5 cm in length, were selected. The leaves were cut into 1 mm strips with a sterile surgical blade. The leaf strips were then immediately transferred to a Petri dish with 10 ml of digestion medium (2% (wt/vol) cellulase Onozuka R-10, 0.3% (wt/vol) macerozyme R-10 and 0.1% (vol/vol) pectinase dissolved in mannitol/MES (MM) buffer). MM buffer contained 0.4 M mannitol and 20 mM MES, pH 5.7–5.8, adjusted with 1 M KOH. The open Petri dish was placed inside a desiccator, and a 100 mbar vacuum was applied for 15 min to infiltrate the medium into the leaf strips. The vacuum was gently released for 10 s after every 1 min. The leaf strips were then incubated in the digestion medium for 2.5 h at room temperature. After the incubation, the Petri dish was placed on an orbital shaker at around 70 r.p.m. for 30 min at room temperature to help release the protoplasts. The protoplast suspension was filtered through a nylon sieve (70 μm) to remove larger debris and gently transferred to two 15 ml conical tubes. The protoplast suspension was centrifuged at 70 g, with gentle acceleration/deceleration, for 5 min at 23 °C to pellet the protoplasts. As much of the supernatant as possible was removed, and the protoplast pellet in each tube was washed three times by adding 5 ml of MM buffer, swirling gently and centrifuging. Finally, the pellets from the two tubes were pooled and resuspended in 1 ml of MM buffer. The protoplast concentration was determined using a hemocytometer and adjusted to 10^6 protoplasts per ml. Cell picking for scMet analysis A SIEVEWELL chip (ASL) with 90,000 nanowells (50 μm × 50 μm, depth × diameter) was used for single-cell trapping and sorting. The SIEVEWELL chip was primed with 100% ethanol, and then 1 ml of DPBS was immediately added to the chamber and discarded through the side ports for washing. This washing step was repeated two times. After that, the chip was coated by carefully adding 1 ml of 1.5% BSA in DPBS and subsequently discarding the liquid through the side port. Then, MM buffer was added to replace the 1.5% BSA in DPBS solution. Finally, 1 ml of protoplast suspension was carefully added and dispensed in a z-shape across the well. One milliliter of liquid was then discarded through the side ports. The SIEVEWELL was then mounted on the CellCelector Flex (ALS Automated Lab Solutions) instrument, and the cells were visualized using the optical unit, consisting of a fluorescence microscope (Spectra X Lumencor) and a CCD camera (XM-10). Photos in transmitted light were acquired to cover the entire chip. Single protoplasts were picked using a 50 μm glass capillary and dispensed into 96-well plates containing 5 μl of 0.1% formic acid. Pictures of the nanowells before (with single cell) and after picking (without single cell) were recorded.
Cells were dried overnight and frozen at −20 °C until analysis. scMet method The UPLC–MS analysis was performed on a Vanquish (Thermo Fisher Scientific) system coupled to a Q-Exactive Plus (Thermo Fisher Scientific) orbitrap mass spectrometer. For metabolite separation, a Phenomenex Kinetex XB-C18 100 Å column (50 × 1.0 mm, 2.6 µm) was used at a temperature of 40 °C. The binary mobile phases were 0.1% HCOOH in MilliQ water (aqueous phase, A) and acetonitrile (ACN, B). The gradient elution started at 99% aqueous phase, increased to 70% ACN over 5.5 min and then to 100% ACN over the next 0.1 min. The percentage of ACN was held at 100% for 2 min before switching back to 99% aqueous phase in 0.1 min. Finally, ACN was kept at 1% for 2.5 min to condition the column for the next injection. The total time for chromatographic separation was 12 min. In total, 4 μl of standards or samples was injected into the column via the autosampler. The flow rate of the mobile phase was kept constant at 0.3 ml min−1 during the chromatographic separation. Both samples and standard solutions were kept at 10 °C in the sample tray. The needle in the autosampler was washed using a mixture of methanol and MilliQ water (1:1, vol/vol) for 20 s after each draw, at a speed of 50 µl s−1. The mass spectrometer was equipped with a heated electrospray ionization (HESI) source and was calibrated using the Pierce positive and negative ion mass calibration solution (Thermo Fisher Scientific). The HESI operating parameters were based on the UPLC flow rate of 300 µl min−1 using the source auto defaults: sheath gas flow rate, 48; auxiliary gas flow rate, 11; sweep gas flow rate, 1; spray voltage, +3,500 V; capillary temperature, 250 °C; auxiliary gas heater temperature, 300 °C and S-lens RF level, 50. Acquisition was performed in full-scan MS mode (resolution 70,000 FWHM at m/z 200) in positive mode over the m/z range 120–1,000. The full MS/dd-MS2 mode (full-scan and data-dependent MS/MS), with the target analytes included, was used on the pooled QC samples to simultaneously record the precursor spectra and confirm the MS/MS fragments of the selected precursors. The dd-MS2 was set up with the following parameters: resolution, 35,000 FWHM; mass isolation window, 0.4 Da; maximum and minimum automatic gain control targets, 8 × 10^3 and 5 × 10^3, respectively; normalized collision energy set at three levels (15%, 30% and 45%) and spectrum data format, centroid. All parameters of the UPLC–MS system were controlled through Thermo Fisher Scientific Xcalibur software version 4.3.73.11 (Thermo Fisher Scientific). Chromatography and MS responses were optimized using several reference compounds (Supplementary Tables 10 and 11). Preparation of cells and QC samples Before the analysis, the single cells were resuspended in 12 µl of 0.1% formic acid containing 10 nM ajmaline as an internal standard. For pooled QC samples, 2 µl of sample from each well was taken and pooled. For the QC sample, 20 µl of C. roseus leaf protoplasts was extracted with 500 µl of pure MeOH. After sonication (10 min) and vortexing, the protoplast extract was filtered, diluted 200-fold with 0.1% formic acid containing 10 nM ajmaline and used as the QC run. For the QC total sample, we used a methanolic extract of one of the leaf strips used for making protoplasts.
The tissue was ground to a fine powder using a TissueLyser II (Qiagen). Metabolites were extracted from the powdered leaf sample with 300 µl of pure MeOH. After vortexing and sonication for 10 min, the leaf extracts were filtered, and 5 µl aliquots were placed into Eppendorf tubes and dried under vacuum. Before analysis, one Eppendorf tube was taken out, resuspended in 1 ml of the extraction solution (0.1% formic acid containing 10 nM ajmaline), sonicated for 10 min, filtered and used as the QC total. Preparation of standard solutions and calibration curves Catharanthine (Sigma Aldrich), vindoline (Chemodex), serpentine hydrogen tartrate (Sequoia Research Products), anhydrovinblastine disulfate (Toronto Research Chemicals) and vinblastine sulfate (Sigma Aldrich) were dissolved in pure MeOH at a concentration of 10 µM. Serial dilutions (n = 15) were made between 0.1 nM and 1,000 nM and analyzed by UPLC–MS. The extracted peak areas were used to calculate linear regression curves (Supplementary Table 10). XCMS analysis and statistical analysis Peak detection was performed using the XCMS 76 centWave 77 algorithm with a prefilter intensity threshold of 5 × 10^4 counts, a maximum m/z deviation of 3 ppm, a signal-to-noise ratio greater than 5 and a peak width of 5–30 s, integrating on the real data. Peaks were grouped using a density approach with a bandwidth of 1, retention time was corrected with locally estimated scatterplot smoothing (loess) and a symmetric fit (Tukey’s biweight function), and peaks were regrouped after correction. Finally, gap-filling was performed by integrating the raw data in each peak group region. We used CAMERA 78 to group redundant features (isotopes, adducts and in-source fragments), taking the injections of the pooled QC samples as representative runs. Peaks were grouped within a window of 50% of the full width at half maximum (FWHM); isotopes were detected for singly and doubly charged species with an error of 1 ppm; pseudospectra were grouped by within-sample correlation of extracted ion chromatograms, with a correlation threshold of 0.85 and a P value threshold of 0.05; and, finally, adducts were determined for single cluster ions with a 1 ppm error. Only one representative feature for each correlation group was selected, with preference given to the peaks detected in the largest number of single-cell samples and ties broken by the total sum of intensities. To reduce variation due to injection, we scaled raw areas by the recovery of the internal standard (ajmaline) in each run. Artifacts were removed by keeping only features that were detected in all injection batches, and the ajmaline-corrected, log-transformed areas were centered and scaled on a per-batch basis to minimize the effect of batch-to-batch variation (see the sketch below). PCA was performed on this matrix using the base library of the R programming language (v4.1.3). We assigned the identities of detected features by searching for the exact mass within the limits of the feature m/z as detected by XCMS and a retention time within 5 s of the experimental elution time of the standard, and we manually verified the assignments by comparing the QC runs against an injection of a mix of standards performed at the beginning and end of each batch. VIGS of the SLTr transporter, D4H, THAS1, THAS2 and THAS1/2 A 516 bp fragment of the SLTr transporter was amplified from C. roseus Sunstorm Apricot leaf cDNA using the primers reported in Supplementary Table 13 and cloned into the pTRV2u vector as previously described 32.
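The per-batch normalization and PCA referenced above can be sketched as follows (a hedged illustration with assumed inputs, not the authors' code; `areas` is a cells × features matrix of raw peak areas, `ajmaline` the per-run internal-standard intensity and `batch` the plate assignment of each cell):

# Internal-standard correction, log transform, per-batch centering/scaling,
# then PCA with base R.
normalize_and_pca <- function(areas, ajmaline, batch) {
  x <- log10(sweep(areas, 1, ajmaline, "/"))  # divide each run by its ajmaline recovery
  for (b in unique(batch)) {
    rows <- batch == b
    x[rows, ] <- scale(x[rows, ])             # center and scale within each batch
  }
  prcomp(x, center = FALSE, scale. = FALSE)   # data are already centered and scaled
}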
A 300 bp fragment of the THAS1 gene was PCR-amplified from C. roseus genomic DNA (gDNA), while two 300 bp fragments from the same region of the THAS2 gene, with BamHI or SalI 5′ restriction overhangs, respectively, were PCR-amplified from C. roseus cDNA. For the THAS1 and THAS2 double construct, the THAS1 gene fragment was cut with XhoI and ligated to the SalI-cut THAS2 PCR fragment carrying the SalI overhang (XhoI and SalI generate compatible overhangs). The resulting fragments carrying BamHI and XhoI restriction overhangs were cloned into the BamHI- and XhoI-digested VIGS vector pTRV2-MgChl (pTRV2-ChlH(F1R2) from ref. 79) using the In-Fusion Snap Assembly Master Mix (Clontech Takara), yielding plasmids pTRV2-THAS1, pTRV2-THAS2 and pTRV2-THAS1/2, respectively. A 234 bp fragment of the D4H gene was amplified from cDNA and ligated into pTRV2-MgChl using the In-Fusion Snap Assembly Master Mix (Clontech Takara), yielding plasmid pTRV2-D4H. The primer sequences used for cloning are given in Supplementary Table 12. Genomic DNA from C. roseus was isolated using the DNeasy Plant Mini Kit (Qiagen), and total RNA was isolated using the RNeasy Plant Mini Kit (Qiagen) and converted to cDNA using SuperScript IV VILO reverse transcriptase (Thermo Fisher Scientific). PCR was performed using Phusion DNA Polymerase (Thermo Fisher Scientific) according to the manufacturer’s instructions. All restriction enzymes were from New England BioLabs. Agrobacterium tumefaciens GV3101 electrocompetent cells (Goldbio, CC-207) were transformed with the constructs by electroporation. Agrobacterium GV3101 cells containing pTRV1 and the pTRV2 constructs were grown overnight in 5 ml LB supplemented with rifampicin, gentamicin and kanamycin at 28 °C. Cultures were pelleted at 3,000 g, resuspended in inoculation solution (10 mM MES, 10 mM MgCl2 and 200 µM acetosyringone) to an OD600 of 0.7 and incubated at 28 °C for 2 h. Transformants were confirmed by PCR using the gene-specific primers used to amplify the gene fragments. Strains containing pTRV2 constructs were mixed 1:1 with the pTRV1 culture, and this mixture was used to inoculate plants by the pinch-wounding method. For each construct, 8–12 four-week-old plants were inoculated. Silenced leaves were collected 21–25 d post-inoculation, ground in a TissueLyser II (Qiagen) and stored at −80 °C before analysis by quantitative PCR (qPCR) and UPLC–MS. qPCR was performed as reported previously 32 using the primers in Supplementary Table 12. For UPLC–MS of the VIGS tissue, 10–20 mg of frozen powder was extracted in 1:30 wt/vol methanol containing 2 µM ajmaline as an internal standard, sonicated for 10 min and incubated at room temperature for 1 h. After filtration through 0.2 µm polytetrafluoroethylene filters, the samples were diluted 1:10 with methanol before analysis by UPLC–MS. Samples were analyzed on either a Shimadzu IT-TOF or a Bruker Impact II qTOF instrument. Chromatography was performed on a Phenomenex Kinetex C18 XB column (100 × 2.10 mm, 2.6 µm), and the binary solvent system consisted of ACN and 0.1% formic acid in water. Compounds were separated using a linear gradient of 10–30% B in 5 min, followed by 1.5 min isocratic at 100% B. The column was then re-equilibrated at 10% B for 1.5 min. The column was heated to 40 °C, and the flow rate was set to 0.6 ml min−1. Protein expression and activity assays Cloning of THAS1 (KM524258.1) and THAS2 (KU865323.1) was described in ref. 14. ADH20 (KU865330.1), ADH32 (AYE56096.1) and ADH92 (ON911573) genes were amplified from C.
roseus ‘Sunstorm Apricot’ leaf cDNA using the primers reported in Supplementary Table 13. The sequences of ADH20 and ADH92 are reported in Supplementary Table 14. The PCR products were purified from agarose gel, ligated into the BamHI and KpnI restriction sites of the pOPINF vector 80 using the In-Fusion kit (Clontech Takara) and transformed into chemically competent E. coli One-Shot Top10 cells (Thermo Fisher Scientific, C404010). Recombinant colonies were selected on LB agar plates supplemented with carbenicillin (100 μg ml−1). Positive clones were identified by colony PCR using the T7_Fwd and pOPIN_Rev primers (Supplementary Table 13). Plasmids were isolated from positive colonies grown overnight, and the identities of the inserted sequences were confirmed by Sanger sequencing. Chemically competent SoluBL21 E. coli cells (Amsbio) were transformed by heat shock at 42 °C, and transformed cells were selected on LB agar plates supplemented with carbenicillin (100 μg ml−1). Single colonies were used to inoculate starter cultures in 10 ml of 2× YT medium supplemented with carbenicillin (100 μg ml−1), which were grown overnight at 37 °C. Starter culture (1 ml) was used to inoculate 100 ml of 2× YT medium containing the antibiotic. The cultures were incubated at 37 °C until the OD600 reached 0.6 and then transferred to 16 °C for 30 min before induction of protein expression by the addition of IPTG (0.2 mM). Protein expression was carried out for 16 h. Cells were harvested by centrifugation and resuspended in 10 ml of buffer A (50 mM Tris–HCl pH 8, 50 mM glycine, 500 mM NaCl, 5% glycerol, 20 mM imidazole) with EDTA-free protease inhibitors (Roche Diagnostics). Cells were lysed by sonication for 2 min on ice. Cell debris was removed by centrifugation at 35,000 g for 20 min. Ni-NTA resin (200 μl, Qiagen) was added to each sample, and the samples were incubated at 4 °C for 1 h. The Ni-NTA beads were sedimented by centrifugation at 1,500 g for 1 min and washed three times with 10 ml of buffer A. The enzymes were step-eluted using 600 µl of buffer B (50 mM Tris–HCl pH 8, 50 mM glycine, 500 mM NaCl, 5% glycerol, 500 mM imidazole) and dialyzed against buffer C (25 mM HEPES pH 7.5, 150 mM NaCl). Enzymes were concentrated and stored at −20 °C before in vitro assays. To assay the activity of the ADHs, the substrate (the AHVB iminium) first had to be generated in vitro. For this purpose, 500 µl reactions were assembled in 50 mM MES buffer (pH 6.5). Each reaction contained 100 µM vindoline, 600 µM catharanthine, 0.002% hydrogen peroxide and 22.5 U of horseradish peroxidase (Sigma Aldrich, 77332). The reactions were incubated at 30 °C for 45 min, after which the sample was divided into aliquots of 70 µl each. To each aliquot, NADPH was added to a concentration of 200 µM and an ADH to a concentration of 1 µM, in a final volume of 100 µl. The reactions were incubated for 2 h, and 3 µl samples were taken at different time points to monitor the progression of the reactions. The 3 µl samples were quenched in 97 µl of MeOH, filtered and analyzed by UPLC–MS on a Thermo Ultimate 3000 chromatographic system coupled to a Bruker Impact II mass spectrometer. Separation was performed on a Phenomenex Kinetex C18 100 Å column (100 × 2.10 mm, 2.7 µm), and the binary solvent system consisted of ACN and 0.1% formic acid in water. The elution program was as follows: 0–1 min, 10% ACN; linear gradient to 30% ACN in 5 min; column wash at 100% ACN for 1.5 min and then re-equilibration at 10% ACN for 2.5 min. The flow rate was 600 µl min−1.
Data were analyzed using the Bruker Data Analysis software. Quantification of the AHVB produced during the reactions was performed using external calibration curves and used to calculate the specific activity of the enzymes. The experiments were performed in triplicate. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Data supporting the findings of this research are available within the Article, Extended Data, Source Data, Supplementary Tables and Supplementary Information. Sequences of the genes THAS1 (KM524258.1), THAS2 (KU865323.1), ADH20 (KU865330.1), ADH32 (AYE56096.1) and ADH92 (ON911573) are available from GenBank. All sequencing data associated with this study are available at the National Center for Biotechnology Information Sequence Read Archive under BioProject PRJNA847226. Large files, including the gene expression abundances from the bulk mRNA-seq, leaf and root scRNA-seq, genome assembly and genome annotation, are available via the Dryad Digital Repository under . Source data are provided in this paper. Code availability All custom code used to generate figures can be found at . | Plants are impressive in their diversity, but especially in the variety of metabolites they produce. Many plant natural products are highly complex molecules, such as the alkaloids vincristine and vinblastine, which are produced by the Madagascar periwinkle Catharanthus roseus. These two substances are already indispensable in cancer therapy. Researchers are very interested in finding out which individual biosynthetic steps are required to form the complex molecules. "Currently, these compounds are still obtained in very small quantities from the plant's leaf extract. We can learn from the plant how this compound is produced and use this knowledge to develop production systems that are more cost-effective, scalable and sustainable," said first author Chenxin Li of the University of Georgia's Center for Applied Genetic Technologies, describing the research goal. The study is published in the journal Nature Chemical Biology. Assigning genetic and metabolic information to individual cells of plant organs The scientists know that gene activity is not the same in all cells of a plant and that the chemistry can differ drastically from cell to cell. Therefore, the goal of the current study was to use a new set of methods, collectively termed single-cell omics, to investigate specialized and rare cell types that play a central role in the biosynthesis of plant natural products and whose signals are often obscured by more abundant cell types in plant organs. "With single-cell omics, we have a method that allows researchers to assign genetic and metabolic information to individual cells. The term omics refers to the fact that an entire collection of genes or metabolites is quantified and analyzed," says Lorenzo Caputi, head of the Alkaloid Biosynthesis Project Group in the Department of Natural Product Biosynthesis in Jena and one of the lead authors. Biosynthetic pathway of vinblastine is organized in three distinct cell types As the analyses showed, the entire biosynthetic pathway for the alkaloid vinblastine is organized in three stages and three discrete cell types. "The first stage is expressed exclusively in specialized cells associated with vascular bundles in the leaf, called IPAP.
The second stage of the biosynthetic pathway is expressed only in cells of the epidermis, the layer of cells that covers the leaves, and the last known steps of the biosynthetic pathway are expressed exclusively in idioblasts, a rare cell type of the leaf," says Chenxin Li. The researchers measured the concentrations of several intermediates in the metabolic pathway for vinblastine in single cells and were surprised. "Two important precursors of vinblastine, catharanthine and vindoline, occur in the idioblast cells at millimolar concentrations, about three orders of magnitude higher than vinblastine itself. The concentration of the two precursors in these cells was much higher than we expected and even exceeded their concentrations in whole-organ extracts. However, this observation makes sense in that catharanthine and vindoline were found only in the rare idioblast cells. The abundant other cells in the leaf dilute the high concentration when whole leaves are crushed," says Sarah O'Connor, head of the Department of Natural Product Biosynthesis. The research team is confident that the organization of biosynthetic pathways for medicinally relevant alkaloids in Catharanthus roseus is not an isolated phenomenon. "We are just beginning to understand how and why such a cell-type-specific organization exists. In addition, analysis of genes expressed simultaneously in a particular cell type has helped us identify new players in this metabolic pathway. The same technique can be used to study the biosynthesis of many other natural products. "Finally, the exact sites of accumulation of plant compounds, such as the epidermis, the vascular system or latex ducts, can help us hypothesize about the ecological roles of natural products. For example, depending on the pattern of accumulation, the compounds may be more effective against biting insects than against sap-sucking insects," says Robin Buell, professor at the University of Georgia. A better understanding of the biosynthetic pathways of the anti-cancer drugs vincristine and vinblastine may also help to produce or harvest these compounds more effectively in the long term. The methods described here are also promising for the study of many other interesting and medically important natural products from the plant kingdom. The approach described here will help to narrow down these rare and specialized cells and uncover the gene activities and chemistry that are exclusive to them. | 10.1038/s41589-023-01327-0
Biology | Changes in the immune system lead to success | Martin Malmstrøm et al. Evolution of the immune system influences speciation rates in teleost fishes, Nature Genetics (2016). DOI: 10.1038/ng.3645 Journal information: Nature Genetics | http://dx.doi.org/10.1038/ng.3645 | https://phys.org/news/2016-08-immune-success.html | Abstract Teleost fishes constitute the most species-rich vertebrate clade and exhibit extensive genetic and phenotypic variation, including diverse immune defense strategies. The genomic basis of a particularly aberrant strategy is exemplified by Atlantic cod, in which a loss of major histocompatibility complex (MHC) II functionality coincides with a marked expansion of MHC I genes. Through low-coverage genome sequencing (9–39×), assembly and comparative analyses for 66 teleost species, we show here that MHC II is missing in the entire Gadiformes lineage and thus was lost once in their common ancestor. In contrast, we find that MHC I gene expansions have occurred multiple times, both inside and outside this clade. Moreover, we identify an association between high MHC I copy number and elevated speciation rates using trait-dependent diversification models. Our results extend current understanding of the plasticity of the adaptive immune system and suggest an important role for immune-related genes in animal diversification. Main With over 32,000 extant species 1 , teleost fishes comprise the majority of vertebrate species. Their taxonomic diversity is matched by extensive genetic and phenotypic variation, including novel immunological strategies. Although the functionality of the adaptive immune system has been considered to be conserved since its emergence in the ancestor of all jawed vertebrates 2 , 3 , fundamental modifications of the immune gene repertoire have recently been reported in teleosts 4 , 5 , 6 , 7 . One of the most dramatic changes has occurred in Atlantic cod ( Gadus morhua ), involving complete loss of the MHC II pathway that is otherwise responsible for the detection of bacterial pathogens in vertebrates 4 . Moreover, this loss is accompanied by a substantially enlarged repertoire of MHC I genes, which normally encode molecules for protection against viral pathogens. It has thus been hypothesized that the expanded MHC I repertoire of cod evolved as a compensatory mechanism, whereby broader MHC I functionality makes up for the initial loss of MHC II (refs. 4 , 6 ). However, the questions of how and when MHC II was lost relative to the MHC I expansion, and whether these genomic modifications are causally related, have so far remained unresolved. As key components of the vertebrate adaptive immune system, the complex MHC pathways and their functionality are now well characterized 8 , 9 , 10 , but less is known about the causes of MHC copy number variation, which poses an immunological tradeoff 11 , 12 . Although an increase in the number of MHC genes facilitates pathogen detection, it will also decrease the number of circulating T cells 13 , 14 , 15 , 16 , resulting in an immune system that can detect a large number of pathogens at the expense of being less efficient in removing them. The evolution of MHC copy numbers is therefore likely driven toward intermediate optima determined by a tradeoff between detection and elimination of pathogens—as suggested by selection for 5–10 copies inferred in case studies of fish 17 , 18 and birds 19 . 
Because pathogen load and the associated selective pressures vary between habitats, the optimal number of MHC copies depends on the environment 20 , 21 , 22 . As a result, interbreeding between different locally adapted populations is expected to produce hybrids with excess (above optimal) MHC diversity that are characterized by T cell deprivation and low fitness. This process would introduce postzygotic reproductive isolation and promote reinforcement of premating isolation between the populations. Consequently, MHC genes have been suggested to have an important role in speciation 22 , 23 , but, to our knowledge, this role has never been tested comparatively in a macroevolutionary context. Here we report comparative analyses of 76 teleost species, of which 66 were sequenced to produce partial draft genome assemblies, including 27 representatives of cod-like fishes within the order Gadiformes. First, we use phylogenomic analysis to resolve standing controversy regarding early-teleost divergences and to firmly establish the relationships of gadiform fishes. Second, by analyzing the presence and absence of key genes in the vertebrate adaptive immune system, we place the loss of MHC II functionality in the common ancestor of all Gadiformes; by time calibrating the teleost phylogeny on the basis of fossil sampling rate estimation, we further show that this loss occurred 105–85 million years ago. Third, we demonstrate that expansions of the MHC I gene repertoire are not limited to Gadiformes but are also found in three other clades, including the exceptionally species-rich group of percomorph fishes (Percomorphaceae). By applying phylogenetic comparative methods and ancestral state reconstruction, we trace the evolutionary history of MHC I gene expansions and infer distinct copy number optima in each of them. Finally, using trait-dependent diversification models, we identify a positive association between MHC I gene copy number and speciation rate. Our results highlight the plasticity of the vertebrate adaptive immune system and support the role of MHC genes as 'speciation genes', promoting rapid diversification in teleost fishes. Results Sequencing and draft assembly of 66 teleost genomes To investigate the evolution of the immune gene repertoire of teleost fishes, we selected representatives of all major lineages in the Neoteleostei 24 for low-coverage genome sequencing. As both loss and expansion of MHC genes have been reported in the Gadiformes 4 , we sampled this order more densely with 27 species representing most gadiform families and subfamilies ( Supplementary Table 1 ) 25 . The sequencing strategy for de novo assembly of the 66 genomes was designed on the basis of gene space completeness and assembly statistics of simulated genomes ( Supplementary Note ). For each species, a single paired-end library was sequenced to 9–39× coverage on the Illumina HiSeq 2000 platform ( Supplementary Table 2 and Supplementary Note ). The genome sequences were assembled with the Celera Assembler 26 , resulting in N50 scaffold sizes between 3.2 and 71 kb ( Supplementary Table 3 and Supplementary Note ). For most species, we recovered more than 75% of the conserved eukaryotic genes included in the CEGMA 27 analysis, and on average we recovered 74% of the conserved genes of the Actinopterygii data set included in the BUSCO 28 analysis. 
Collectively, this is indicative of a sufficiently high degree of gene space completeness for gene detection in our partial draft genome assemblies ( Supplementary Table 3 and Supplementary Note ). A genome-scale phylogeny of teleost fishes To firmly establish teleost and gadiform relationships, we performed phylogenomic analyses using a stringently filtered data set of 567 exon orthologs from 111 genes, identified in the 66 new draft genome assemblies as well as in 10 previously published genomes. After filtering ( Supplementary Note ), the phylogenomic data set included 71,418 bp, which was used for time calibration and coalescent-based species tree analyses. To establish a timeline of teleost diversification and to determine the timing of immune gene repertoire alterations, we performed Bayesian phylogenetic inference with the concatenated phylogenomic data set, using a relaxed-clock model and 17 fossil constraints ( Supplementary Figs. 1 and 2 , Supplementary Tables 4 and 5 , and Supplementary Note ). The time-calibrated species tree was used to test for incomplete lineage sorting among the species sampled for our phylogeny. Following Jarvis et al . 29 , we mapped genomic insertions and deletions on the rooted topology and found no correlation between branch lengths and uniquely mapped indels. This lack of correlation indicates that incomplete lineage sorting did not substantially affect divergences between the teleost lineages included in our phylogeny 29 and supports concatenation as the most appropriate strategy for species phylogeny estimation with our data ( Supplementary Note ). The resulting time tree ( Fig. 1 ) supports a monophyletic clade including the orders Gadiformes, Stylephoriformes, Zeiformes, Polymixiiformes and Percopsiformes, thus corroborating the Paracanthopterygii sensu Miya et al . 30 . We further confirm the placement of Stylephorus chordatus as the closest extant relative of the Gadiformes 30 , 31 and estimate the crown age of this order at approximately 85 million years ago. Within Gadiformes, we find support for a sister group relationship between Bregmacerotidae and all other gadiform families, which began to radiate around 70 million years ago. Figure 1: Time-calibrated phylogeny of 76 teleost species. Dashed branches correspond to species for which genome data were already available in Ensembl. All nodes are supported by Bayesian posterior probability of 1.0 except where noted ( Supplementary Table 5 ). Clade names to the right and on branches are given according to Betancur-R et al . 24 . Branch colors represent taxonomy at the level of orders. Teleost illustrations are by G. Holm. Source data Full size image Loss of MHC II pathway genes To assess the origin and extent of the MHC II pathway gene loss observed in Atlantic cod, we investigated the immune gene repertoires of the teleost genomes through a comparative gene mining pipeline comprising BLAST searches, prediction of ORFs and annotation ( Supplementary Note ). We explicitly investigated the presence of 27 genes chosen for their central role in MHC class I, MHC class II and cross-presentation pathways 8 , 10 . In addition, three highly conserved genes were included as a control ( Fig. 2 ). 
As query sequences, we used orthologs from zebrafish ( Danio rerio ), medaka ( Oryzias latipes ), spotted green pufferfish ( Tetraodon nigroviridis ), fugu ( Takifugu rubripes ), three-spined stickleback ( Gasterosteus aculeatus ), Nile tilapia ( Oreochromis niloticus ), Amazon molly ( Poecilia formosa ), platyfish ( Xiphophorus maculatus ), cavefish ( Astyanax mexicanus ) and Atlantic cod ( G. morhua ) ( Supplementary Table 6 ). Figure 2: Immune gene repertoire and estimated MHC I copy number in the draft genome assemblies of 66 teleost species. ( a ) Presence of key genes in MHC I, MHC II and cross-presentation pathways for recognition of pathogen-derived antigens. Genes not detected are indicated by white squares. Colors reflect order-level taxonomy, and the cladistic representation follows that in Figure 1 . ( b ) Copy number estimates for non-classical MHC I Z-lineage (light gray) and U-lineage (dark gray) genes. Error bars, 95% confidence intervals based on 1,000 bootstrap replicates ( Supplementary Note ). Source data Full size image The genome-based phylogeny allowed us to place the immune gene characterization into an evolutionary perspective. Whereas the three control genes were identified in all species, we found that genes associated with the MHC II pathway were consistently missing in all inspected Gadiformes draft genome assemblies, as no orthologs could be identified for invariant chain (also known as cd74 ), cd4 and the MHC II α and β chains ( Fig. 2a and Supplementary Table 7 ). This finding reiterates the previous observation in the Atlantic cod genome, where lack of these genes was confirmed through qPCR and synteny analyses 4 . Although individual genes could not be detected in a subset of species outside the Gadiformes, these occurrences comprise a minority ( ∼ 3%) of the total number of comparisons. Notably, these losses did not show any phylogenetic pattern and hence are more likely to have resulted from incomplete genome assembly ( Supplementary Note ). As MHC II pathway genes are otherwise ubiquitous among vertebrates, the observed pattern shows that these genes were lost collectively in the common ancestor of all Gadiformes, following its divergence from Stylephoriformes at approximately 105 million years ago. This result implies that MHC II gene loss is shared by all 616 extant species of gadiform fishes. These taxa inhabit a wide variety of habitats, showing that their alternative immune system is highly versatile and not restricted to specific ecological niches. MHC I copy number variation MHC I molecules exist as five distinct lineages—L, S, P, U and Z—with the latter two comprising predominantly 'classical' (peptide-binding) MHC I molecules involved in antigen presentation 32 , 33 , 34 , 35 . The Atlantic cod genome has been shown to harbor 80–100 copies of MHC I (ref. 4 ), which is 15 times higher than the copy number determined to be optimal in the three-spined stickleback 14 , 17 . To investigate the evolutionary origin of the gadiform MHC I expansion and potentially detect other expansions, we estimated the MHC I copy numbers of the two peptide-binding lineages (U and Z) in the 66 sequenced species. MHC I copy number estimates were calculated on the basis of Illumina raw sequencing reads relative to a set of 19 putatively single-copy reference regions. For both U- and Z-lineage genes, we determined the number of matching raw reads by aligning them to conserved regions of the MHC I genes (α3 domain) and the set of reference genes.
The uncertainties of all copy number estimates were assessed with a double-bootstrapping procedure ( Supplementary Tables 3 and 8 , 9 , 10 , 11 , and Supplementary Note ). Our copy number estimates were cross-validated with previous results for Atlantic cod based on qPCR 4 and transcribed MHC I U-lineage genes 6 . The copy number estimation procedure was further validated by estimating the relatively conserved number of Hox gene copies for all species. Estimated Hox gene copy numbers (mean = 50.3, s.d. = 11.4) were in agreement with expectations for teleost fishes 36 and were uncorrelated with MHC I gene copy numbers ( R 2 = 0.002) ( Supplementary Fig. 3 , Supplementary Table 3 and Supplementary Note ). Mean copy number estimates for all species are shown in Figure 2b . Extremely high copy numbers of MHC I U-lineage genes were detected in Gadiformes, where several species had around 100 copies, followed by species within Percomorphaceae with up to 80 copies. Within Gadiformes, high copy numbers above 40 were observed in as many as 12 species. Although we also observed low copy numbers in a limited number of gadiform species (for example, five copies in Trisopterus minutus ), nearly all representatives of this order appear to share an expanded MHC I gene repertoire with 15–30 copies of the U-lineage genes. Such gene family expansions may promote biological diversification by introducing new raw genetic material, potentially resulting in sub- or neofunctionalization and thus novel immunological pathways. In this regard, we identified the cytoplasmic sorting motifs presumed to be part of a novel pathway enhancing cross-presentation functionality in Atlantic cod 6 , 37 , 38 in five additional species of the Gadinae as well. Because the origin of these signaling motifs only dates back to about 15 million years ago, it is likely that the enhanced cross-presentation functionality was not part of a preexisting machinery that could have favored the loss of MHC class II ( Supplementary Fig. 4 , Supplementary Table 12 and Supplementary Note ). Two hypotheses have been put forward as possible explanations for the loss of MHC II: a metabolic gain by not maintaining a costly system or a functional shift in the immune gene repertoire, rendering it obsolete 39 . Whether the loss occurred before or following the expansion of MHC I genes is key in discriminating between these hypotheses. In contrast to all other Gadiformes, the most basal gadiform lineage, represented by Bregmaceros cantori , is characterized by a complete absence of MHC I U-lineage genes, in addition to MHC II loss. The only antigen-presenting molecules detected in Bregmacerotidae are encoded by the MHC I Z-lineage genes, of which 2–3 copies are still present. With minor exceptions, the Z-lineage genes were found in low copy numbers in all species, and, contrary to the U-lineage genes, they did not show a pattern of clade-specific expansions. Interestingly, the myctophiform Benthosema glaciale showed convergent loss of the MHC I U-lineage genes, and both Bregmaceros and Benthosema have experienced an additional loss of two genes involved in MHC I interactions with T cells ( cd8a and cd8b ), rendering the traditional pathway for endogenously derived pathogens non-functional as well ( Fig. 2 ). The lack of MHC I U-lineage genes in B. cantori indicates that MHC I expansions in Gadiformes occurred after the divergence of Bregmacerotidae and thus subsequent to the loss of MHC II.
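The read-count ratio behind these copy-number estimates is simple to sketch. The following minimal Python illustration (ours, not the authors' pipeline; the counts are hypothetical, and the published procedure additionally uses several BLAST stringencies and a full double bootstrap) estimates a copy number from reads hitting the conserved α3-domain target relative to a panel of putatively single-copy reference regions:

    import random

    def copy_number(target_hits, target_len, ref_hits, ref_lens):
        # copy number = target read depth / median depth of single-copy refs
        target_depth = target_hits / target_len
        ref_depths = sorted(h / l for h, l in zip(ref_hits, ref_lens))
        return target_depth / ref_depths[len(ref_depths) // 2]

    def bootstrap_ci(target_hits, target_len, ref_hits, ref_lens, n=1000):
        # resample the reference panel; the paper's double bootstrap also
        # resamples reads, so this single-level version is a simplification
        est = []
        for _ in range(n):
            idx = [random.randrange(len(ref_hits)) for _ in ref_hits]
            est.append(copy_number(target_hits, target_len,
                                   [ref_hits[i] for i in idx],
                                   [ref_lens[i] for i in idx]))
        est.sort()
        return est[int(0.025 * n)], est[int(0.975 * n)]

    # hypothetical counts: reads hitting a 270 bp alpha-3 target region
    # versus a panel of 14 same-sized reference regions
    print(copy_number(5400, 270, [200] * 14, [270] * 14))        # -> ~27.0
    print(bootstrap_ci(5400, 270, [190 + i for i in range(14)], [270] * 14))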
Rate shifts in MHC I copy number evolution It has previously been shown that MHC copy number and molecular diversity are linked to the habitat that a species or population occupies 21 , 22 . Our data show that the species-rich clades Gadiformes and Percomorphaceae, which are both distributed across a wide variety of habitats, contain the highest numbers of MHC I gene copies among teleosts. We therefore used phylogenetic comparative methods to test for putative adaptive evolution of MHC I copy numbers in all groups included in our phylogeny. We used a Markov chain Monte Carlo (MCMC)-based reversible-jump Bayesian approach to fit multiple-regime Ornstein–Uhlenbeck models in which trait evolution is directed toward different optimal values in different parts of the tree 40 . This approach identified the probability and direction of rate shifts in MHC I copy number optima on branches in the phylogeny ( Supplementary Fig. 5 and Supplementary Note ). On the basis of the inferred branch-specific rate shift probabilities and the presence of copy number outliers, we tested explicit hypotheses for rate shift combinations in a likelihood framework 41 , 42 . We found that the overall best fitting model included five shifts of the MHC copy number optimum ( Fig. 3a , Supplementary Table 13 and Supplementary Note ). This model is characterized by a phylogenetic half-life of 23 million years, a stationary variance of 0.38 squared log copy numbers and an optimum of 6.8 MHC I copies for basal branches of the phylogeny, which is in concordance with the hypothesized MHC I repertoire of early gnathostomes 43 . In contrast, substantially higher optimal MHC I copy numbers of up to 58 copies were inferred within gadiform and percomorph fishes. The phylogenetic half-life of 23 million years shows that the MHC optima are approached slowly and that there can be considerable lag in adaptation. The multiple-optima model vastly outperformed alternatives such as Brownian motion, white noise, single-peaked Ornstein–Uhlenbeck and early-burst models. This finding corroborates the hypothesis that MHC I copy number evolution is characterized by selection toward intermediate optima, resulting from a tradeoff between detection and elimination of pathogens. Figure 3: MHC class I copy number evolution and diversification rate analysis. ( a ) 3D representation of teleost phylogeny ( Fig. 1 ) and MHC I copy number evolution, based on ancestral state reconstruction with the best fitting Ornstein–Uhlenbeck model. This model supports five shifts toward elevated copy number optima, marked by colored dots: 25.2 copies in Percomorphaceae excluding Ophidiiformes, 41.5 in Berycoidei, 22.1 in Polymixiiformes and Percopsiformes, 19.8 in Gadiformes excluding Bregmacerotidae, and 57.5 in Gadiformes excluding Bregmacerotidae and Gadidae. Loss of MHC II in the common ancestor of Gadiformes is represented by a black dot. The ordering of species is identical to that in Figure 2 . Gray shading visualizes reconstructed MHC I copy numbers along branches. Steps of 20 copies are indicated by light gray lines on the 3D reconstruction. ( b ) Per-species MHC I copy numbers and mean net diversification rate estimates, connected according to phylogenetic relationships and colored as in a . ( c ) Diversification rate estimates for lineages with high and low MHC I copy numbers, based on BiSSE analyses with incrementing thresholds between high and low copy numbers. 
Black lines represent mean estimates and gray shading represents standard deviation from 25 replicate analyses. ( d ) Improvement in log-transformed likelihood when speciation rates are unlinked for lineages with high and low copy numbers, in the analyses performed in c . The black line corresponds to the median and gray shading represents 0.05 and 0.95 quantiles from replicate analyses. Source data Full size image Diversification rate analyses The Gadiformes comprise 616 extant species 1 and are thus the most species-rich order within Paracanthopterygii, where Stylephoriformes are represented by a single species and Zeiformes, Percopsiformes and Polymixiiformes collectively include 56 species. To investigate whether this pattern of diversity is the result of an elevated speciation rate in Gadiformes, we tested for differences in diversification among all clades included in our phylogenetic data set using a Bayesian framework ( Supplementary Figs. 6 , 7 , 8 and Supplementary Note ). For this analysis, we identified a set of 37 mutually exclusive clades that represent almost the entire extant diversity of the teleost supercohort Clupeocephala 24 . Assuming constant speciation and extinction rates within specific shift regimes, we identified two major shifts in diversification rates, one at the base of Gadiformes and a second within the taxonomically diverse Percomorphaceae 24 , that is, in the two clades featuring particularly high MHC I copy numbers ( Fig. 3a ). A comparison of per-species MHC I copy numbers and mean net diversification rates ( Fig. 3b ) indicates a relationship between the two measures. To quantify the association between MHC I copy numbers and rates of diversification, we carried out binary state speciation and extinction (BiSSE) analysis to estimate differences in diversification rate between lineages with high and low MHC I copy numbers ( Supplementary Note ). We found that diversification rates differed most when the threshold was placed between 20 and 25 copies ( Fig. 3c ). With a threshold in this range, the model with two separate speciation rates for lineages with high and low copy numbers was better supported than a model with a single speciation rate parameter (ΔAIC > 37.1; Fig. 3d and Supplementary Note ). These results suggest that the influence of MHC I genes on speciation rates is stronger in species that have already evolved at least 20 copies. Discussion Teleost fishes are characterized by striking differences in species richness across different lineages, with some groups, such as cyprinids and cichlids, containing thousands of species, while others include only a single or very few taxa. Although these differences can in part be explained by ecological opportunity 44 or key innovations 45 , a large proportion of lineage-specific variation in taxonomic diversity remains unexplained. It has previously been suggested that MHC genes can influence speciation rates through selection against hybrids with higher than optimal MHC copy numbers and consequent reinforcement 22 , 23 . For species with more MHC copies, the extra number of alleles in hybrids will be comparatively higher, and these species should therefore experience greater fitness reduction and stronger assortative mating to maintain co-adapted genes. The proposed role of MHC genes as speciation genes 23 , promoting diversification and the maintenance of recently diverged species, is therefore expected to be more pronounced in species exhibiting high copy numbers.
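To make the threshold comparison concrete, the sketch below (our schematic illustration, not the diversitree-based analysis itself; the copy numbers and the likelihood function are placeholders) assigns each species a binary state at each of the 26 thresholds between 10 and 60 copies used in the Methods and scores the AIC improvement from unlinking the speciation rates:

    import numpy as np

    rng = np.random.default_rng(1)
    copy_numbers = rng.integers(2, 100, size=66)   # placeholder estimates

    def fit_lnl(states, two_rates):
        # stand-in for a BiSSE maximum-likelihood fit; the paper fits this
        # with the diversitree R package on 25 stochastically resolved trees
        return -500.0 + (8.0 if two_rates else 0.0) * states.mean()

    def delta_aic(lnl_one, lnl_two, extra_params=1):
        # AIC = 2k - 2 lnL; positive values favor the two-rate model
        return 2.0 * (lnl_two - lnl_one) - 2.0 * extra_params

    for t in np.linspace(10, 60, 26):              # 26 equally spaced thresholds
        states = (copy_numbers >= t).astype(int)   # 1 = 'high' MHC I copy number
        print(f"threshold {t:5.1f}  dAIC "
              f"{delta_aic(fit_lnl(states, False), fit_lnl(states, True)):7.2f}")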
The identification of an MHC I copy number threshold above which speciation rates are accelerated expands on this conceptual framework. Our analyses identify this threshold at 20–25 MHC I copies, suggesting that the effect of T cell depletion on hybrid fitness becomes more pronounced in this range and that this might affect mate choice in species with copy numbers above this threshold, promoting inbreeding and reinforcement. By pinpointing the loss of MHC II pathway genes to the common ancestor of Gadiformes and by reconstructing MHC I copy number evolution, we show that loss of MHC II predated the MHC I expansions that occurred within this group. This implies that MHC II functionality was not outcompeted by a preexisting alternative immunological strategy such as an expanded MHC I repertoire but may instead have been lost as a result of the metabolic costs associated with its maintenance 39 . The temporal relationship between the loss and expansions in Gadiformes and the identification of putative cross-presentation signaling motifs in Atlantic cod 6 , as well as in several other gadiform species, indicate that a number of genes in the expanded MHC I repertoire evolved to compensate for loss of MHC II. Nevertheless, marked expansions of MHC I genes were also observed outside of Gadiformes in groups with intact MHC II pathways. These independent expansions clearly show that loss of MHC II is not the only potential trigger for MHC I expansion but that MHC copy number evolution has also been driven by other factors, highlighting the extensive plasticity of the teleost adaptive immune system. The 66 new teleost draft genome assemblies provide unprecedented opportunities for comparative analyses of the largest clade of vertebrates. We have used this genomic information to unravel the evolutionary history of key immune genes and to show how MHC gene composition in teleost fishes has influenced diversification rates in this diverse vertebrate lineage. Methods Tissues, sequencing and assembly. Genomic DNA was obtained from various tissues of the different species in this study. Most tissue samples were provided by museums and other collections, while some come from commercially caught fish in collaboration with local fishermen (see Supplementary Table 1 for a full list of tissues and contributors). A single paired-end library, with an insert size of ∼ 400 bp, was created for each species, using the Illumina TruSeq Sample Prep v2 Low-Throughput protocol. All species were sequenced (2 × 150 bp) to >9× coverage on the Illumina HiSeq 2000 platform, and sequences were assembled using the Celera Assembler 26 ( Supplementary Note ). Draft genome assembly quality, in terms of gene space completeness, was assessed using CEGMA 27 and BUSCO 28 ( Supplementary Table 3 and Supplementary Note ). Phylogenetic inference. Strict filtering criteria were applied for the identification of suitable orthologous phylogenetic markers. For the 33,737 annotated zebrafish genes in release 78 of the Ensembl database 46 , we selected the longest transcript if it had at least five stop-codon-free exons of 150 bp or greater in length. We removed genes that could not be assigned to an Ensembl gene tree and genes for which teleost fishes did not form a monophyletic group in the gene tree. We further excluded genes for which the Ensembl gene tree indicated gene duplications among teleosts or did not include all ten teleost species of Ensembl v.78 ( Supplementary Note ). 
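For concreteness, the marker-selection criteria just described can be expressed as a simple filter (a schematic sketch with hypothetical field names, not the authors' actual pipeline, which queried Ensembl release 78 directly):

    from dataclasses import dataclass

    @dataclass
    class GeneRecord:
        # hypothetical container for the Ensembl-derived fields used below
        exon_lengths: list        # exon lengths of the longest transcript, bp
        exon_has_stop: list       # True if the exon contains a stop codon
        in_gene_tree: bool
        teleosts_monophyletic: bool
        teleost_duplication: bool
        n_teleost_species: int

    def keep_as_marker(g: GeneRecord) -> bool:
        good_exons = sum(1 for length, stop in zip(g.exon_lengths, g.exon_has_stop)
                         if length >= 150 and not stop)
        return (good_exons >= 5
                and g.in_gene_tree
                and g.teleosts_monophyletic
                and not g.teleost_duplication
                and g.n_teleost_species == 10)   # all ten Ensembl v.78 teleosts

    print(keep_as_marker(GeneRecord([200] * 6, [False] * 6, True, True, False, 10)))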
The ten teleost reference genomes of Ensembl were used to calculate TBLASTN bitscores for each of the zebrafish exons, using the BLAST+ v.2.2.29 suite of tools 47 . Exon-specific bitscore thresholds for putative orthologs were defined on the basis of this bitscore information, and exons for which two or more of the known orthologs had bitscores lower than this threshold were excluded. Genes with fewer than five remaining exons were discarded, which resulted in a set of 2,251 exon sequences of 302 zebrafish genes that were then used as queries in TBLASTN searches against each of the 66 new teleost draft genome assemblies, the 10 Ensembl teleost genomes and the genome sequence of salmon 48 . For each species, the best hits were accepted as putative orthologs if their TBLASTN bitscores were above the exon-specific bitscore threshold ( Supplementary Note ). Alignments of TBLASTN hits for the 2,251 exons were further filtered on the basis of ratios of nonsynonymous to synonymous substitutions (dN/dS) determined with the software codeml of the PAML 4 package 49 to exclude exons putatively under positive selection. Unreliably aligned sites were identified with the software BMGE v.1.0 (ref. 50 ) and excluded from the alignments. We further removed all third codon positions from the alignments and excluded exons with high variation in GC content ( Supplementary Note ). For each gene, the concordance of individual exon trees was assessed with the software Concaterpillar v.1.7.2 (ref. 51 ), and we removed either individual exon alignments or all exon alignments for a given gene from the data set, depending on the number of congruent exon trees ( Supplementary Note ). To filter for genes with clock-like evolution, we estimated the coefficient of variation of rates of each gene with the software BEAST v.2.2.0 (ref. 52 ) and removed the genes with the highest estimated rate variation coefficients ( Supplementary Note ). After this step, our strictly filtered data set used for phylogenetic inference contained 567 exons from 111 genes, with a total alignment length of 71,418 bp and 7.3% missing data. To assess the consequences of strict filtering on phylogenetic inference, we compared maximum-likelihood phylogenies based on the strictly filtered data set (111 genes, 71,418 bp, 7.3% missing data) with phylogenies based on a data set that was substantially larger but less strictly filtered (302 genes, 252,442 bp, 18.2% missing data). Maximum-likelihood phylogenies for both data sets were inferred with the software RAxML v. 8.1.17 (ref. 53 ) ( Supplementary Fig. 1 and Supplementary Note ). The strictly filtered data set was further used to estimate the timeline of teleost diversification with the software BEAST v.2.2 (ref. 52 ). Calibration densities for time calibration were calculated with the BEAST add-on CladeAge 54 , on the basis of estimates of diversification rates and the fossil sampling rate. The earliest fossil occurrences of 17 clades in our phylogeny were identified and used to constrain the ages of these clades with CladeAge calibration densities, taking into account the uncertainties in the ages of these fossils ( Supplementary Note ). We further used coalescent-based species tree estimation to test for potentially misleading phylogenetic signal due to incomplete lineage sorting. This analysis was conducted both with individual gene trees and with trees based on alignments binned according to the binning approach of Mirarab et al . 55 . 
Maximum-likelihood trees produced with RAxML and maximum-clade-credibility trees resulting from BEAST analyses of binned and unbinned genes were used for species tree inference with the software ASTRAL v.4.7.8 (ref. 56 ) ( Supplementary Fig. 2 , Supplementary Table 5 and Supplementary Note ). To test for incomplete lineage sorting among the taxa included in our phylogeny, we compared branch lengths and the proportion of synapomorphic indels supporting each branch, following Jarvis et al . 29 ( Supplementary Note ). Gene mining of draft genome assemblies. All draft genome assemblies were mined for genetic content on the unitig (UTG) assembly level, as assembly parameters are stricter for UTGs than for contigs or scaffolds. The presence or absence of each gene was determined through an automated pipeline, using full-length amino acid sequences for 27 immune-related genes and 3 control genes, from ten teleost reference genomes (Ensembl gene identifiers are listed in Supplementary Table 6 ). Potential orthologs were detected using TBLASTN with an acceptance level of e value = 1 × 10 −10 and evaluated through identification of ORFs predicted by the software Genescan 57 . All ORFs were then aligned to the UniProt database ( Supplementary Note ), and reciprocal TBLASTN hits were recorded as potentially correct if their e value was below 1 × 10 −10 . All recorded annotations for each gene were then manually inspected, and the best hit is reported (see the Supplementary Note for details and Supplementary Table 7 for the location of each identified ortholog). Copy number estimation of MHC I genes. High sequence similarity and conserved regions make the different MHC I genes difficult to assemble correctly. To estimate the number of copies of these genes in each of the sequenced genomes, we applied a new method for copy number estimation, based on a comparison of raw read counts for target and reference sequences. For MHC I U- and Z-lineage genes, we used 270 bp of the conserved α3 domain as the target and equivalently sized fragments from 14 single- or low-copy genes as references (see Supplementary Table 9 for a full overview of all reference regions). MHC I target sequences were prepared through consensus by majority for all hits detected in the individual draft genome assemblies with TBLASTN ( e -value cutoff set to 1 × 10 −5 ) using U- and Z-lineage MHC I α3-domain sequences from ten teleost reference genomes as queries. The number of copies of each of the target genes was determined on the basis of the number of unique sequencing reads mapping to this region, relative to the number of reads matching each of the reference gene regions. The copy numbers of each of the reference gene regions were estimated first, using an iterative method and four different BLAST stringencies. Not all reference regions fulfilled our criteria, and some references were discarded for some species (see the Supplementary Note for details and Supplementary Table 11 for a full list of the references used for each species). Copy numbers for both MHC I lineages were then estimated by comparing the number of raw reads matching both the target and reference sequences and taking estimated genome size, coverage variation and total number of reads into account. The uncertainties of all copy number estimates were assessed with a double-bootstrapping procedure ( Supplementary Note ). Rate shifts in MHC I copy number evolution. 
Phylogenetic signal in MHC I copy number evolution was assessed with Blomberg's K statistic 58 , calculated using the phylosignal function of the picante R package v.1.6-2 (ref. 59 ), and with Pagel's lambda 60 , calculated with function phylosig of the phytools R package v.0.4-45 (ref. 61 ) ( Supplementary Note ). The fits of four general models of trait evolution were compared on the basis of their sample-size-corrected Akaike information criterion (AICc), using the function fitContinuous of the geiger R package v.2.0.3 (ref. 62 ): a white noise model, a Brownian motion model, an early-burst model 63 and a single-peak Ornstein–Uhlenbeck model 40 , 64 ( Supplementary Note ). The reversible-jump Bayesian approach of the bayou R package v.1.0.1 (ref. 65 ) was used to perform MCMC sampling of locations, magnitudes and numbers of shifts in multiple-optima Ornstein–Uhlenbeck models ( Supplementary Fig. 5 and Supplementary Note ). On the basis of the results of the bayou analysis, explicit hypotheses for shift combinations were tested in a likelihood framework, using the SLOUCH R package 41 , 42 . For each shift combination, the likelihood of the best fitting combination of optimum, half-life and stationary variance was recorded and used for model comparison based on AICc scores ( Supplementary Table 13 and Supplementary Note ). The ancestral states of log-transformed MHC I copy numbers were reconstructed for internal nodes of the time-calibrated phylogeny, on the basis of the best fitting Ornstein–Uhlenbeck model ( Supplementary Note ). Diversification rate analyses. Patterns of species diversification were analyzed with the Bayesian framework implemented in BAMM v.2.2.0 (ref. 66 ), on the basis of the time-calibrated phylogeny and counts of species richness in each of the 37 mutually exclusive clades of teleost fishes ( Supplementary Table 14 ). The 'MEDUSA-like' model of diversification, assuming constant speciation and extinction rates within specific shift regimes 67 , was used for this analysis ( Supplementary Fig. 8 and Supplementary Note ). To test whether high MHC I copy numbers are associated with lineages that have high diversification rates, we carried out BiSSE analyses 68 with the diversitree R package 69 . In these analyses, species were grouped into two categories for high and low MHC I copy numbers, on the basis of a given threshold value. Analyses were repeated for 26 equally spaced copy number threshold values between 10 and 60. As diversitree allows terminal clades with extant diversities of no more than 200 species, we used birth–death models of diversification in combination with the diversified sampling scheme of Höhna et al . 70 to stochastically resolve subclades of all clades with more than 200 extant species, which was repeated 25 times. BiSSE analyses were conducted for each of the 25 resulting phylogenies and with each of the 26 copy number thresholds, assuming symmetric transition rates between high and low copy numbers and identical extinction rates in taxa with high and low copy numbers ( Supplementary Note and Supplementary Data ). URLs. GitHub repository for scripts used in this study, . Accession codes. All reads generated for this project have been deposited in the European Nucleotide Archive (ENA) under study accession PRJEB12469 (sample identifiers ERS1199874 – ERS1199939 ). All new assemblies (unitigs and scaffolds) reported on here have been deposited in the Dryad repository under doi:10.5061/dryad.326r8 . 
| The sequencing of the Atlantic cod genome in 2011 demonstrated that this species lacks a crucial part of its immune system. In a follow-up study, Kjetill S. Jakobsen and collaborators have investigated a large number of additional fish species and found that this is a trait that Atlantic cod has in common with its close relatives, the codfishes. Further analyses show that the alternate immune system observed is associated with increased speciation rates, and is a key to the success of this group of fishes. The results are now published in the world-leading journal Nature Genetics. Altogether, 65 new fish species were genome sequenced, which laid the foundation for a new bony fish phylogeny and further revealed that the loss of the central immune gene MHC II occurred around 100 million years ago in the branch leading to codfishes. Intriguingly, it was shown that the codfishes have evolved an alternative strategy by substantially increasing the copy number of another immune gene, MHC I, found to influence another evolutionary process: "Other researchers have suggested that the immune genes also have an effect on mate choice and speciation processes. For example, immune genes may affect mate choice in three spined stickleback, and even among humans – although the human results are thought to be controversial. Until now we have not had empirical data on MHC I and speciation. Our findings for codfishes and other groups of fishes are a breakthrough," Kjetill S. Jakobsen explains. In addition to the evolution of the immune system, an improved phylogeny and new insight into speciation, these results are of interest to immunological research in general: "Our data shows that the immune system is far more evolutionarily flexible than previously believed. The prevailing view has been that the human immune system is universal and can serve as a model for all vertebrates. Now, by adding the bony fishes – the largest group of vertebrates – it turns out that we, the humans, may be the special case. This knowledge has implications for all immunological research – including us," Sissel Jentoft says. A particular challenge for cod aquaculture has been the difficulties in developing vaccines by traditional methods. The new knowledge about the immune system of Atlantic cod and other codfishes may catalyse more efficient methods for vaccine development. | 10.1038/ng.3645
Physics | Novel tin 'bubbles' spur advances in the development of integrated chips | Christopher S. A. Musgrave et al, Easy-handling minimum mass laser target scaffold based on sub-millimeter air bubble -An example of laser plasma extreme ultraviolet generation-, Scientific Reports (2020). DOI: 10.1038/s41598-020-62858-3 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-020-62858-3 | https://phys.org/news/2020-04-tin-spur-advances-chips.html | Abstract Low density materials can control the laser absorption properties of a plasma, which can enhance quantum beam generation. The recent practical extreme ultraviolet (EUV) light source is the first industrial example of a laser plasma source based on low density targets. Here we propose an easy-handling target source based on a hollow sub-millimeter microcapsule fabricated from cationic and anionic polyelectrolytes deposited on surfactant-stabilized air bubbles. The lightweight microcapsules acted as a scaffold for surface coating with tin (IV) oxide nanoparticles (22–48% Sn by mass) and were then dried. As a proof of concept study, the microcapsules were ablated with a Nd:YAG laser (7.1 × 10 10 W/cm 2 , 1 ns) to generate 13.5 nm EUV emission that was relatively directed toward the laser incidence. The laser conversion efficiency (CE) at 13.5 nm 2% bandwidth from the tin-coated microcapsule (0.8%) was competitive compared with bulk tin (1%). We propose that microcapsule aggregates could be utilized as a potential small-scale/compact EUV source, and as future quantum beam sources by changing the coating to other elements. Introduction Low density materials are materials that are significantly lower in density than the bulk source material. For example, polystyrene has a density of ~1.0 g/cm 3 , and the corresponding very low density polystyrene can be as light as 0.03 g/cm 3 1 , 2 . Low density materials are versatile; they are used in many applications such as tissue engineering 3 , high surface area matrices 4 , and widely throughout laser plasma experiments. In laser plasma experiments, the critical electron density of the plasma is a key parameter that determines laser absorption, the resulting high energy density state, and quantum beam generation 5 , 6 . Ultralow densities below the critical density, typically ~1 mg/cm 3 , are desired to control the plasma character 7 , 8 . Recently, practical applications of extreme ultraviolet (EUV) light include lithography for production of <7 nm integrated circuits 9 . EUV light sources require a high laser conversion efficiency (CE %) from laser light, and robustness of the reflective Mo/Si optics over long operating periods 10 . At present, the most reliable EUV light sources utilize liquid tin droplets for ablation by a double pulse laser scheme, owing to the high power (250 W) achieved at 13.5 nm 11 . High repetition rates ( ∼ 100 kHz) can be achieved using liquid tin droplets. However, the double pulse scheme struggles to control the droplet expansion dynamics, in which the droplet is illuminated by a prepulse so that it expands just microseconds before the main laser pulse 11 , 12 , 13 . The EUV collector durability also remains a problem. Finally, liquid tin requires high temperatures to melt (232 °C). This is not ideal for practical handling, especially when new generations of laser quantum beam sources are designed. An easy-handling, high-repetition and high-CE laser target that does not require prepulse illumination is a crucial factor 14 , 15 . Overcoming the limitations of liquid tin dynamics control can be very advantageous in generating EUV.
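For orientation (a standard plasma physics relation, our addition rather than a statement from the original), the critical electron density for a drive laser of angular frequency \(\omega\) is

\[ n_c = \frac{\varepsilon_0 m_e \omega^2}{e^2} \approx \frac{1.1 \times 10^{21}}{\lambda^2\,[\mu\mathrm{m}^2]}\ \mathrm{cm}^{-3}, \]

which gives \(n_c \approx 1.0 \times 10^{21}\ \mathrm{cm}^{-3}\) for the 1064 nm Nd:YAG wavelength used below; for typical low-Z target compositions, a ~1 mg/cm 3 low density target is designed to remain underdense relative to this value.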
The synthesis of well-defined low density tin targets has the merit of supporting a wide range of materials with various elements, exact shapes, pore sizes, densities, etc. 16 , 17 , 18 , 19 , 20 . Plasmas generated from low density materials or nanostructured targets have a reduced opacity, increasing the CE as the plasma becomes less dense 6 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 . Moreover, the flexibility of materials science is exciting for quantum beam sources; the desired wavelength of light can be selected based on specific elements supported by a low density scaffold. For example, elements supported by low density scaffolds include gold 17 , copper 27 , vanadium 27 and titanium 28 for x-ray generation. For EUV generation, an extremely low density of tin (10 19 atoms/cm 3 ) is required; here we aim to define a new concept of low density targets for this purpose. A technique derived from a layer-by-layer (LbL) fabrication method 29 , 30 has previously been explored to produce low density materials for EUV light generation 20 . One LbL method produces a polyelectrolyte microcapsule that can be coated with tin nanoparticles or have tin electrodeposited onto its surface. The tin-coated microcapsule can then be ablated to generate 13.5 nm EUV. The benefits of LbL methods are that the raw materials are low-cost and mechanically stable, and that the nanoparticle coating is not restricted to tin. However, in those EUV experiments a solid particle template was used in the formation of the LbL capsule, which was then removed before laser irradiation 20 , 31 . The process of removing the solid template would not be favorable in situations where a high volume of targets must be processed quickly, such as high-repetition laser experiments. Therefore, we were motivated to develop an LbL microcapsule based on a substrate-free 32 , 33 or gas template in a similar manner to a previously published structure 34 . A gas template can be simply incorporated into the LbL capsule without a supercritical fluid drying process, and the capsule can be treated as a laser target. Furthermore, the simple fabrication process could be scaled up to higher volume production for high-repetition EUV generation, or extended to other quantum beam generation owing to the freedom in doping materials. Here, we show the fabrication of LbL polyelectrolyte microcapsules from a gas (air) template. The microcapsules were coated with tin oxide (SnO 2 ) nanoparticles, with alternating layers of tin oxide and polyelectrolytes to increase the tin content. The capsules were ablated using a 1064 nm Nd:YAG (7.1 × 10 10 W/cm 2 , 1 ns, 60 μm spot size) laser to generate 13.5 nm EUV light. The CE at 13.5 nm 2% bandwidth was estimated for bulk tin and the microcapsules. We used an adapted version of an LbL fabrication technique to produce polyelectrolyte microcapsules composed of poly(sodium 4-styrene-sulfonate) (PSS) and poly(allylamine hydrochloride) (PAH) 20 , 29 , 30 , 31 , 32 , 33 , 34 . We used a gas-tight syringe and pump to form the initial microcapsule. The air template was encapsulated with either dodecyltrimethylammonium bromide (DTAB) or poly(vinyl alcohol) (PVA). Furthermore, using an air template meant that one less stage of processing was required compared with a solid particle template. The flow rate of the syringe pump was chosen between 0.05–2.0 ml/min to control the microcapsule diameter (Figs. 1 and S1 ).
The wet microcapsule diameters were monodisperse at the flow rates we tested, with a diameter variation of 4% at each flow rate. The maximum rate of microcapsule production was about 200 Hz, which also correlated with other parameters such as capsule diameter. The ability to select the microcapsule diameter is hugely flexible compared with microcapsules based on a particle template, which are limited to 10 μm variation owing to collapse of the capsule during template decomposition. Control over the capsule diameter has the advantage of a customizable diameter depending on the application; for example, tuning the diameter to different laser spot sizes. The microcapsules were then coated in a tin oxide nanoparticle solution, and dried for characterization. Figure 1 Polyelectrolyte microcapsule diameter control using a 7.2 × 10 −2 M DTAB solution with varying flow rate. Full size image Inductively coupled plasma atomic emission spectroscopy (ICP-AES) measured that 4.2 × 10 −9 g of Sn was present in the microcapsule core layers, which was equivalent to 2.1 × 10 13 Sn atoms (Supplementary Information). Our microcapsules were unique compared with previous methodologies in that additional alternating PAH/SnO 2 layers were applied over the PSS/PAH core layers. This increased the overall tin content with only a monolayer of PAH between each SnO 2 layer. We did this in order to capitalize on the alternating ionic charge between the SnO 2 nanoparticle solution (−) and the PAH (+) to form these additional stabilizing layers. Field emission scanning electron microscopy (FE-SEM) (Fig. 2(b) ) revealed that the thickness of the microcapsule walls was around 180 nm. The cross-section almost resolved the individual layers of the microcapsule, particularly the final SnO 2 layer. The schematic in Fig. 2(a) is not to scale, but shows the LbL composition of each polyelectrolyte ion from the DTAB or PVA core (black), PSS (blue) and PAH (red) layers followed by coating with SnO 2 nanoparticles. Figure 3 shows another SEM image from the top view, and Fig. 3(b) is a zoomed-in image of part of Fig. 3(a) . Energy dispersive X-ray spectroscopy (EDS) measured a 22–48% by mass Sn coating on the surface of the microcapsules (Fig. 3(c) ). The SnO 2 coating was relatively well-distributed across the surface, even in cases where 22% Sn was measured. This meant there were no severe variations expected in the wall thickness (approx. 15 nm SnO 2 nanoparticle size from transmission electron microscopy (TEM)). Figure 2 A schematic of the microcapsule composition (left), and a cross-sectional FE-SEM image (right). The schematic represents two layers of SnO 2 coating, whereas the fabricated microcapsules contained 3 or 6 layers. The FE-SEM image shows a microcapsule wall cross-section composed of PVA[PSS/PAH] 3 [SnO 2 /PAH] 2 SnO 2 . Full size image Figure 3 SEM images of a dry tin oxide coated microcapsule target ( a ) and ( b ), and corresponding EDS mapping of elemental tin ( c ). The lighter specks ( c ) correspond to the tin oxide nanoparticles. The images are of a PVA[PSS/PAH] 3 [SnO 2 /PAH] 2 SnO 2 microcapsule with 28% tin content by mass. Full size image As a case study for the effectiveness of the microcapsule scaffold, we generated 13.5 nm EUV light using a 1064 nm Nd:YAG laser. The captured EUV spectra can be seen in Fig. 4 . ICP-AES measured that there were 2.1 × 10 13 Sn atoms present in a microcapsule.
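As a quick consistency check (our arithmetic, not part of the paper), the ICP-AES tin mass converts to the quoted atom count via N = m N_A / M:

    N_A = 6.022e23       # Avogadro constant, mol^-1
    M_SN = 118.71        # molar mass of tin, g/mol
    m_sn = 4.2e-9        # Sn mass per microcapsule from ICP-AES, g
    print(f"{m_sn / M_SN * N_A:.1e}")   # -> 2.1e13 atoms, as quoted above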
This atom count was comparable to the minimum mass calculated elsewhere, where 7.4 × 10 13 –1.5 × 10 14 atoms were found suitable for EUV generation 35 , 36 . Thus, we expected a strong EUV emission at 13.5 nm associated with ablation of tin within the microcapsules. The EUV spectra were characterized by a strong unresolved transition array (UTA) emission around 13.5 nm arising from transitions of tin ions between Sn 8+ and Sn 21+ 35 . The in-band CE at 13.5 nm 2% bandwidth was estimated at 0.8% for the 6-layer SnO 2 microcapsules using a previously described technique 37 . This was lower than the ideal CE (3%) for bulk tin 38 , but comparable to that of bulk tin (1%) under the same laser conditions. The lower efficiency is likely due to lateral expansion losses of the ablation plume for the present smaller laser spot (60 μm) and shorter pulse duration (1 ns) compared with previous reports (for example, >100 μm, >6 ns) 39 , 40 . Such lateral expansion would exhibit less of an opacity effect, that is, re-absorption of 13.5 nm light, owing to the small plume. This resulted in a sharper spectrum in comparison with previous spectra produced by Nd:YAG laser irradiation, where larger spot sizes and longer pulse durations gave a so-called corona plasma, which re-absorbs 13.5 nm light from the radiation region 35 . Figure 4 EUV emission spectra of bulk tin (black), 6-layer SnO 2 microcapsule (red) and 3-layer SnO 2 microcapsule (blue) ablated using a 7.1 × 10 10 W/cm 2 1 ns pulse. Full size image The 6-layer CE of 0.8% contrasted with the 0.4% CE of the 3-layer SnO 2 microcapsules. Therefore our microcapsule preparation method was justified; the additional layers of SnO 2 increased the tin content sufficiently for a strong EUV emission. The EUV spectra suggested that the laser was penetrating beyond the top layers of SnO 2 /PAH, resulting in ablation of more SnO 2 . Thus, more SnO 2 was ablated for a strong emission in the 6-layered microcapsules compared with the 3-layered target, resulting in a higher CE. Further improvements to the microcapsule SnO 2 coverage and laser parameters could yield a higher CE than bulk Sn. The microcapsules exhibited EUV emission that was relatively directed toward the laser incidence in comparison to bulk tin, as seen in the angular distribution data (Fig. S4 and Table S1 in the ESI). The EUV emission from the 3-layered target was more directed than that from the 6-layered one, suggesting a localized emission point at the front of the laser incidence 41 due to the present minimum-mass tin. In practice, the microcapsules had a tendency to coalesce into a larger aggregate during fabrication and transportation for laser shots. This meant that 13.5 nm EUV was generated from a microcapsule aggregate. Aggregated capsules are not a serious issue, as the synthesis method could be adapted for higher-repetition EUV generation. Firstly, the microcapsule diameter was both monodisperse and customizable (~80–180 µm). Secondly, the facile preparation method could be adapted into a continuous fabrication process and supply of the microcapsule aggregates to the laser focus position. Lastly, the microcapsule structure is such that only one driving pulse would be required to generate EUV. A pre-pulse would be redundant, as the microcapsule is already a low density structure/aggregate. The microcapsules would be analogous to the presently used double-pulse method to create a tin mist from liquid tin before ablation by the main driving pulse 11 , 12 , 13 . Fig. 5 shows how the microcapsule aggregate represents the double-pulse method.
We are actively interested in developing such a higher-repetition microcapsule aggregate target delivery system. Figure 5 Scheme comparing the current EUVL double-pulse method (left) to the proposed microcapsule aggregate target (right). The aggregate would represent a "mist" in a similar manner to current methods requiring two pulses. A high-speed camera image shows some progress at Tokyo Tech to date. The microcapsules can be produced in a large volume, which could be used for continuous target supply. Full size image However, with EUV entering high volume manufacturing (HVM) stages, there are several points to overcome. Firstly, the microcapsules must be transported to the focal spot precisely and frequently. A target delivery speed of ~50 m/s is required; such high speed combined with high accuracy has been studied by target fabricators 42 . The second issue for these targets and HVM is the frequency of capsule production, which was about 200 Hz for the present air bubbles. We would need to bundle and integrate ~100 fabrication devices operating simultaneously, including target injection to the focal spot. The other point is the problem of carbon debris from the capsule if it is not fully removed by the magnetic field shield 35 . These points would need to be faced at high repetition rates (>1 kHz) and for EUV HVM. On the other hand, at relatively low repetition rates (<100 Hz), it would not prove so difficult to construct the present concept of a laser quantum beam source with a single fabrication device. Finally, in this paper we used tin oxide as the coating material on the surface of the polyelectrolyte scaffold. The intent was to show the fabrication and practical usage of an LbL scaffold microcapsule for metallic nanoparticles. In this case 13.5 nm EUV light was generated. However, it is feasible to use many other nanoparticles to generate other quantum beams and other wavelengths of EUV light. For example, Gd (6 nm) is of interest as a beyond-EUV source 43 , 44 , and it can be found in nanoparticle form. A detailed review of general uses of microcapsules, including surface coatings, can be found elsewhere 5 . In summary, EUV light is becoming increasingly important in today's world, and it becomes more expensive as the high volume manufacturing of integrated circuits is realized. However, double pulse illumination is one of several issues that cannot be ignored much longer. To this end we have prepared a lightweight, stable layer-by-layer (LbL) tin oxide-coated polyelectrolyte microcapsule scaffold. The facile synthesis route utilized a gas template core to fabricate the monodisperse microcapsules. To prove the effectiveness of the capsules, we generated 13.5 nm EUV light using a Nd:YAG laser (7.1 × 10 10 W/cm 2 ). A maximum CE of 0.8% was obtained for the LbL microcapsules, versus 1% for bulk tin, at 13.5 nm 2% bandwidth. A scheme for an aggregate microcapsule target was proposed, akin to the currently used double-pulse scheme. Such an easy-handling low density target contributes to the construction of a compact EUV source better suited to imaging 45 or surface modification 46 than to an HVM EUV source. Finally, we highlight that other wavelengths of light could be generated by changing the scaffold nanoparticle coating. Experimental Section Materials All materials were used as received unless stated otherwise.
Poly(vinyl alcohol) (PVA) (Mw 1,500–1,800, Wako chemicals), poly(allylamine hydrochloride) (PAH) (Mw 17,500, Aldrich), poly(sodium 4-styrene-sulfonate) (PSS) (Mw 70,000, Aldrich), deionized water, dodecyltrimethylammonium bromide (DTAB) (Tokyo Chemical Industry), sodium chloride (Wako chemicals), tin(IV) oxide nanoparticles (<100 nm (BET), Aldrich). The particle size characterization is shown in Figs S2 and S3 . The nanoparticle size was checked by transmission electron microscopy (TEM) (TEM7000, Hitachi). The average particle size was 15 nm, with a distribution of 10–50 nm. Layer-by-layer microcapsule fabrication and characterization All procedures were carried out in a laminar flow cabinet equipped with a HEPA filter. The microcapsules were prepared using a gas-tight syringe (Hamilton) attached to a syringe pump. We fabricated syringe needles with an inner diameter of either 27 μm or 108 μm, and inserted them into a solution of 7.2 × 10⁻² M DTAB or 3 wt% PVA. Once the PVA or DTAB core bubbles were produced, they were washed with water to remove excess PVA or DTAB solution. The polyelectrolytes were then added to form the LbL microcapsules in the sequence: PSS coating (1 mg/ml), washing with water, PAH coating (1 mg/ml), washing; this was repeated 3 times (each layer comprising one coating each of PSS and PAH). The microcapsules were then immersed in a tin oxide nanoparticle solution (1 mg/ml) for 1–2 minutes, and then coated again with PAH (1 mg/ml). This was repeated either twice or five times, with a final coating of tin oxide nanoparticles to give the completed microcapsules. The capsules were then dried on a glass substrate (Asahi) overnight. Inductively coupled plasma atomic emission spectrometry (ICP-AES) (PerkinElmer ELAN DRC-e) was performed on tin-oxide coated microcapsules. The mass of tin was obtained, allowing calculation of the number of tin atoms present in the microcapsules. Details are shown in Supplementary Information. A Field Emission Scanning Electron Microscope (FE-SEM) (Hitachi SU8020) operating in a low accelerating voltage mode (1 kV) imaged the cross-section of the dry microcapsule walls. Microcapsules were sputter-coated with several nm of platinum to improve conductivity under the electron beam. The metal coating also protected the polyelectrolyte capsule from any damage caused by the electron beam. A mini-SEM (Chip Hua, TE3000) was used to perform Energy-Dispersive X-ray Spectroscopy (EDS) measurements. The corresponding SEM images were obtained at an accelerating voltage of 15 kV in secondary electron mode. Microcapsules were not sputter-coated with metal, as this would have interfered with the EDS measurements. Laser irradiation conditions A 1064 nm Nd:YAG laser (2 mJ, 1 ns, L11038–01, Hamamatsu Photonics) with a spot size of 60 μm full width at half maximum (7.1 × 10¹⁰ W/cm²) was used to irradiate the microcapsules. A charge-coupled device (CCD) camera (D0920-BN, Tokyo Instruments) was used to obtain the EUV spectra at a 45-degree angle with respect to the laser incidence. Bulk tin was used as a reference material (100 μm thick, Nilaco, Japan). The in-band CE at 13.5 nm 2% bandwidth was estimated for bulk Sn and the microcapsule targets using a previously described method based on phosphor imaging plates and a calorimeter 38 . The vacuum chamber operated at a pressure in the region of 10⁻⁵ Torr.
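To make the quoted quantities easier to check, the back-of-envelope relations behind them (peak intensity from pulse energy, tin atom count from an ICP-AES mass, and in-band CE from a spectrum) can be sketched in a few lines. This is an illustrative sketch only: the tin mass and the spectrum array below are placeholder values, not measured data, and a real CE estimate would also fold in the angular distribution and detector calibration.

```python
import numpy as np

# --- Peak intensity sanity check for the stated laser parameters ---
# 2 mJ delivered in 1 ns over a 60 um FWHM spot (treated here as the
# beam diameter; the paper quotes 7.1e10 W/cm^2).
energy_J = 2e-3
pulse_s = 1e-9
spot_cm = 60e-4                       # 60 um expressed in cm
area_cm2 = np.pi * (spot_cm / 2) ** 2
intensity = energy_J / pulse_s / area_cm2
print(f"peak intensity ~ {intensity:.1e} W/cm^2")   # ~7.1e10 W/cm^2

# --- Tin atoms from an ICP-AES mass estimate (placeholder mass) ---
tin_mass_g = 4e-9                     # hypothetical, order of ng
N_A, M_Sn = 6.022e23, 118.71          # Avogadro; Sn molar mass (g/mol)
atoms = tin_mass_g / M_Sn * N_A
print(f"tin atoms ~ {atoms:.1e}")     # compare with the 7.4e13-1.5e14 minimum

# --- In-band conversion efficiency at 13.5 nm, 2% bandwidth ---
# 'wl' and 'spectral_energy' stand in for a calibrated EUV spectrum
# (J/nm integrated over the collection solid angle).
wl = np.linspace(12.0, 15.0, 600)
spectral_energy = 1.5e-5 * np.exp(-((wl - 13.5) / 0.4) ** 2)  # synthetic UTA
band = (wl >= 13.5 * 0.99) & (wl <= 13.5 * 1.01)              # 2% bandwidth
E_inband = np.trapz(spectral_energy[band], wl[band])
print(f"CE ~ {100 * E_inband / energy_J:.2f} %")
```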
| The use of extreme ultraviolet light sources in making advanced integrated chips has been considered, but their development has been hindered owing to a paucity of efficient laser targets. Scientists at Tokyo Institute of Technology (Tokyo Tech) recently developed an extremely low-density tin 'bubble,' which makes the generation of extreme ultraviolet reliable and low cost. This novel technology paves the way for various applications in electronics and shows potential in biotechnology and cancer therapy. Development of next-generation devices requires that their core, called the integrated circuit chip, is more compact and efficient than existing ones. Manufacturing these chips requires powerful light sources. The use of light sources in the extreme ultraviolet (EUV) range (an extremely short-wavelength radiation) has become popular in recent times, but their generation is challenging. One solution is the use of high-intensity lasers: Recent advances in laser technology have led to the development of lasers with increased power and lower prices. High-intensity lasers produce laser plasmas, and their first practical application is the generation of EUV light to manufacture semiconductor integrated circuits. In this process, these lasers irradiate an appropriate 'target,' and as a result, a high-temperature and high-density state is created. From this state, 13.5 nm light is generated with high brightness, which can be used in the manufacturing of integrated chips. But this is not an easy feat: control of target density that can produce light in the EUV range has been difficult. Tin has been considered as an option, but its development has been greatly delayed owing to the inability to control its dynamics. To this end, a team of scientists, including Associate Professor Keiji Nagai from Tokyo Tech and Assistant Professor Christopher Musgrave from University College, Dublin, set out to find efficient laser targets. In a study published in Scientific Reports, they describe a novel type of low-density material, which is scalable and low-cost. Prof Nagai says, "EUV light has become crucial in today's world but is expensive owing to the high-volume manufacturing." To begin with, the scientists created a tin-coated microcapsule or 'bubble,' a very low-density structure, weighing as little as 4.2 nanograms. For this, they used polyelectrolytes (polymers bearing charged groups), which act as surfactants to stabilize the bubbles. The bubbles were then coated with tin nanoparticles. Prof Nagai explains, "We produced polyelectrolyte microcapsules composed of poly(sodium 4-styrene-sulfonate) and poly(allylamine hydrochloride) and then coated them in a tin oxide nanoparticle solution." To test the use of this bubble, the scientists irradiated it using a neodymium-YAG laser. This, indeed, resulted in the generation of EUV light within the 13.5 nm range. In fact, the scientists even found that the structure was compatible with conventional EUV light sources that are used to manufacture semiconductor chips. But the biggest advantage was that the conversion efficiency with the tin bubble, a measure of how efficiently laser energy is converted into EUV light, was comparable to that of bulk tin. Prof Nagai explains, "Overcoming the limitations of liquid tin dynamics can be very advantageous in generating EUV light. Well-defined low-density tin targets can support a wide range of materials including their shape, pore size, density etc."
Prof Nagai and his research team have been developing low-density materials for laser targets for many years but had faced limitations in manufacturing cost and mass productivity. Now, the new low-density tin targets made of bubbles offer an elegant solution for mass-producing a compact 13.5 nm light source at low cost. In addition to its applications in electronics, Prof Nagai is optimistic that their novel technology consisting of "bubble" laser targets could even be used in cancer therapy. He concludes, "This method could be utilized as a potential small scale/compact EUV source, and future quantum beam sources such as electrons, ions, and X-rays by changing the coating to other elements." Through this opportunity, Prof Nagai and his team wish to collaborate with large laser facilities in Japan and overseas. | 10.1038/s41598-020-62858-3
Medicine | Neuroimmune proteins may serve as biomarkers for diagnosis of neurodegenerative diseases | Jonathan D. Cherry et al, Neuroimmune proteins can differentiate between tauopathies, Journal of Neuroinflammation (2022). DOI: 10.1186/s12974-022-02640-6 Journal information: Journal of Neuroinflammation | https://dx.doi.org/10.1186/s12974-022-02640-6 | https://medicalxpress.com/news/2022-11-neuroimmune-proteins-biomarkers-diagnosis-neurodegenerative.html | Abstract Background Tauopathies are a group of neurodegenerative diseases in which there is pathologic accumulation of hyperphosphorylated tau protein (ptau). The most common tauopathy is Alzheimer's disease (AD), but chronic traumatic encephalopathy (CTE), progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), and argyrophilic grain disease (AGD) are significant health risks as well. Currently, it is unclear what specific molecular factors might drive each distinct disease and represent therapeutic targets. Additionally, there is a lack of biomarkers that can differentiate each disease in life. Recent work has suggested that neuroinflammatory changes might be specific among distinct diseases and offers a novel resource for mechanistic targets and biomarker candidates. Methods To better examine each tauopathy, a 71 immune-related protein multiplex ELISA panel was utilized to analyze anterior cingulate grey matter from 127 individuals neuropathologically diagnosed with AD, CTE, PSP, CBD, and AGD. A partial least square regression analysis was carried out to perform unbiased clustering and identify proteins that are distinctly correlated with each tauopathy, correcting for age and gender. Receiver operating characteristic and binary logistic regression analyses were then used to examine the ability of each candidate protein to distinguish diseases. Validation in postmortem cerebrospinal fluid (CSF) from 15 AD and 14 CTE cases was performed to determine if candidate proteins could act as possible novel biomarkers. Results Five clusters of immune proteins were identified and compared to each tauopathy to determine if clusters were specific to distinct diseases. Each cluster was found to correlate with either CTE, AD, PSP, CBD, or AGD. When examining which proteins were the strongest driver of each cluster, it was observed that the most distinctive protein for CTE was CCL21, for AD was FLT3L, and for PSP was IL13. Individual proteins that were specific to CBD and AGD were not observed. CCL21 was observed to be elevated in CTE CSF compared to AD cases ( p = 0.02), further validating its use as a possible biomarker. Sub-analyses for male-only cases confirmed the results were not skewed by gender differences. Conclusions Overall, these results highlight that different neuroinflammatory responses might underlie unique mechanisms in related neurodegenerative pathologies. Additionally, the use of distinct neuroinflammatory signatures could help differentiate between tauopathies and act as novel biomarker candidates to increase the specificity of in-life diagnoses. Background One of the most prevalent types of neurodegenerative pathologies is tauopathies, a class of neurodegenerative diseases that demonstrate pathologic accumulation of hyperphosphorylated tau (ptau) in the brain [ 1 ]. The most common tauopathy is Alzheimer's disease (AD), but diseases like chronic traumatic encephalopathy (CTE), progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), and argyrophilic grain disease (AGD) are significant health risks as well.
Neuropathologically, ptau can be observed to have a unique regional progression and cellular aggregation patterns specific to each disease [ 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 ]. However, the mechanisms behind neuropathology are still unknown and it is unclear if there are early shared mechanisms common among diseases. Additionally, tauopathies can only be diagnosed after death and lack specific diagnostic biomarkers. While the use of ptau in fluids, like blood and cerebrospinal fluid (CSF), has been shown to be a possible marker of general neurodegenerative processes, additional biomarkers are still needed that can sufficiently differentiate between tauopathies and aid antemortem diagnosis [ 10 , 11 , 12 ]. As there is greater understanding of neurodegenerative disease processes, it has become increasingly clear that neuroinflammatory changes might be specific among diseases. The idea that the immune response is either inflammatory or anti-inflammatory has been proven to be far too simplistic [ 13 , 14 ]. Recent single cell sequencing studies have challenged such dichotomous classification and suggested that the immune response present during neurodegenerative diseases is incredibly complex and diverse depending on the disease [ 15 , 16 , 17 , 18 ]. Furthermore, additional work has demonstrated that the microglia-mediated inflammatory response appears to be tailored towards the specific insult [ 14 ]. These complex changes potentially offer the ability to find a "neuroinflammatory signature" and might prove to be a useful source of mechanistic targets important to disease pathogenesis and novel biomarker candidates. To that end, previous research has suggested that inflammation-related markers like CCL11 could distinguish between AD and CTE [ 19 ]. However, CCL11 is also increased with aging, making it challenging to use in cohorts populated with primarily aged individuals [ 19 , 20 ]. Therefore, a deeper analysis of multiple diverse inflammatory or neuroimmune mediators might offer a powerful approach to identify novel proteins that could discriminate between tauopathies. Utilizing grey matter from the anterior cingulate cortex, a 71 immune-related protein multiplex ELISA was performed comparing the tauopathies AD, CTE, PSP, CBD, and AGD to identify novel neurodegenerative pathways and biomarkers. To narrow down the candidate biomarker list, an unbiased partial least square regression model was used to identify clusters of specific proteins that best correlated to each disease group while controlling for age at death and gender. Using the top distinct proteins, receiver operating characteristic curves and binary logistic regression analyses were performed to demonstrate the sensitivity, specificity, and predictive power of each protein. Finally, to validate the ELISA results and demonstrate their potential use as novel candidate biomarkers, the top distinctive protein for CTE was used to examine the ability to identify disease in CSF compared to AD. Overall, due to the inherent complex nature of the neuroinflammatory response, this work suggests that in-depth analyses of neuroinflammatory components can (1) reveal novel mechanisms of disease specific to each tauopathy and (2) identify better fluid biomarkers to more accurately identify and discriminate tauopathies in life. Materials and methods Subjects Postmortem fresh frozen human brain tissue was obtained and processed from 127 individuals as previously described [ 19 ]. Additionally, 29 samples of cerebrospinal fluid (CSF) were obtained as previously described [ 11 ].
Cases were evaluated from the BU CTE Center, BU Alzheimer's Disease Center, Framingham Heart Study, and Mayo Clinic Brain Banks. Next of kin provided written consent for participation and brain donation. IRB approval was obtained through the Boston University Alzheimer's Disease and CTE Center and Mayo Clinic brain bank. Cases were assessed for neurodegenerative diseases using well-established criteria for AD, neocortical Lewy body disease, frontotemporal lobar degeneration, motor neuron disease, CTE, PSP, CBD, and AGD [ 2 , 3 , 4 , 5 , 6 , 7 , 8 ]. For AD, cases were included if they had a Braak Stage of at least 3 and a CERAD score of at least 2. Cases were divided into 5 groups based on the presence of either AD, CTE, PSP, CBD, or AGD. No group included cases with comorbid neurodegenerative diseases or overlapping pathologic hallmarks. For brain tissue homogenate, 40 cases had a diagnosis of CTE, 28 had AD, 20 had PSP, 20 had CBD, and 19 had AGD. For CSF, 14 cases had CTE and 15 had AD. Case selection represented a convenience sample set from each respective brain bank to best match age and gender across the 5 groups used in the study. However, as no females have been diagnosed with CTE, only males were selected. A full breakdown of sample demographics for brain homogenate samples is in Table 1 and for CSF samples in Table 2 . Table 1 Demographics for brain homogenate samples Table 2 Demographics for CSF samples ELISA Frozen tissue from the anterior cingulate cortex was harvested using methods previously described [ 21 ]. Anterior cingulate cortex was selected as it is a region that can be affected by ptau in all 5 of the diseases [ 22 , 23 , 24 , 25 , 26 ]. Isolated frozen grey matter was homogenized using the Precellys Evolution + Cryolys Evolution system (Bertin Bioreagent) as per manufacturer's instructions. Briefly, 500 mg of brain tissue was placed into a 7 mL CK28 Precellys lysis tube containing 5 mL of RIPA buffer. The tubes were placed into the Precellys Evolution + Cryolys Evolution and run at 6500 rpm for 3 × 20 s with a 15 s break at 4 ℃. Samples were then centrifuged at 3000× g at 4 ℃ for 10 min. The supernatant was removed and stored. Samples were then diluted with RIPA buffer to a final concentration of 4 mg/mL. Diluted samples were sent to Eve Technologies and run on the Human Cytokine/Chemokine 71-plex discovery assay array (HD71). Eight protein targets were not expressed in any of the samples and were excluded from the analyses. To validate candidate biomarkers in a more relevant setting, postmortem CSF was obtained from 29 cases and prepared as previously described [ 11 ]. Samples were run on the CCL21/6CKine DuoSet ELISA kit from R&D systems as per manufacturer's instructions. Plates were imaged using a SpectraMax M3 imager (Molecular Devices). Statistics To identify disease-specific proteins using an unbiased approach, Partial Least Square (PLS) regression analysis was carried out with a Kendall correlation. The aim of PLS is to measure the inter-relatedness between two blocks of variables. For this analysis, one block is made up of the 5 contrasts of interest (CTE vs all other neuropathologies, AD vs all other neuropathologies, etc.). The other block contains the ELISA protein levels after adjusting for age and gender.
To carry out the PLS, singular value decomposition (SVD) is conducted on a partial correlation matrix between each contrast and ELISA proteins after adjusting for age and gender, where each column corresponds to a contrast and each row corresponds to a biomarker. The resulting matrices from SVD provide insight into how contrasts and ELISA proteins load to underlying constructs. Each protein was given a correlation number that represented how much a specific protein contributes to each neuropathologic disease. The advantage of using the PLS regression is that it reduces the number of target proteins for analysis, so a multiple comparison correction across all 71 proteins in the panel was not needed. To better understand if gender influenced the results, a sub-analysis using just males was also performed. Receiver operating characteristic (ROC) curve and binary logistic regression analyses were used to determine the specificity and sensitivity of the top candidate proteins identified from PLS analysis. For ROC analysis, each disease was compared against all other pathologies (i.e. CTE cases vs all other diseases). ROC was analyzed for each protein individually. Multiple comparison correction was applied to the ROC results. Binary logistic regression was used to determine if proteins were predictive of distinct tauopathies when accounting for age and gender differences. Finally, Mann–Whitney tests were used to compare between AD and CTE CSF. Descriptive statistics, ROC curve, and Mann–Whitney tests were generated using SPSS (v26, IBM). Results Partial least square regression model identified 5 clusters of proteins that correlated with distinct tauopathy To identify possible biomarkers or mechanisms that are distinct among CTE, AD, PSP, CBD, and AGD, a 71 immune related protein multiplex ELISA was performed on anterior cingulate cortex grey matter. Using PLS regression analysis, 5 clusters of proteins were identified and correlated against each tauopathy to determine what clusters were related to specific disease (Fig. 1 ). Age and gender were included into the model to correct for differences. However, it is important to note that no females were present in the CTE group. Cluster 1 was most correlated with CTE, Cluster 2 was most correlated with AD, Cluster 3 was most correlated with PSP, Cluster 4 was most correlated with CBD, and Cluster 5 was most correlated with AGD. Although clusters 4 and 5 demonstrated higher specificity for CBD and AGD respectively, there was minor overlap with the other tauopathies as well. Fig. 1 Partial Least Square regression identifies clusters of proteins that correlate with each tauopathy. Using the partial least square regression analysis, five clusters of ELISA proteins were identified and compared against each tauopathy to determine degree of correlation. Each comparison was a contrast of a single tauopathy against all other neuropathologies. Blue represents low correlation and red represents high correlation. Comparisons are adjusted for age at death and gender. Cluster 1 was most correlated with CTE, Cluster 2 with AD, Cluster 3 with PSP, Cluster 4 with CBD, and Cluster 5 with AGD Full size image Next, each cluster was analyzed to examine which specific proteins were driving the cluster classification (Fig. 2 ) and the top 5 proteins for each cluster were identified. The full list of proteins is presented in Additional file 2 : Table S1. 
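As a rough illustration of the clustering pipeline just described (residualize both blocks on age and gender, correlate the disease contrasts with the protein levels, then decompose), a minimal sketch follows. The data here are random stand-ins, plain Pearson correlation is used for brevity where the study used Kendall/partial correlations, and the variable names are hypothetical. The per-protein loadings from such a decomposition are what the rankings below summarize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: Y holds 5 disease contrasts (one column per
# "disease vs all others"), X holds 71 protein levels.
n_cases, n_prot, n_dx = 127, 71, 5
X = rng.normal(size=(n_cases, n_prot))
Y = rng.normal(size=(n_cases, n_dx))

def residualize(M, covars):
    """Remove covariate effects by least-squares projection."""
    beta, *_ = np.linalg.lstsq(covars, M, rcond=None)
    return M - covars @ beta

covars = np.column_stack([np.ones(n_cases),
                          rng.integers(60, 90, n_cases),   # age (toy)
                          rng.integers(0, 2, n_cases)])    # gender (toy)
Xr, Yr = residualize(X, covars), residualize(Y, covars)

# Cross-correlation matrix between contrasts and proteins, then SVD:
# U columns weight the contrasts, Vt rows weight the proteins, giving
# protein-by-component loadings of the kind shown in the heatmaps.
R = np.corrcoef(Yr.T, Xr.T)[:n_dx, n_dx:]        # n_dx x n_prot
U, s, Vt = np.linalg.svd(R, full_matrices=False)
loadings = Vt.T * s                              # proteins x components
print(loadings.shape)                            # (71, 5)
```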
The most correlated protein for Cluster 1/CTE was CCL21 (0.36), followed by CXCL5 (0.27), CXCL13 (0.24), GMCSF (0.25), and CCL17 (0.23). In Cluster 2/AD, the highest correlated protein was FLT3L (0.31), followed by IL17F (0.25), CCL17 (0.23), IL15 (0.23), and IL17E/IL25 (0.21). In Cluster 3/PSP, the highest correlated protein was IL13 (0.37), followed by SCF (0.25), CXCL13 (0.25), CXCL9 (0.24), and IL3 (0.24). In Cluster 4/CBD, the highest correlated protein was IL1β (0.37), followed by MDC (0.37), CXCL9 (0.21), LIF (0.2), and VEGFA (0.19). Finally, in Cluster 5/AGD the highest correlated protein was VEGFA (0.33), followed by CCL2 (0.32), CXCL9 (0.31), FLT3L (0.28), and GMCSF (0.22). Fig. 2 Correlation of each cluster to ELISA proteins identifies which immune factors are most specific to each disease. Singular value decomposition (SVD) was conducted on a partial correlation matrix between each cluster and ELISA proteins after adjusting for age and gender, where each column corresponds to a cluster and each row corresponds to an ELISA protein. The resulting heatmap demonstrates which immune proteins best correlate to each cluster/disease. From this heatmap, the top 5 candidate proteins that are distinct for each cluster/disease were selected. Blue represents low correlation and red represents high correlation. Although gender was controlled for as a covariate in the PLS regression, a sub-analysis of just males was performed. The full list and order of proteins for the sub-analysis is presented in Additional file 3 : Table S2. The top proteins for Cluster 1/CTE remained consistent, with CCL21 (0.31) as the highest associated protein. Cluster 3/PSP was also consistent, with IL13 as the highest associated protein. However, there were some differences across the other 3 groups. In Cluster 2/AD, the highest male-associated protein was IL23 (0.27), followed by CCL2 (0.24) and FLT3L (0.24). CXCL9 (0.35) was the highest associated protein in both Clusters 4 and 5. Overall, given that the sample size was severely reduced when excluding females, the findings in the male-only subset were fairly consistent with the full mixed-gender analysis. The top 5 proteins in each cluster could identify distinct neurodegenerative pathologies Using the top 5 proteins from each cluster in the full mixed-gender analysis group, receiver operating characteristic (ROC) curve analysis was performed to examine how specific and sensitive each protein was for identifying distinct tauopathies (Fig. 3 ). Multiple comparison correction was performed and p < 0.01 was the new significance threshold. ROC analysis of the top 5 Cluster 1 proteins for the ability to identify CTE demonstrated that CCL21 had the highest area under the curve (AUC) (0.854, p < 0.001). CXCL5, CXCL13, GMCSF, and CCL17 were all significant as well (Fig. 3 A). GMCSF was the only Cluster 1 protein that had a significant AUC under 0.5, demonstrating that it is found at lower levels in CTE compared with the other diseases. The top 5 proteins in Cluster 2 were used to identify AD. FLT3L had the highest AUC (0.670, p = 0.001), followed by CCL17 (0.634, p = 0.015). IL15 did not meet multiple comparison corrected significance (0.634, p = 0.017). IL17F (0.297, p < 0.001) had a significant AUC under 0.5, demonstrating decreased expression in AD compared with the other diseases (Fig. 3 B). Next, the top 5 proteins in Cluster 3 were used to identify PSP.
IL13 was observed to have the highest significant AUC (0.777, p < 0.001), followed by SCF (0.640, p = 0.49) and IL3 (0.675, p = 0.003). CXCL9 (0.275, p = 0.001) had an AUC under 0.5, demonstrating less protein expression in PSP compared with other diseases (Fig. 3 C). Cluster 4 and Cluster 5 did not have any significant AUC values when used to identify CBD (Fig. 3 D) and AGD (Fig. 3 E), respectively. Fig. 3 Receiver operating characteristic (ROC) curves demonstrate that the top 5 candidate proteins for each cluster are specific and sensitive in identifying tauopathies. Using the PLS results and top 5 candidate proteins for each cluster, ROC analysis was performed for A) CTE, B) AD, C) PSP, D) CBD, and E) AGD. The area under the curve (AUC), standard error, and significance are displayed below each graph. AUC values over 0.5 suggest positive association with each disease, while values under 0.5 represent negative association. The black line is the reference line. Multiple comparison correction set the significance threshold to p < 0.01. The protein with the highest AUC value in each cluster was then selected and used to examine the ability to identify each respective tauopathy when comparing against all 127 cases together while controlling for age at death and gender. Using binary logistic regression analysis, CCL21 significantly correlated with a positive diagnosis of CTE (OR = 1.206, p < 0.001) independently of age at death (OR = 0.944, p = 0.036) or gender (OR = 0.00, p = 0.997). Next, it was observed that FLT3L, although not reaching significance, trended towards being able to predict AD (OR = 1.13, p = 0.054) independently of age at death (OR = 1.109, p < 0.001) or gender (OR = 2.679, p = 0.42). Cluster 3's IL13 demonstrated significant ability to identify PSP (OR = 1.313, p < 0.001), independently of age at death (OR = 0.970, p = 0.276) or gender (OR = 2.076, p = 0.183). Unlike Clusters 1–3, the protein with the highest AUC in Cluster 4 was not the top protein in the PLS analysis. LIF had the highest Cluster 4 AUC, but IL1β was the top PLS correlated protein. However, neither IL1β (OR = 0.965, p = 0.816) nor LIF (OR = 1.244, p = 0.232) correlated with a diagnosis of CBD. Similarly, for Cluster 5, GMCSF had the highest AUC but VEGFA was the top PLS correlated protein. While VEGFA (OR = 1.046, p = 0.246) was not correlated with a diagnosis of AGD, GMCSF was correlated with a diagnosis of AGD (OR = 5.148, p = 0.007) independently of age (OR = 1.132, p < 0.001) and gender (OR = 0.776, p = 0.676). Binary logistic regression for the top male-only associated proteins was then performed as a sub-analysis. CCL21 was still significantly correlated with a diagnosis of CTE (OR = 1.206, p < 0.001), independently of age at death (OR = 0.944, p = 0.036). Additionally, IL13 again demonstrated significant ability to identify PSP (OR = 1.439, p = 0.001), independently of age at death (OR = 0.992, p = 0.850). Neither Cluster 2's IL23 (OR = 1.007, p = 0.206) nor CCL2 (OR = 1.006, p = 0.456) demonstrated a significant ability to identify AD. However, FLT3L was again able to predict AD in the male-only cases (OR = 1.208, p = 0.039), independently of age at death (OR = 1.102, p = 0.019). Finally, CXCL9 was not significant for CBD (OR = 0.946, p = 0.220) or AGD (OR = 0.987, p = 0.620).
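For readers who want to reproduce this style of analysis on their own data, the two statistics used above (AUC from a ROC analysis, and covariate-adjusted odds ratios from binary logistic regression) can be sketched as follows. All inputs are synthetic stand-ins, and the library calls (scikit-learn, statsmodels) are one reasonable choice rather than the study's actual software.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Toy stand-ins for one candidate marker: 'level' is a protein
# concentration, 'is_cte' a disease-vs-all-other-tauopathies label.
n = 127
is_cte = (rng.random(n) < 40 / 127).astype(int)
level = rng.normal(loc=10 + 4 * is_cte, scale=3, size=n)
age = rng.normal(70, 10, n)
male = rng.integers(0, 2, n)

# ROC: AUC > 0.5 means higher levels track the disease, AUC < 0.5 the
# reverse (as reported for GMCSF in CTE or IL17F in AD).
print(f"AUC = {roc_auc_score(is_cte, level):.3f}")

# Binary logistic regression with age and gender as covariates;
# exponentiated coefficients are the odds ratios quoted in the text.
design = sm.add_constant(np.column_stack([level, age, male]))
fit = sm.Logit(is_cte, design).fit(disp=0)
odds_ratios = np.exp(fit.params)
print("OR (marker, age, gender):", np.round(odds_ratios[1:], 3))
```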
CCL21 was able to identify CTE using postmortem CSF Finally, to examine if the results taken from the brain homogenate can be extrapolated to fluids such as CSF, postmortem CSF from individuals with AD and CTE was obtained and a CCL21 ELISA was performed. When comparing the total concentrations, individuals with CTE were observed to have significantly more CCL21 compared to individuals with AD ( p = 0.02) (Fig. 4 ). A sub-analysis using just the male cases also demonstrated that CCL21 trended towards elevation in CTE compared to AD ( p = 0.056) (Additional file 1 : Fig. S1). Fig. 4 CCL21 is elevated in the CSF in CTE. To validate PLS results and determine if candidate proteins can be used as fluid biomarkers, the top distinct protein for CTE, CCL21, was measured in postmortem CSF from individuals with AD and CTE. CCL21 concentrations were higher in CTE compared with AD as measured with a Mann–Whitney test (* p < 0.05). Each dot represents 1 case. Error bars represent mean ± SEM Discussion Here we have shown that the complex neuroinflammatory response that occurs during disease might provide additional insight into unique disease mechanisms. Additionally, the neuroinflammatory response might be a useful source of candidate biomarkers to help identify tauopathies during life. Using a 71 immune-related protein multiplex ELISA panel, a PLS regression model was able to identify 5 distinct clusters of proteins that each correlated with a unique tauopathy. CTE, AD, and PSP had the highest overall correlation with their respective clusters. Although CBD and AGD had the strongest correlation with Clusters 4 and 5, respectively, some overlap with other diseases was also observed. Using the top PLS hits, ROC and binary logistic regression analyses demonstrated that CCL21 was the strongest predictor of CTE, FLT3L for AD, and IL13 for PSP. The strongest predictor proteins were consistent in both mixed-gender and male-only sub-analyses. Since there was less specificity in the clusters that belonged to CBD and AGD, a strong biomarker candidate was not observed. Finally, validation of the top PLS candidate proteins was performed in postmortem CSF, and it was observed that CCL21 was increased in CTE compared to AD. Overall, this work demonstrates that the neuroinflammatory signatures that are present among distinct tauopathies can be used to distinguish between diseases and could be an important source of novel biomarkers to aid in-life diagnosis. Neurodegenerative pathologies are complex and might even change over the course of disease [ 27 ]. Therefore, it is likely that a panel of biomarkers, consisting of serum or CSF sampling, PET or MRI imaging, and clinical symptoms, will be needed to capture multiple diverse aspects of distinct neurodegenerative diseases. An important contribution of the current study is the identification of new candidate proteins that could be useful alongside other established biomarkers to help distinguish between neurodegenerative pathologies for a more specific in-life diagnosis. A panel consisting of CCL21, FLT3L, and IL-13 appeared to demonstrate significant power to distinguish between CTE, AD, and PSP, while CBD and AGD were more ambiguous. To that end, it was demonstrated that CCL21 was also increased in the CSF in CTE compared to AD, suggesting the brain homogenate findings could translate to fluids and be viable clinical biomarkers.
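The CSF comparison above rests on a two-sided Mann–Whitney U test; a minimal sketch with synthetic concentrations, sized to match the 14 CTE and 15 AD postmortem samples, is shown below. The values are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Illustrative CSF CCL21 concentrations (pg/mL); synthetic values
# sized to the 14 CTE / 15 AD postmortem samples described above.
ccl21_cte = rng.normal(120, 30, 14)
ccl21_ad = rng.normal(95, 30, 15)

# Two-sided Mann-Whitney U, a nonparametric test appropriate for
# small samples without a normality assumption.
stat, p = mannwhitneyu(ccl21_cte, ccl21_ad, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3f}")
```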
These results suggest that the inclusion of the candidate proteins into biomarker panels could serve as tools to identify and distinguish diseases in life. In addition to discovering possible novel biomarker candidates, by identifying specific sets of neuroinflammatory proteins linked to specific diseases, each cluster offers details on mechanistic processes that could be distinct among tauopathies. The added insight can help shape our understanding of pathogenesis and aid the design of disease-specific therapies. Cluster 1 was found to correlate strongly with CTE, and the top PLS hits were CCL21, CXCL5, CXCL13, GMCSF, and CCL17. Out of those top 5, CCL21 was the strongest driver of the cluster and the strongest predictor of any protein for its respective disease. CCL21, also known as 6CKine, is a chemokine that directs immune cell trafficking [ 28 , 29 ]. The CCL21/CCR7 signaling axis has been shown to be an important component of T cell extravasation into the brain [ 30 ]. CCL21 has also been shown to be a potent glial activator [ 31 ]. CCL21 has been associated with trauma-related injuries, suggesting the type of repetitive head trauma responsible for inducing CTE could be driving a distinct CCL21 response [ 31 , 32 ]. Interestingly, all the top 5 hits for CTE belong to the chemokine family of proteins, suggesting some of the strongest CTE immune signals are related to recruiting immune cells. Previous findings have also observed that the chemokine CCL11 (also called eotaxin) was elevated in CTE [ 19 ]. However, no increase in CCL11 was observed in the present study, likely due to the advanced age of all the cases, resulting in a ceiling effect. Although AD is a highly studied disease, comparing its unique inflammatory response to other related tauopathies is less common. FLT3L, IL15, IL17E/IL25, IL17F, and CCL17 were the top proteins found to be distinctive for AD in Cluster 2. The most highly correlated protein to AD was FLT3L. FLT3L has several known effects relating to cell proliferation, metabolism, and leukocyte activation [ 33 ]. Several reports have detailed the importance of FLT3L in dendritic cell development [ 33 ], further highlighting a potentially strong connection between AD and the immune system. Interestingly, FLT3L was also found to correlate with CSF tau concentration in cases of AD, Sjogren's syndrome, and fibromyalgia [ 34 ]. The proinflammatory cytokine IL15 has also been previously suggested as a possible AD-related biomarker, further validating the current findings [ 35 ]. After CCL21 in CTE, the correlation between IL13 and PSP was the next strongest. IL13 is an anti-inflammatory cytokine that is closely related to IL4 and shares several downstream pathways [ 36 ]. Typically, IL13 is involved in allergic inflammatory diseases or wound-healing events [ 36 ]. To our knowledge, there have been no previous reports of IL13 involvement in PSP. Anti-inflammatory proteins have been found to be part of the neuropathologic process in other diseases, such as AD, but they were believed to be part of a protective feedback pathway [ 37 ]. Additional work will be needed to better understand how IL13 might be involved in PSP pathogenesis. The final two diseases, CBD and AGD, demonstrated much less agreement with their respective clusters. ROC analysis for the top 5 PLS candidates for each disease did not reach significance, and only GMCSF correlated with AGD in a binary logistic regression.
Although CBD and AGD had the highest correlation with Clusters 4 and 5, respectively, Fig. 1 demonstrated that there appeared to be moderate correlation with the other diseases as well. This suggests that the PLS candidate proteins identified for CBD and AGD are not as specific. It is possible that the anterior cingulate cortex was less affected in these diseases, resulting in a more muted neurodegenerative response. Additionally, it is possible that CBD and AGD share more neuroimmune protein signatures with the other diseases as well. The present analysis does not compare absolute protein levels to a non-affected control case. Therefore, these results do not suggest that neurodegenerative proteins are not elevated in disease. Rather, they demonstrated that there was a lack of neuroimmune-related proteins that could specifically identify AGD or CBD compared with CTE, AD, or PSP. Future work will be needed to more thoroughly investigate what other proteins could act as biomarkers for AGD or CBD to better distinguish them. Although gender was controlled for during the PLS and binary logistic regressions, it is important to point out that the CTE group was composed entirely of males, which could skew results. This was due to the current lack of females diagnosed with CTE, as many of the donated samples were derived from individuals who played American football, a sport dominated by males. To account for this skew, a sub-analysis with just males was performed. The results for CTE compared with the other tauopathies using just males were consistent, with CCL21 as the most predictive CTE protein. This was also observed in the CSF sub-analysis. Additionally, IL13 was again the strongest predictive protein for PSP. However, when using just males, CCL2 and IL23 moved ahead of FLT3L for AD. When the ability of these new top 2 proteins to predict AD was investigated, they were not significant, whereas FLT3L was again predictive, consistent with the mixed-gender analysis. This suggested that while CCL2 and IL23 might have some additional role in AD males, it is likely that removing half of the samples severely limited the power of the analysis and added noise to the top associated proteins. Like the full mixed-gender analysis, the male-only sub-analysis did not find any predictive proteins for AGD or CBD. Therefore, since the most predictive proteins for CTE (CCL21), AD (FLT3L), and PSP (IL13) were consistent across mixed- and single-gender analyses, these results provide support that the findings were not significantly affected by the CTE male-only skew. Gender differences during neurodegeneration are an important topic and likely contributed to the results to some degree. However, a more comprehensive future study will be needed to tease apart the gender differences in a larger sample set. While this current study is one of the most comprehensive to date, looking at dozens of different proteins across 5 different diseases, there are likely many more differences not captured with the multiplex ELISA. More unbiased proteomic techniques like mass spectrometry would be useful to identify the full spectrum of differences present between diseases and identify additional targets. Absolute levels of each protein will also need to be compared to control cases to have a better understanding of changes during normal aging. Additionally, future studies will be needed to determine the cellular source of each protein to increase understanding of possible disease mechanisms and further validate the current findings.
Finally, the current study only examined one brain region, the anterior cingulate cortex. This region was selected as it was affected in all diseases and offered the best chance to identify ptau-related changes. However, the progression of ptau is distinct in each tauopathy. Therefore, future studies will be needed to examine the neuroimmune changes that occur in a region-by-region progression. Conclusion In conclusion, here we have shown that there are unique neuroinflammatory pathways present among tauopathies that can provide greater insight into distinct mechanisms of disease. Additionally, these results also suggest that neuroinflammatory proteins are a good source of possible novel biomarker candidates to distinguish between tauopathies. CCL21, FLT3L, and IL13 are novel candidates that could be useful in future work to help better understand and differentiate CTE, AD, and PSP. Although it is unclear how the absolute levels of the proposed candidates differ from non-disease control cases, the current study is significant as it suggests novel markers that can help differentiate between related diseases and add increased disease specificity to other biomarker panels. These candidate proteins will likely need to be used in conjunction with a panel of already established proteins (including Aβ and ptau), imaging studies such as PET and MRI, and clinical measures to truly identify and capture the complexity of each disease in life. Availability of data and materials The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. Abbreviations AD: Alzheimer's disease AGD: Argyrophilic grain disease AUC: Area under the curve CBD: Corticobasal degeneration CERAD: Consortium to Establish a Registry for Alzheimer's Disease CSF: Cerebrospinal fluid CTE: Chronic traumatic encephalopathy PLS: Partial least square PSP: Progressive supranuclear palsy ptau: Hyperphosphorylated tau ROC: Receiver operating characteristic SVD: Singular value decomposition
They analyzed the concentration of 71 different immune-related proteins and used a statistical technique to identify if specific clusters of proteins were most correlated with a specific disease. After determining a cluster of proteins for each disease, they took the top five proteins for each cluster to determine which protein was the strongest possible biomarker candidate. They then validated the initial results by comparing postmortem cerebrospinal fluid from individuals with CTE and AD. "We demonstrated that the protein CCL21 was able to distinguish CTE from AD in spinal fluid, further highlighting future use as a novel biomarker for in-life diagnoses. However, an important caveat is that it's likely there won't be a single 'magic protein' for any disease diagnosis. Ultimately, these proteins will be needed to be coupled with several other biomarkers, imaging studies, and clinical symptoms to really be effective. However, here we provided more targets that can help increase the overall specificity of diagnosis," he adds. According to the researchers, these results highlight how important the immune system is to neurodegenerative diseases. Additionally, they suggest that better understanding of the neuroinflammatory signature that is specific to individual neurodegenerative diseases may lead to identifying more specific biomarkers for all diseases and ultimately to the discovery of novel therapeutic compounds. These findings appear online in the Journal of Neuroinflammation. | 10.1186/s12974-022-02640-6
Medicine | Inflammatory bowel disease: Scientists zoom in on genetic culprits | Hailiang Huang et al. (2017) Fine-mapping inflammatory bowel disease loci to single variant resolution. Nature. DOI: 10.1038/nature22969 Journal information: Nature | http://dx.doi.org/10.1038/nature22969 | https://medicalxpress.com/news/2017-06-inflammatory-bowel-disease-scientists-genetic.html | Abstract Inflammatory bowel diseases are chronic gastrointestinal inflammatory disorders that affect millions of people worldwide. Genome-wide association studies have identified 200 inflammatory bowel disease-associated loci, but few have been conclusively resolved to specific functional variants. Here we report fine-mapping of 94 inflammatory bowel disease loci using high-density genotyping in 67,852 individuals. We pinpoint 18 associations to a single causal variant with greater than 95% certainty, and an additional 27 associations to a single variant with greater than 50% certainty. These 45 variants are significantly enriched for protein-coding changes ( n = 13), direct disruption of transcription-factor binding sites ( n = 3), and tissue-specific epigenetic marks ( n = 10), with the last category showing enrichment in specific immune cells among associations stronger in Crohn’s disease and in gut mucosa among associations stronger in ulcerative colitis. The results of this study suggest that high-resolution fine-mapping in large samples can convert many discoveries from genome-wide association studies into statistically convincing causal variants, providing a powerful substrate for experimental elucidation of disease mechanisms. Main Inflammatory bowel diseases (IBDs) are a group of chronic, debilitating disorders of the gastrointestinal tract with peak onset in adolescence and early adulthood. More than 1.4 million people are affected in the USA alone 1 , with an estimated direct healthcare cost of US$6.3 billion per year. IBD affects millions worldwide, and is rising in prevalence, particularly in paediatric and non-European ancestry populations 2 . IBD has two subtypes, ulcerative colitis and Crohn’s disease, which have distinct presentations and treatment courses. So far, 200 genomic loci have been associated with IBD 3 , 4 , but only a handful have been conclusively ascribed to a specific causal variant with direct insight into the underlying disease biology. This scenario is common to all genetically complex diseases, where the pace of identifying associated loci outstrips that of defining specific molecular mechanisms and extracting biological insight from each association. The widespread correlation structure of the human genome (known as linkage disequilibrium) often results in similar evidence for association among many neighbouring variants. However, unless linkage disequilibrium is perfect ( r 2 = 1), it is possible, with a sufficiently large sample size, to statistically resolve causal variants from neighbours even at high levels of correlation ( Extended Data Fig. 1 and ref. 5 ). Novel statistical approaches applied to very large datasets that address this problem 6 require that the highly correlated variants are directly genotyped or imputed with certainty. Truly high-resolution mapping data, when combined with increasingly sophisticated and comprehensive public databases annotating the putative regulatory function of DNA variants, are likely to reveal novel insights into disease pathogenesis 7 , 8 , 9 and the mechanisms of disease-associated variants. 
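The point about resolving highly correlated variants can be made concrete with a back-of-envelope calculation: a proxy in linkage disequilibrium r² with the causal variant carries an expected association chi-square noncentrality scaled by r², so the expected gap in evidence between the two grows with sample size even when r² is high. The sketch below is illustrative only; the per-sample effect size is an arbitrary placeholder and this is not the power calculation used in the paper.

```python
import numpy as np
from scipy.stats import chi2

# For a causal variant with chi-square noncentrality ncp, a proxy at
# correlation r^2 has expected noncentrality r^2 * ncp; the expected
# gap in -log10(p) therefore widens with N.
per_sample_ncp = 5e-4   # arbitrary effect-size scale for illustration

def expected_log10p(ncp, df=1):
    """-log10 p evaluated at the expected chi-square statistic (df + ncp)."""
    return -np.log10(chi2.sf(df + ncp, df))

for r2 in (0.80, 0.95, 0.99):
    for n in (5_000, 20_000, 67_852):
        ncp = per_sample_ncp * n
        gap = expected_log10p(ncp) - expected_log10p(r2 * ncp)
        print(f"r2={r2:.2f} N={n:>6}: gap in -log10(p) ~ {gap:.1f}")
```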
Genetic architecture of associated loci We genotyped 67,852 individuals of European ancestry, including 33,595 with IBD (18,967 Crohn’s disease and 14,628 ulcerative colitis) and 34,257 healthy controls using the Illumina ImmunoChip ( Extended Data Table 1 ). This genotyping array was designed to include all known variants from European individuals in the February 2010 release of the 1000 Genomes Project 10 , 11 in 187 high-density regions known to be associated with one or more immune-mediated diseases 12 . Because fine-mapping uses subtle differences in strength of association between tightly correlated variants to infer which is most likely to be causal, it is particularly sensitive to data quality. We therefore performed stringent quality control to remove genotyping errors and batch effects (Methods). We imputed into this dataset from the 1000 Genomes Project reference panel 13 , 14 to fill in variants missing from the ImmunoChip, or filtered out by our quality control ( Extended Data Fig. 2 ). We then evaluated the 97 high-density regions that had previous IBD associations 3 and contained at least one variant showing significant association (Methods) in this dataset. The major histocompatibility complex was excluded from these analyses as fine-mapping has been reported elsewhere 15 . We applied three complementary Bayesian fine-mapping methods that used different priors and model selection strategies to identify independent association signals within a region, and to assign a posterior probability of causality to each variant ( Supplementary Methods and Extended Data Fig. 2 ). For each independent signal detected by each method, we sorted all variants by the posterior probability of association, and added variants to the ‘credible set’ of associated variants until the sum of their posterior probability exceeded 95%: that is, the credible set contained the minimum list of DNA variants that were >95% likely to contain the causal variant ( Fig. 1 ). These sets ranged in size from 1 to >400 variants. We merged these results and subsequently focused only on signals where an overlapping credible set of variants was identified by at least two of the three methods and all variants were either directly genotyped or imputed with INFO score >0.4 (Methods and Fig. 1 ). Figure 1: Fine-mapping procedure and output using the SMAD3 region as an example. a , (1) We merge overlapping signals across methods; (2) we select a lead variant (black triangle) and phenotype (colour); and (3) we choose the best model. Details for each step are available in Methods. b , Example fine-mapping output. This region has been mapped to two independent signals. For each signal, we report the phenotype it is associated with (coloured), the variants in the credible set, and their posterior probabilities. AF, allele frequency. PowerPoint slide Full size image In 3 out of 97 regions, a consistent credible set could not be identified; when multiple independent effects exist in a region of very high linkage disequilibrium, multiple distinct fine-mapping solutions may not be distinguishable ( Supplementary Note ). Sixty-eight of the remaining 94 regions contained a single association, while 26 harboured 2 or more independent signals, for a total of 139 independent associations defined across the 94 regions ( Fig. 2a ). Only IL23R and NOD2 (both previously established to contain multiple associated protein-coding variants 16 ) contained more than three independent signals. 
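A 95% credible set of the kind described above can be built in a few lines once per-variant posterior probabilities are in hand. The sketch below derives toy posteriors from hypothetical Bayes factors under an equal prior across variants, which is a simplification of the three Bayesian methods actually combined here; the planted signal and variant count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy log10 Bayes factors for 200 variants in one region, with one
# strong signal planted at index 17.
log10_bf = rng.normal(0, 1, 200)
log10_bf[17] = 6.0

bf = 10.0 ** (log10_bf - log10_bf.max())   # rescale for numerical safety
posterior = bf / bf.sum()                  # equal priors across variants

# Sort by posterior and accumulate until the set covers >= 95% of the
# probability mass; the result is the minimum list of variants that is
# 95% likely to contain the causal variant.
order = np.argsort(posterior)[::-1]
cumulative = np.cumsum(posterior[order])
set_size = int(np.searchsorted(cumulative, 0.95) + 1)
credible_set = order[:set_size]
print(f"credible set: {set_size} variant(s), top posterior = "
      f"{posterior[order[0]]:.3f}")
```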
Consistent with previous reports 3 , the vast majority of signals were associated with both Crohn’s disease and ulcerative colitis, although many of these had a significantly stronger association with one subtype. For the enrichment analyses below, we compared 79 signals that were more strongly associated with Crohn’s disease with 23 signals that were more strongly associated with ulcerative colitis (the remaining 37 were equally associated with both subtypes, Supplementary Table 1 ). Figure 2: Summary of fine-mapped associations. a , Independent signals. Sixty-eight loci containing one association and 26 loci containing multiple associations. b , Number of variants in credible sets. Eighteen associations were fine-mapped to a single variant, and 116 to ≤50 variants. c , Distribution of the posterior probability of the variants in credible sets having ≤50 variants. PowerPoint slide Full size image Using a restricted maximum likelihood mixed-model approach 17 , we evaluated the proportion of total variance in disease risk attributed to these 94 regions and how much of that was explained by the 139 specific associations. We estimated that 25% of Crohn’s disease risk was explained by the specific associations described here, out of a total of 28% explained by these loci (correspondingly for ulcerative colitis: 17% out of 22%). The single strongest signals in each region contributed 76% of this variance explained and the remaining associations contributed 24% ( Extended Data Fig. 3 ), highlighting the importance of secondary and tertiary associations in results from genome-wide association studies (GWAS) 15 , 18 . Associations mapped to a single variant For 18 signals, the 95% credible set consisted of a single variant (‘single variant credible sets’), and for 24 others the credible set consisted of 2–5 variants ( Fig. 2b ). The single-variant credible sets included five previously reported coding variants: three in NOD2 (fs1007insC, R702W, G908R), a rare protective allele in IL23R (V362I), and a splice variant in CARD9 (c.IVS11+1G>C) 16 , 19 . The remaining single-variant credible sets comprised three missense variants (I170V in SMAD3 , I923V in IFIH1 , and N289S in NOD2 ), four intronic variants (in IL2RA , LRRK2 , NOD2 , and RTEL1 / TNFRSF6B ), and six intergenic variants (located 3.7 kilobases (kb) downstream of GPR35 ; 3.9 kb upstream of PRDM1 ; within a EP300 binding site 39.9 kb upstream of IKZF1 ; 500 base pairs (bp) before the transcription start site of JAK2 ; 9.4 kb upstream of NKX2-3 ; and 3.5 kb downstream from HNF4A ) ( Table 1 ). Of note, while physical proximity did not guarantee functional relevance, the credible set of variants for 30 associated loci now implicated a specific gene either because it resided within 50 kb of only that gene or had a coding variant with >50% probability—improved from only 3 so refined using an earlier HapMap-based definition. Using the same definitions, the total number of potential candidate genes was reduced from 669 to 233. Examples of IBD candidate genes clearly prioritized in our data are described in the Supplementary Box , and a customizable browser ( ) is available to review the detailed fine-mapping results. Table 1 Variants having posterior probability >50% Full size table Associated protein-coding variants We first annotated the possible functional consequences of the IBD variants by their effect on the amino-acid sequences of proteins. Thirteen out of 45 variants ( Fig. 
2c ) that had >50% posterior probability were non-synonymous ( Table 1 ), an 18-fold enrichment (enrichment P = 2 × 10 −13 , Fisher’s exact test) relative to randomly drawn variants in our regions ( Fig. 3a ). By contrast, only one variant with >50% probability was synonymous (enrichment P = 0.42). All common coding variants previously reported to affect IBD risk were included in a 95% credible set, including IL23R (R381Q, V362I, and G149R); CARD9 (c.IVS11+1G>C and S12N); NOD2 (S431L, R702W, V793M, N852S, and G908R, fs1007insC); ATG16L1 (T300A); PTPN22 (R620W); and FUT2 (W154X). While this enrichment of coding variation ( Fig. 3a ) provided assurance about the accuracy of our approach, it did not suggest that 30% of all associations were caused by coding variants; rather, it was almost certainly the case that associated coding variants had stronger effect sizes, making them easier to fine-map. Figure 3: Functional annotation of causal variants. a , Proportion of credible variants that are protein-coding, disrupt/create TFBS or are synonymous, sorted by posterior probability. b , Epigenetic peaks overlapping credible variants in cell and tissue types from the Roadmap Epigenomics Consortium 39 . Significant enrichment has been marked with asterisks. Proportion of credible variants that overlap ( c ) core immune peaks for H4K4me1 or ( d ) core gut peaks for H3K27ac (Methods). In a , c and d , the vertical dotted lines mark 50% posterior probability and the horizontal dashed lines show the background proportions of each functional category. PowerPoint slide Full size image Associated non-coding variants We next examined conserved nucleotides in high-confidence binding-site motifs of 84 transcription factor families 20 (Methods). There was a significant positive correlation between transcription-factor motif disruption and IBD association posterior probability ( P = 0.006, logistic regression) ( Fig. 3a ), including three variants with >50% probability (two >95%). In the RTEL1/TNFRSF6B region, rs6062496 is predicted to disrupt a transcription-factor binding motif site (TFBS) for EBF1, a transcription factor involved in the maintenance of B-cell identity and prevention of alternative fates in committed cells 21 . A low-frequency (3.6%) protective allele at rs74465132 creates a binding site for EP300 less than 40 kilobase pairs (kbp) upstream of IKZF1 . The third notable example of TFBS disruption, although not in a single variant credible set, is detailed in the Supplementary Box for the association at SMAD3 . Recent studies have shown that trait-associated variants are enriched for epigenetic marks highlighting cell-type-specific regulatory regions 9 , 22 , 23 . We compared our credible sets with ChIP-seq peaks (chromatin immunoprecipitation followed by sequencing) corresponding to ChIP with H3K4me1, H3K4me3, and H3K27ac (shown previously 22 , 23 to highlight enhancers, promoters, and active regulatory elements, respectively) in 120 adult and fetal tissues, assayed by the Roadmap Epigenomics Mapping Consortium 24 ( Fig. 3b ). Using a threshold of P = 1.3 × 10 −4 (0.05 corrected for 360 tests), we observed significant enrichment of H3K4me1 in 6 immune cell types and for H3K27ac in 2 gastrointestinal (gut) samples (sigmoid colon and rectal mucosa) ( Fig. 3b and Supplementary Table 2 ). 
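The enrichment tests in this section reduce to 2×2 contingency tables. For instance, the excess of non-synonymous variants among high-posterior credible variants can be tested with Fisher's exact test as sketched below; only the 13-of-45 numerator comes from the text, while the background counts are hypothetical placeholders standing in for all variants across the fine-mapped regions.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = (high-posterior credible variants, background
# variants), columns = (non-synonymous, other). Background cells are
# illustrative, chosen so the proportions differ roughly 18-fold.
table = [[13, 45 - 13],         # >50% posterior: coding vs not
         [320, 20_000 - 320]]   # background (hypothetical counts)
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio ~ {odds:.1f}, p = {p:.1e}")
```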
The subset of signals that were more strongly associated with Crohn’s disease overlapped more with immune-cell chromatin peaks, whereas ulcerative colitis signals overlapped more with gut chromatin peaks ( Supplementary Table 2 ). These three chromatin marks were correlated both within tissues (we observed additional signal in other marks in the tissues described above) and across related tissues. We therefore defined a set of ‘core immune peaks’ for H3K4me1 and ‘core gut peaks’ for H3K27ac as the set of overlapping peaks in all enriched immune cell and gut tissue types, respectively. These two sets of peaks were independently significant and captured the observed enrichment compared with ‘control peaks’ made up of the same number of ChIP-seq peaks across our 94 regions in non-immune and non-gut tissues ( Fig. 3c, d ). These two tracks summarized our epigenetic-GWAS overlap signal, and the combined excess over the baseline suggested that a substantial number of regions, particularly those not mapped to coding variants, may ultimately be explained by functional variation in recognizable enhancer/promoter elements. Overlap with expression quantitative trait loci Variants that change enhancer or promoter activity might change gene expression, and baseline expression of many genes has been found to be regulated by genetic variation 25 , 26 , 27 . Indeed, it has been suggested that these so-called expression quantitative trait loci (eQTLs) underlie a large proportion of GWAS associations 25 , 28 . We therefore searched for variants that were both in an IBD-associated credible set with 50 or fewer variants, and the most significantly associated eQTL variant for a gene in a study 29 of peripheral blood mononuclear cells (PBMCs) from 2,752 twins. Sixty-eight of the 76 regions with signals fine-mapped to ≤50 variants harboured at least one significant eQTL (affecting a gene within 1 megabase with P < 10 −5 ). Despite this abundance of eQTLs in fine-mapped regions, only 3 credible sets included the most significantly associated eQTL variants, compared with 3.7 expected by chance (Methods). Data from a more recent study 30 using PBMCs from 8,086 individuals did not yield a substantively different outcome, demonstrating a modest but non-significant enrichment (8 observed overlaps, 4.2 expected by chance, P = 0.06). Using a more lenient definition of overlap, requiring the lead eQTL variant to be in linkage disequilibrium ( R 2 > 0.4) with an IBD credible set variant, increased the number of potential overlaps but again these numbers were not greater than chance expectation. As PBMCs are a heterogeneous collection of immune cell populations, cell-type-specific signals or signals corresponding to genes expressed most prominently in non-immune tissues may be missed. We therefore tested the enrichment of eQTLs that overlapped credible sets in five primary immune cell populations (CD4 + , CD8 + , CD19 + , CD14 + , and CD15 + ), platelets, and three distinct intestinal locations (rectum, colon, and ileum) isolated from 350 healthy individuals (Methods). We observed a significant enrichment of credible single nucleotide polymorphism (SNP)/eQTL overlaps in CD4 + cells and ileum ( Extended Data Table 2 ): 3 and 2 credible sets overlapped eQTLs, respectively, compared with 0.4 and 0.3 expected by chance ( P = 0.005 and 0.020). An enrichment was also observed for the naive CD14 + cells from another study 31 : 8 overlaps observed compared with 2.7 expected by chance ( P = 0.001). 
We did not observe enrichment of overlaps in stimulated (with interferon or lipopolysaccharide) CD14 + cells from the same source ( Extended Data Table 2 ). We investigated eQTL overlaps more deeply by applying two co-localization approaches (one frequentist, one Bayesian; Methods) to our cell-separated dataset, where primary genotype and expression data were available. We confirmed the greater than expected overlap with eQTLs in CD4 + and ileum described above ( Fig. 4 and Extended Data Table 2 ). These CD4 + co-localized eQTLs also had stronger overlap with CD4 + ChIP-seq peaks than our other credible sets, further supporting a regulatory causal mechanism. The number of co-localizations in other purified cell types and tissues was largely indistinguishable from what we expected under the null using either method, except for moderate enrichment in rectum (4 observed and 1.4 expected, P = 0.039, frequentist approach) and colon (3 observed and 0.8 expected, P = 0.04, Bayesian approach). Only two of these co-localizations corresponded to an IBD variant with causal probability >50% ( Table 1 and Extended Data Fig. 4a ). Figure 4: Number of credible sets that co-localize with eQTLs. Distributions of the number of co-localizations by chance (violins) and observed numbers of co-localizations with P values (dots). Both the background and the observed numbers were calculated using the ‘Frequentist co-localization using conditional P values’ approach (Methods). Discussion We have performed fine-mapping of 94 previously reported genetic risk loci for IBD. Rigorous quality control followed by an integration of three novel fine-mapping methods generated lists of genetic variants accounting for 139 independent associations across these loci. Our methods are concordant with an existing fine-mapping method 6 (67 of 68 credible sets in single-signal regions overlap, including exact matches for all single-variant credible sets), and provide extensions to support the phenotype assignment (Crohn’s disease, ulcerative colitis, or IBD) and the conditional estimation of multiple credible sets in loci with multiple independent signals. The use of multiple methods allowed us to focus our downstream analyses on loci where the choice of fine-mapping method did not substantially alter conclusions about the biology of IBD. Our results improve on previous fine-mapping efforts using a preset linkage disequilibrium threshold 32 (for example, r 2 > 0.6) ( Extended Data Fig. 5 ) by formally modelling the posterior probability of association of every variant. Much of this resolution derives from the very large sample size we used, because the number of variants in a credible set decreases with increasing significance ( P = 0.0069). The high density of genotyping also aids resolution. For instance, the primary association at IL2RA has now been mapped to a single variant associated with Crohn’s disease, rs61839660. This variant was not present in the HapMap 3 reference panel and was therefore not reported in earlier studies 3 , 33 (nearby tagging variants, rs12722489 and rs12722515, were reported instead). Imputation using the 1000 Genomes Project reference panel and the largest assembled GWAS dataset 3 did not separate rs61839660 from its neighbours (H.H., unpublished observations), owing to the loss of information in imputation using the limited reference. Only direct genotyping, available in the ImmunoChip high-density regions, allowed the conclusive identification of the causal variant.
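For readers less familiar with the machinery used throughout: a 95% credible set is simply the smallest set of variants whose posterior probabilities of causality sum to at least 0.95. A minimal sketch of that construction (the variant names and posterior values below are invented for illustration):

```python
def credible_set(posteriors, coverage=0.95):
    """Smallest set of variants whose posteriors sum to >= coverage.

    posteriors: dict mapping variant id -> posterior probability of being
    causal for one association signal (assumed to sum to ~1).
    """
    total, chosen = 0.0, []
    for variant, prob in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        chosen.append(variant)
        total += prob
        if total >= coverage:
            break
    return chosen

# Toy example: one strongly supported variant plus background noise.
print(credible_set({"rs61839660": 0.97, "rs12722489": 0.02, "rs12722515": 0.01}))
```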
Accurate fine-mapping should, in many instances, ultimately point to the same variant across diseases in shared loci. Among our single-variant credible sets, we fine-mapped an ulcerative colitis association to a rare missense variant (I923V) in IFIH1 , which is also associated with type 1 diabetes 34 with an opposite direction of effect ( Supplementary Box ). The intronic variant noted above (rs61839660, allele frequency = 9%) in IL2RA was also similarly associated with type 1 diabetes, again with a discordant directional effect 35 ( Supplementary Box ). Simultaneous high-resolution fine-mapping in multiple diseases should therefore better clarify both shared and distinct biology. Resolution of fine-mapping can be further improved by leveraging linkage disequilibrium from other ethnicities 36 . However, the sample size we have collected from other ethnicities is small compared with our European samples (9,846 across East Asian, South Asian, and Middle Eastern cohorts). Limited access to matched imputation reference panels from all cohorts and the fact that the smaller non-European sets are not from populations (for example, African-derived) with narrower linkage disequilibrium also suggest that gains in fine-mapping accuracy would be limited at this time. Ultimately this effort will be aided by more substantial investment in genotyping non-European population samples and by developing and applying more robust trans-ethnic fine-mapping algorithms. New releases of the 1000 Genomes (phase 3) 37 and UK10K 38 projects have introduced variants that were not present in the reference panel used in our study. Our major findings remain the same using this new reference panel: the 18 single-variant credible sets are not in high linkage disequilibrium ( r 2 > 0.95) with any new variants in either new dataset, and the 1,426 variants in IBD associations mapped to ≤50 variants are in high linkage disequilibrium with only 47 new variants (3.3% of the total size of these credible sets, Supplementary Table 1 ). Given that this release represents a near-complete catalogue of variants with minor allele frequency (MAF) > 1% in European populations, we believe our current fine-mapping results are likely to be robust, especially for common variant associations. High-resolution fine-mapping demonstrates that causal variants are significantly enriched for variants that alter protein-coding sequences or disrupt transcription-factor binding motifs. Enrichment was also observed in H3K4me1 marks in immune-related cell types and H3K27ac marks in sigmoid colon and rectal mucosal tissues, with Crohn’s disease loci demonstrating a stronger immune signature and ulcerative colitis loci more enriched for gut tissues ( P values 0.014, 0.0005, and 0.0013, respectively, for H3K4me1, H3K27ac, and H3K4me3; χ 2 test). By contrast, overall enrichment of eQTLs is quite modest compared with previous reports and not seen strongly in excess of chance in our well-refined credible sets (≤50 variants). This result emphasizes the importance of high-resolution mapping and the careful incorporation of the high background rate of eQTLs. It is worth noting that evaluating the overlap between two distinct mapping results is fundamentally different from comparing genetic mapping results to fixed genomic features, and depends on both mappings being well resolved.
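The linkage-disequilibrium checks quoted above ( r 2 > 0.95, and similar thresholds elsewhere) reduce to squared correlations between allele-dosage vectors. A minimal sketch of that calculation (the dosage vectors are invented for illustration):

```python
import numpy as np

def ld_r2(dosages_a, dosages_b):
    """Squared Pearson correlation between two variants' allele dosages,
    the r^2 measure used for linkage-disequilibrium thresholds."""
    return float(np.corrcoef(dosages_a, dosages_b)[0, 1] ** 2)

g1 = np.array([0, 1, 2, 1, 0, 2, 1, 0])  # hypothetical dosages, variant 1
g2 = np.array([0, 1, 2, 1, 0, 2, 0, 0])  # hypothetical dosages, variant 2
print(f"r^2 = {ld_r2(g1, g2):.2f}")
```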
Although these data challenge the hypothesis that easily surveyed baseline eQTLs explain a large proportion of non-coding GWAS signals, the modest excesses observed in smaller but cell-specific datasets suggest that much larger tissue- or cell-specific studies (and under the correct stimuli or developmental time points) will resolve the contribution of eQTLs to GWAS hits. Resolving multiple independent associations may often help target the causal gene more precisely. For example, the SMAD3 locus hosts a non-synonymous variant and a variant disrupting a conserved transcription-factor binding site (also overlapping the H3K27ac mark in gut tissues), unambiguously establishing a role in disease and providing an allelic series for further experimental inquiry. Similarly, the TYK2 locus has been mapped to a non-synonymous variant and a variant disrupting a conserved transcription-factor binding site ( ). One hundred and sixteen associations have been fine-mapped to ≤50 variants. Among them, 27 associations contain coding variants, 20 contain variants disrupting transcription-factor binding motifs, and 45 are within histone H3K4me1- or H3K27ac-marked DNA regions. The best-resolved associations, the 45 variants having >50% posterior probabilities for being causal ( Table 1 ), are similarly significantly enriched for variants with known or presumed function from genome annotation. Of these, 13 variants cause non-synonymous amino-acid changes, 3 disrupt a conserved transcription-factor binding motif, 10 are within histone H3K4me1- or H3K27ac-marked DNA regions in disease-relevant tissues, and 2 co-localize with a significant cis-eQTL ( Extended Data Fig. 4a ). Risk alleles of these variants can be found throughout the allele frequency spectrum, with protein-coding variants having somewhat larger effects and more extreme risk allele frequencies ( Extended Data Fig. 6a–c ). This analysis, however, leaves 21 non-coding variants ( Extended Data Fig. 4b ), all of which have >50% probabilities of being causal (five have >95%), that are not located within known motifs, annotated elements, or in any experimentally determined ChIP-seq peaks or eQTL credible sets yet discovered. While we have identified a statistically compelling set of genuine associations (often intronic or within 10 kb of strong candidate genes), we can make little inference about function. For example, the intronic single-variant credible set of LRRK2 has no annotation, eQTL, or ChIP-seq peak of note. This emphasizes the incompleteness of our knowledge about the function of non-coding DNA and its role in disease, and calls for comprehensive studies on transcriptomes and epigenomes in a wide range of cell lines and stimulation conditions. That most of the best-refined non-coding associations have no available annotation is perhaps sobering with respect to how well we may currently be able to interpret non-coding variation in medical sequencing efforts. It does suggest, however, that detailed fine-mapping of GWAS signals down to single variants, combined with emerging high-throughput genome-editing methodology, may be among the most effective ways of advancing toward a greater understanding of the biology of the non-coding genome. Methods The study protocols were approved by the institutional review board at each centre involved with recruitment. Informed consent and permission to share the data were obtained from all subjects, in compliance with the guidelines specified by the recruiting centre’s institutional review board.
No statistical methods were used to predetermine sample size. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. Genotyping and quality control We genotyped 35,197 unaffected and 35,346 affected individuals (20,155 Crohn’s disease and 15,191 ulcerative colitis) using the ImmunoChip array. Genotypes were called using optiCall 40 for 192,402 autosomal variants before quality control. We removed variants with missing data rate >2% across the whole dataset, or >10% in any one batch, and variants that failed (false discovery rate (FDR) < 10 −5 in either the whole dataset or at least two batches) tests for the following: (1) Hardy–Weinberg equilibrium in controls; (2) differential missingness between cases and controls; (3) different allele frequency across different batches in controls, Crohn’s disease, or ulcerative colitis. We also removed non-coding variants that were present in the 1000 Genomes pilot stage but were not in the subsequent phase I integrated variant set (March 2012 release) and had not been in releases 2 or 3 of HapMap, as these mostly represented false positives from the 1000 Genomes pilot, which often genotype poorly. Where a variant failed in exactly one batch, we set all genotypes to missing for that batch (to be reimputed later) and included the site if it passed in the remainder of the batches. We removed individuals that had >2% missing data, had significantly higher or lower (defined as FDR < 0.01) inbreeding coefficient ( F ), or were duplicated or related (PI_HAT ≥ 0.4, calculated from the linkage disequilibrium pruned dataset described below), by sequentially removing the individual with the largest number of related samples until no related samples remained. We projected all remaining samples onto principal component axes generated from HapMap 3, and classified their ancestry using a Gaussian mixture model fitted to the European (CEU + TSI), African (YRI + LWK + ASW + MKK), East Asian (CHB + JPT), and South Asian (GIH) HapMap samples (CEU, Utah residents with Northern and Western European ancestry from the CEPH collection; TSI, Toscani in Italia; YRI, Yoruba in Ibadan, Nigeria; LWK, Luhya in Webuye, Kenya; ASW, African ancestry in southwest USA; MKK, Maasai in Kinyawa, Kenya; CHB, Han Chinese in Beijing, China; JPT, Japanese in Tokyo, Japan; GIH, Gujarati Indians in Houston, Texas). We removed all samples that were classified as non-European, or that lay more than 8 standard deviations from the European cluster. After quality control, there were 67,852 European-derived samples with valid diagnoses (healthy control, Crohn’s disease, or ulcerative colitis), and 161,681 genotyped variants available for downstream analyses. Linkage disequilibrium pruning and principal component analysis From the clean dataset we removed variants in long-range linkage disequilibrium 41 or with MAF <0.05, and then pruned three times using the ‘–indep’ option in PLINK (with window size of 50, step size of 5 and VIF threshold of 1.25). Principal component axes were generated within controls using this linkage disequilibrium pruned dataset (18,123 variants). The axes were then projected to cases to generate the principal components for all samples. The analysis was performed using our in-house C code ( ) and LAPACK package 42 for efficiency. 
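A minimal sketch of the ancestry filter just described: fit one Gaussian per labelled HapMap reference population in principal-component space, assign each study sample to its most likely cluster, and keep only samples that both classify as European and lie within 8 standard deviations of the European centre. The coordinates below are simulated stand-ins, and fitting one Gaussian per labelled population is a simplification of the mixture-model fit described above:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Hypothetical principal-component coordinates for labelled HapMap samples.
reference = {
    "EUR": rng.normal([0.01, 0.02], 0.005, size=(100, 2)),
    "AFR": rng.normal([0.20, -0.05], 0.005, size=(100, 2)),
    "EAS": rng.normal([-0.15, 0.10], 0.005, size=(100, 2)),
}
# One Gaussian per labelled population: (mean vector, covariance matrix).
clusters = {pop: (pcs.mean(axis=0), np.cov(pcs, rowvar=False))
            for pop, pcs in reference.items()}

def keep_sample(pcs, sd_limit=8.0):
    """True if a projected sample classifies as European and lies within
    sd_limit standard deviations (Mahalanobis distance) of the EUR centre."""
    pop = max(clusters, key=lambda p: multivariate_normal.logpdf(
        pcs, mean=clusters[p][0], cov=clusters[p][1]))
    mean, cov = clusters["EUR"]
    d = pcs - mean
    mahalanobis = float(np.sqrt(d @ np.linalg.inv(cov) @ d))
    return pop == "EUR" and mahalanobis <= sd_limit

print(keep_sample(np.array([0.012, 0.018])))   # True: inside the EUR cluster
print(keep_sample(np.array([0.20, -0.05])))    # False: classifies as AFR
```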
Controlling for population structure, batch effects, and other confounders We used 2,853 ‘background SNPs’ present on the ImmunoChip but not known to be associated with immune disorders to calculate the genomic inflation factor λ GC. After including the first five principal components calculated above as covariates, λ GC = 1.29, 1.25, and 1.31 for Crohn’s disease, ulcerative colitis, and IBD (adding additional principal components did not further reduce λ GC , Extended Data Table 3a ). Because our genotype data were processed in 15 batches with variable ratios of cases to controls, we conducted two analyses to ensure possible batch effects were adequately controlled. First, we split the samples into a ‘balanced’ cohort with studies having both cases and controls, and an ‘imbalanced’ cohort with studies having exclusively cases or controls ( Extended Data Table 1 ). As λ GC under polygenic inheritance scales with the sample size 43 , we randomly down-sampled the full dataset to match the sample size of the balanced and the imbalanced cohorts, respectively. We tested for association in these subsets of our data (and included batch identifier as a covariate in the balanced cohort), and found the λ GC from the balanced and imbalanced cohorts to be within the 95% confidence interval of size-matched values from our full data, suggesting that batch effects were not systematically inflating our association statistics ( Extended Data Table 3b ). We also performed a heterogeneity test for the odds ratio of lead variants in each credible set using the balanced and imbalanced cohorts, and observed no significant heterogeneity after Bonferroni correction ( Supplementary Table 3 ). We next sought to disentangle the contributions of polygenic inheritance and uncorrected population structure in our observed λ GC. Linkage disequilibrium score regression 44 is able to differentiate these two effects, but requires genome-wide data, so was not possible in our ImmunoChip dataset. Instead, we compared λ GC and λ 1000 values calculated using the same set of background SNPs from the largest IBD meta-analysis with genome-wide data 45 . For both Crohn’s disease and ulcerative colitis, the λ 1000 values in our ImmunoChip study (1.012 and 1.012) were equal to or less than those in the genome-wide study (1.016 and 1.012). Furthermore, linkage disequilibrium score regression on the genome-wide data showed that most inflation was caused by polygenic risk (linkage disequilibrium score intercept = 1.09 for both Crohn’s disease and ulcerative colitis, compared with λ GC = 1.23 and 1.29). Together, these results show that our residual inflation is consistent with polygenic signal and modest residual confounding. We tested what effect correcting for the linkage disequilibrium score intercept of 1.09 would have on posterior probabilities and credible sets, and found no major differences compared with uncorrected values. The full comparison of λ values is shown in Extended Data Table 3c . Imputation Imputation was performed separately in each ImmunoChip autosomal high-density region (185 total) from the 1000 Genomes phase I integrated haplotype reference panel. To prevent the edge effect, we extended each side of the high-density regions by 50 kbp. Two imputations were performed sequentially ( Extended Data Fig. 2 ) using software and parameters as described below. 
The first imputation was performed immediately after the quality control, and its major results were manually inspected (‘Manual cluster plot inspection’ section in Methods). The second imputation was performed after removing variants that failed the manual cluster plot inspection. We used SHAPEIT 46 , 47 (versions: first imputation, v2.r644; second imputation, v2.r769) to pre-phase the genotypes, followed by IMPUTE2 13 , 14 (versions: first, 2.2.2; second, 2.3.0) to perform the imputation. The reference panels were downloaded from the IMPUTE2 website (first, March 2012 release; second, December 2013 release). After the second imputation, there were 388,432 variants with good imputation quality (INFO > 0.4). These include 99.9% of variants with MAF ≥ 0.05, 99.3% of variants with 0.05 > MAF ≥ 0.01, and 63.0% of variants with MAF < 0.01 ( Extended Data Fig. 6d–f ), with similar success rates both for coding and for non-coding variants, making it unlikely that missing variants substantially affected our fine-mapping conclusions. Manual cluster plot inspection Variants that had posterior probability greater than 50% or that were in credible sets mapped to ≤10 variants were manually inspected using Evoker version 2.2 (ref. 48 ). Each variant was inspected by three independent reviewers (ten reviewers participated) and scored as pass, fail, or maybe. Reviewers were blinded to the posterior probability of these variants. We removed variants that received one or more fails, or received fewer than two passes. Two hundred and twenty out of 276 inspected variants passed this inspection, and 53 out of 56 failed variants were restored by imputation. There was no difference in MAF between the failed and the passed variants ( P = 0.66). A further cluster plot inspection flagged two additional failed variants after removing the failed variants from the first inspection and re-doing the imputation and analysis. Dramatic clustering errors accounted for 27 out of 58 flagged variants, which were eliminated from final credible sets. The remaining 31 had only minor issues, and the imputed data for these remained in our final credible sets, with marginally smaller posteriors (mean of the difference 9.8%, P = 0.06, paired t -test). Establishing a P value threshold We used a multiple-testing-corrected P value threshold for associations of 1.35 × 10 −6 , which was established by permutation. We generated 200 permuted datasets by randomly shuffling phenotypes across samples and performed association analyses for each permutation across all variants in high-density regions that overlapped IBD-associated loci 3 . We stored (1) all the point-wise P values ( α S ) and (2) the ‘best’ P values ( α B ) of each of the 200 permuted datasets. We then computed the empirical, experiment-wide P value ( α M ) (corrected for multiple testing) for each of the tests as its rank/200 with respect to the 200 α B . We then estimated the number of independent tests performed in the studied regions, n , as the slope of the regression of log(1 − α M ) on log(1 − α S ), knowing that α M = 1 − (1 − α S )^ n , yielding a value of 37,056. The P value threshold was determined as 0.05/ n ≈ 1.35 × 10 −6 . Detecting and fine-mapping association signals We used three fine-mapping methods ( Supplementary Methods ) to detect independent signals and create credible sets across 97 ImmunoChip autosomal high-density regions that contained at least one variant with P < 1.35 × 10 −6 .
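The permutation-derived threshold just described follows from α M = 1 − (1 − α S )^ n , that is, log(1 − α M ) = n log(1 − α S ), so n is a regression slope through the origin. A minimal sketch of that estimate (the P values are invented; the toy check only verifies the algebra):

```python
import numpy as np

def effective_tests(alpha_s, alpha_m):
    """Estimate the number of independent tests n from paired point-wise
    (alpha_s) and experiment-wide (alpha_m) empirical P values, via the
    through-the-origin regression log(1 - alpha_m) = n * log(1 - alpha_s)."""
    x = np.log1p(-np.asarray(alpha_s, dtype=float))
    y = np.log1p(-np.asarray(alpha_m, dtype=float))
    return float(np.sum(x * y) / np.sum(x * x))

# Toy check: simulate alpha_m from a known n and recover it.
true_n = 37056
alpha_s = np.array([1e-8, 1e-7, 1e-6, 5e-6])
alpha_m = 1.0 - (1.0 - alpha_s) ** true_n
n_hat = effective_tests(alpha_s, alpha_m)
print(f"n ~ {n_hat:.0f}; threshold 0.05/n = {0.05 / n_hat:.2e}")  # ~1.35e-06
```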
Our process for merging the results of the three methods is described below and illustrated in Fig. 1a . 1. We merged signals from different methods if their credible sets overlapped. To ensure a conservative credible set, this new merged credible set included all variants from all merged signals (the union of constituent credible sets). We assigned each variant in the merged credible sets a posterior probability equal to the average of the probabilities from the methods that reported this signal. To filter out technical artefacts, we required genotyped variants in small credible sets to pass manual cluster plot inspection (see above) and all imputed variants to have INFO > 0.4. For signals reported by only one or two methods that contained only imputed variants (that is, no directly genotyped variants), we additionally required at least one variant with INFO > 0.8 and MAF > 0.01. 2. We next assigned each signal to a provisional combination of lead variant and phenotype (Crohn’s disease, ulcerative colitis, or IBD) that maximized the marginal likelihood of equation (8) in Supplementary Methods . 3. At loci with more than one signal, we built a multivariate model with all signals reported by all three methods, and tested all possible combinations of adding signals reported by one or two methods, as long as they still had P < 1.35 × 10 −6 when jointly fitted in the multi-signal model. We selected the combination with the highest joint marginal likelihood (equation (8) in Supplementary Methods ). Phenotype assignment of signals The provisional phenotype assignment, performed during the signal merging described above, was merely a point estimate and did not capture the uncertainty associated with the phenotypic assignment. We therefore recomputed the assignment of each signal as Crohn’s-disease-specific, ulcerative-colitis-specific, or shared using the Bayesian multinomial model from fine-mapping method 2, empirical covariance prior with Laplace approximation 49 , as it was designed to assess evidence of sharing in the presence of potentially correlated effect sizes. For the lead variant for each credible set, we calculated the marginal likelihoods as in equation (13) from Supplementary Methods , restricting either β UC = 0 (for the Crohn’s-disease-only model) or β CD = 0 (for the ulcerative-colitis-only model), as well as using the unconstrained prior (for the associated-to-both model). We then calculated the log(Bayes factor) in favour of sharing; that is, the log of the ratio of marginal likelihoods between the associated-to-both model and the best of the single-phenotype associated models. These sharing log(Bayes factors) are given in Supplementary Table 1 (column ‘sharingBF’), and are a probabilistic assessment of phenotype assignment: for instance, the log(Bayes factor) of 97.4 for the primary signal at IL23R suggests a very high certainty that this signal is shared across both Crohn’s disease and ulcerative colitis, whereas the log(Bayes factor) of 0.4 for the primary signal at FUT2 is more ambiguous. In addition to providing the log(Bayes factor) itself, we also applied a log(Bayes factor) cut-off of 10 to select variants with strong evidence of being shared across phenotypes. Final filters These procedures generated some signals where all three methods largely agreed, and some where they differed. 
While the signals where the methods disagree are of interest for methods development, here we chose to focus on the most concordant signals, as they are most straightforward to interpret biologically. We therefore discarded all signals found by only one method (which completely removed one locus), as well as two loci where the ratio of marginal likelihoods (equation (8) in Supplementary Methods ) for the best model and the second-best model was <10 ( Supplementary Notes ). After these filters ( Extended Data Fig. 7 ), we considered 139 signals from 94 regions (containing a total of 181,232 variants) to be confidently fine-mapped, and took them forward for subsequent analysis. Estimating the variance explained by the fine-mapping We used a mixed-model framework to estimate the total risk variance attributable to the IBD risk loci, and to the signals identified in the fine-mapping. We used the GCTA software package 50 to compute a gametic relationship matrix (G-matrix) using genotype dosage information for the genotyped variants in the high-density regions (which we will call G HD ). We then fitted a variety of variance component models by restricted maximum likelihood analysis using an underlying liability threshold model implemented with the DMU package 51 . The first model was a standard heritability mixed model that included fixed effects for five principal components (to correct for stratification) and a random effect summarizing the contribution of all variants in the fine-mapping regions, such that the liabilities l across all individuals were distributed according to l ∼ N( Xβ , G HD λ 1 + I σ e 2 ), where Xβ collects the fixed effects, σ e 2 is the residual variance, and λ 1 is thus the variance explained by all variants in fine-mapping regions, which we estimated. We then fitted a model that included an additional random effect for the contribution of the lead variants that had been specifically identified (with G-matrix G Signals ), such that the liability was distributed as l ∼ N( Xβ , G HD λ 1 ′ + G Signals λ 2 + I σ e 2 ). The variance explained by the signals under consideration was then given by the reduction in the variance explained by all variants in the fine-mapping regions between the two models ( λ 1 − λ 1 ′ ). We used this approach to estimate what fraction of this variance was accounted for by (1) the single strongest signals in each region (as would be typically done before fine-mapping), or (2) all signals identified in fine-mapping. We used Cox and Snell’s method 52 to estimate the variance explained across individual signals ( Extended Data Fig. 3b ) for computational efficiency. Overlap between transcription-factor binding motifs and causal variants For each motif in the ENCODE transcription factor ChIP-seq data ( , accessed November 2014) 20 , we calculated the overall information content as the sum of information content for each position 53 , and only considered motifs with overall information content ≥ 14 bits (equivalent to seven perfectly conserved positions). For every variant in a high-density region, we determined whether it created or disrupted a motif at a high-information site (information content ≥ 1.8). Overlap between epigenetic signatures and causal variants For each combination of 120 tissues and 3 histone marks (H3K4me1, H3K4me3, and H3K27ac) from the Roadmap Epigenome Project, we calculated an overlap score, equal to the sum of fine-mapping posterior probabilities for all variants in peaks of that histone mark in that tissue.
We generated a null distribution of this score for each tissue/mark by shifting chromatin marks randomly (between 0 bp and 44.53 megabase pairs, the length of all high-density regions) and circularly (peaks at the end of the region shifted to the beginning of the region) over the high-density regions while keeping the same inter-peak distances. To summarize these correlated results across many cell and tissue types, we defined a set of ‘core’ H3K4me1 immune and H3K27ac gut peaks as sets of overlapping peaks in cells that showed the strongest enrichment. Intersections were performed using bedtools version 2.24.0 with default settings 54 . We selected six immune cell types for H3K4me1 and three gut cell types for H3K27ac ( Supplementary Table 2 ). We also chose controls ( Supplementary Table 2 ) from non-immune and non-gut cell types, with a density of peaks in the fine-mapped regions similar to that of the immune/gut cell types, to confirm the tissue specificity of the overlap. We used the phenotype assignments (described above) in dissecting the enrichment for the Crohn’s disease and ulcerative colitis signals. Sixty-five Crohn’s disease and 21 ulcerative colitis signals that were mapped to ≤50 variants were used in this analysis. Published eQTL summary statistics We used eQTL summary statistics from three published studies. (1) Peripheral blood eQTLs from a study of 2,752 twins (ref. 29 ), reporting loci with MAF > 0.5%. Imputation was performed using the 1000 Genomes reference panel 11 . (2) Peripheral blood eQTLs from a study of 8,086 individuals (ref. 30 ), including variants with MAF > 5%. Imputation was performed using the HapMap 2 CEU population reference panel 55 . (3) CD14 + monocyte eQTLs from Supplementary Table 2 in ref. 31 , comprising 432 European individuals, measured in a naive state and after stimulation with interferon-γ or lipopolysaccharide (for 2 or 24 h), reporting loci with MAF > 4% and FDR < 0.05. Imputation was performed using the 1000 Genomes reference panel 10 . Processing and quality control of new eQTL ULg dataset A detailed description of the ULg dataset is in preparation (Y.M. et al .). Briefly, we collected venous blood and intestinal biopsies at three locations (ileum, transverse colon, and rectum) from 350 healthy individuals of European descent, average age 54 (range 17–87), 56% female. SNPs were genotyped on Illumina Human OmniExpress version 1.0 arrays interrogating 730,525 variants, and SNPs and individuals were subjected to standard quality control procedures using call rate, Hardy–Weinberg equilibrium, MAF ≥ 0.05, and consistency between declared and genotype-based sex as criteria. We further imputed genotypes at ∼ 7 million variants on the entire cohort using the IMPUTE2 software package 13 and the 1000 Genomes Project as the reference population (phase 3 integrated variant set, released 12 October 2014) 11 , 14 . From the blood, we purified CD4 + , CD8 + , CD19 + , CD14 + , and CD15 + cells by positive selection, and platelets (CD45 − ) by negative selection. RNA from all leucocyte samples and intestinal biopsies was hybridized on Illumina Human HT-12 arrays version 4. After standard quality control, raw fluorescent intensities were variance stabilized 56 and quantile normalized 57 using the lumi R package 58 , and were corrected for sex, age, smoking status, and the number of probes with expression significantly above background as fixed effects, and for array number (Sentrix ID) as a random effect.
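Returning to the epigenetic enrichment test described at the start of this passage (an overlap score compared against circularly shifted peaks), a minimal sketch of that permutation scheme; all coordinates, posteriors, and peak intervals below are invented for illustration:

```python
import numpy as np

def overlap_score(positions, posteriors, peaks):
    """Sum of fine-mapping posteriors for variants lying inside any peak."""
    return sum(p for pos, p in zip(positions, posteriors)
               if any(start <= pos < end for start, end in peaks))

def circular_null(positions, posteriors, peaks, length, n_perm=1000, seed=1):
    """Empirical P value from circularly shifting the peaks over the
    concatenated regions, preserving inter-peak distances."""
    rng = np.random.default_rng(seed)
    observed = overlap_score(positions, posteriors, peaks)
    exceed = 0
    for _ in range(n_perm):
        shift = int(rng.integers(0, length))
        shifted = []
        for s, e in peaks:
            s2, e2 = (s + shift) % length, (e + shift) % length
            # A peak wrapping past the end is split at the boundary.
            shifted.extend([(s2, e2)] if s2 < e2 else [(s2, length), (0, e2)])
        if overlap_score(positions, posteriors, shifted) >= observed:
            exceed += 1
    return observed, (1 + exceed) / (1 + n_perm)

# Toy usage: one peak covering the high-posterior variant.
obs, p = circular_null([100, 5000, 9000], [0.9, 0.05, 0.05],
                       [(50, 200)], length=10_000)
print(obs, p)
```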
For each probe with measurable expression (detection P value < 0.05 in >25% of samples), we tested for cis-eQTLs at all variants within a 500 kbp window. The nominal P value of the best SNP within a cis-window was Sidak-corrected for the window-specific number of independent tests. The number of independent tests in each window was estimated in exactly the same manner as the number of independent tests for fine-mapping (‘Establishing a P value threshold’ section in Methods). We estimated false discovery rates ( q values) from the resulting P values across all probes using the qvalue R package 59 . Four hundred and eighty cis-eQTLs with FDR ≤ 0.10 for which the lead SNPs (that is, the SNP yielding the best P value for the cis-eQTL) mapped within the 97 high-density regions (94 fine-mapped plus 3 unresolved) were retained for further analyses. Naive co-localization using lead SNPs We calculated the number of IBD credible sets containing a lead eQTL variant in a particular tissue (‘observed’). This number was then compared with the background number of overlaps (‘expected’), computed as Σ i∈S ( C i / N i ), where N i is the total number of variants in region i in the 1000 Genomes panel with an allele frequency greater than a certain threshold (equal to the threshold used for the original eQTL study), C i is the number of these variants that lie in IBD credible sets, and S is the set of regions that have at least one significant eQTL. We simulated 1,000 trials per region with binomial probability equal to the regional background overlap rate, C i / N i . Empirical P values were estimated by comparing the observed number of overlaps with the simulated numbers of overlaps. More specifically, the P value was defined as the proportion of trials having as many or more overlaps in the simulations as were observed. Frequentist co-localization using conditional P values We next used conditional association to test for evidence of co-localization, as described in ref. 25 . This method compares the P value of association for the lead SNP of an eQTL before and after conditioning on the SNP with the highest posterior in the credible set, and measures the drop in −log( P ). An empirical P value for this drop is then calculated by comparing it to the drop for all variants in the high-density region. Because this method requires full genotypes, we could only apply it to the ULg dataset (MAF > 5%). An empirical P value ≤ 0.05 was considered as evidence that the corresponding credible set was co-localized with the corresponding cis-eQTL. To evaluate whether our fine-mapping associations co-localized with cis-eQTLs more often than expected by chance, we counted the number of credible sets affecting at least one cis-eQTL with P ≤ 0.05, and compared how often this number was matched or exceeded by 1,000 sets of variants that were randomly selected yet distributed among the loci in accordance with the real credible sets. The number of variants per set was the same as the number of credible sets in this eQTL analysis (MAF matched, size ≤ 50), shown in Extended Data Table 2 . Bayesian co-localization using Bayes factors Finally, we used the Bayesian co-localization methodology described in ref. 60 , modified to use the credible sets and posteriors generated by our fine-mapping methods (similarly only applicable to the ULg full genotype data). The method takes as input a pair of IBD and eQTL signals, with corresponding credible sets S IBD and S eQTL , and per-variant posteriors p i IBD and p i eQTL (with the posteriors for each signal summing to 1).
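The Bayes-factor computation described next can be sketched directly from those two posterior vectors. The exact marginal-likelihood expressions appeared as display equations in the original article and are not reproduced in this text, so the uniform-prior forms below (one shared causal variant for H 4 ; an ordered pair of distinct variants for H 3 ) are an assumption of this sketch, not the paper's stated formula:

```python
import numpy as np

def colocalization_bf(p_ibd, p_eqtl):
    """Bayes factor for a shared causal variant (H4) versus two distinct
    causal variants (H3), from per-variant posteriors aligned over the same
    N variants. Uniform-prior marginal likelihoods are assumed here."""
    p_ibd, p_eqtl = np.asarray(p_ibd, float), np.asarray(p_eqtl, float)
    n = len(p_ibd)
    shared = float(np.sum(p_ibd * p_eqtl))              # terms with i == j
    cross = float(p_ibd.sum() * p_eqtl.sum()) - shared  # terms with i != j
    ml_h4 = shared / n                # assumed prior: 1/N per shared variant
    ml_h3 = cross / (n * (n - 1))     # assumed prior: 1/(N(N-1)) per pair
    return ml_h4 / ml_h3

# Toy example: both signals concentrated on the same variant -> large BF.
print(colocalization_bf([0.90, 0.05, 0.05], [0.85, 0.10, 0.05]))
```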
Credible sets and posteriors were generated for eQTL signals using the Bayesian quantitative association mode in SNPTest (with default parameters), with credible sets in regions with multiple independent signals generated conditional on all other signals. Our method calculated a Bayes factor (BF) summarizing the evidence in favour of a co-localized model (that is, a single underlying causal variant between the IBD and eQTL signals) compared with a non-co-localized model (where different causal variants were driving the two signals), given by the ratio of the marginal likelihoods of the two models: the marginal likelihood for the co-localized model (that is, hypothesis H 4 in ref. 60 ) over the marginal likelihood for the model where the signals are not co-localized (that is, hypothesis H 3 ), both computed from the credible-set posteriors. In both cases, N is the total number of variants in the region. We only counted towards N variants having r 2 > 0.2 with either the lead eQTL variant or the lead IBD variant. To measure enrichment in co-localization BFs compared with the null, we performed a permutation analysis. In this analysis, we randomly reassigned eQTL signals to new fine-mapping regions to generate a set of simulated null datasets. This was done using the following scheme on variants and credible sets with the same MAF cut-off as the eQTL dataset (ULg, MAF > 5%). (1) Estimate the standardized effect size β g for each eQTL signal g , equal to the standard-deviation increase in gene expression per dose of the minor allele. (2) Randomly reassign each eQTL signal to a new fine-mapping region, and then select a new causal variant with a MAF within 1% of the lead variant from the real signal. If multiple such variants exist, select one at random. If no such variants exist, pick the variant with the closest MAF. (3) Generate new simulated gene expression signals for each individual from Normal( β g x j , 1 − 2 f (1 − f ) β g 2 ), where x j is the individual’s minor allele dosage at the new causal variant and f is the MAF. (4) Carry out fine-mapping and calculate co-localization Bayes factors for each pair of (real) IBD signals and (simulated) eQTL signals. (5) Repeat stages (2)–(4) 1,000 times for each tissue type. We can use these permuted Bayes factors to calculate P values for each IBD credible set, given by the proportion of times the permuted Bayes factors were as large as or larger than the one observed in the real dataset. To generate a high-quality set of co-localized eQTL and IBD signals, we took all IBD signals with co-localization BF > 2, P < 0.01, and r 2 (with the eQTL variant) > 0.8. Code availability Computer code used in this study is provided in the ‘Software availability’ sections in the Supplementary Methods . Data availability The data that support the findings of this study are available from the International Inflammatory Bowel Disease Genetics Consortium but restrictions apply to the availability of these data, which were used under licence for the current study, and so are not publicly available. Data are, however, available from the corresponding authors upon reasonable request and with the permission of the International Inflammatory Bowel Disease Genetics Consortium. Change history 12 July 2017 The equation at the end of the Methods section ‘Establishing a P value threshold’ was corrected. | Scientists have closed in on specific genes responsible for Inflammatory Bowel Disease (IBD) from a list of over 600 genes that were suspects for the disease. 
The team from the Wellcome Trust Sanger Institute and their collaborators at the Broad Institute of MIT and Harvard and the GIGA Institute of the University of Liège combined efforts to produce a high-resolution map to investigate which genetic variants have a causal role in the disease. In the new study, published today (28 June) in Nature, scientists examined the genome of 67,852 individuals and applied three statistical methods to zoom in on which genetic variants were actively implicated in the disease. Of the regions of the genome associated with IBD that were studied, 18 could be pinpointed to a single genetic variant with more than 95 per cent certainty. The results form a basis for more effective prescription of current treatments for the disease as well as the discovery of new drug targets. More than 300,000 people suffer from IBD in the UK. IBD is a debilitating disease in which the body's own immune system attacks parts of the digestive tract. The exact causes of this disease are unclear, and there is currently no cure. To understand more about the genetics underlying IBD, researchers have conducted genome-wide association studies and previously found hundreds of genetic variants linked to the disease. However, it was not certain which specific genes were actually implicated by those variants. Dr Jeffrey Barrett, joint lead author from the Wellcome Trust Sanger Institute, said: "We have taken the biggest ever data set for IBD and applied careful statistics to narrow down to the individual genetic variants involved. Now we have a clearer picture of which genes do and do not play a role in the disease. We are zooming in on the genetic culprits of IBD." The high-resolution map of the disease enabled scientists to see which variants directly influence disease, and to separate them from other variants which happen to be located near each other in the genome. Dr Hailiang Huang, first author from the Massachusetts General Hospital and Broad Institute, said: "An issue with studying complex diseases is that it can be hard to move from genetic associations, usually including many genetic variants of similar evidence, to knowing exactly which variants are involved. We need to be careful in deciding when we are sure we have the right variant. This new technique helps us to pinpoint which genetic variants are implicated in IBD with greater confidence." Professor Michel Georges, joint lead author from the GIGA Institute of the University of Liège, said: "These results will help towards rational drug discovery for complex human diseases like IBD, and possibly for the development of personalised medicine by finding biomarkers for more effective prescription of existing drugs." | 10.1038/nature22969 
Medicine | Rat fathers' diets may affect offspring's breast cancer risk | Camile Castilho Fontelles et al, Paternal programming of breast cancer risk in daughters in a rat model: opposing effects of animal- and plant-based high-fat diets, Breast Cancer Research (2016). DOI: 10.1186/s13058-016-0729-x Journal information: Breast Cancer Research | http://dx.doi.org/10.1186/s13058-016-0729-x | https://medicalxpress.com/news/2016-07-rat-fathers-diets-affect-offspring.html | Abstract Background Although males contribute half of the embryo’s genome, only recently has interest begun to be directed toward the potential impact of paternal experiences on the health of offspring. While there is evidence that paternal malnutrition may increase offspring susceptibility to metabolic diseases, the influence of paternal factors on a daughter’s breast cancer risk has been examined in few studies. Methods Male Sprague-Dawley rats were fed, before and during puberty, either a lard-based (high in saturated fats) or a corn oil-based (high in n-6 polyunsaturated fats) high-fat diet (60 % of fat-derived energy). Control animals were fed an AIN-93G control diet (16 % of fat-derived energy). Their 50-day-old female offspring fed only a commercial diet were subjected to the classical model of mammary carcinogenesis based on 7,12-dimethylbenz[a]anthracene initiation, and mammary tumor development was evaluated. Sperm cells and mammary gland tissue were subjected to cellular and molecular analysis. Results Compared with female offspring of control diet-fed male rats, offspring of lard-fed male rats did not differ in tumor latency, growth, or multiplicity. However, female offspring of lard-fed male rats had increased elongation of the mammary epithelial tree, number of terminal end buds, and tumor incidence compared with both female offspring of control diet-fed and corn oil-fed male rats. Compared with female offspring of control diet-fed male rats, female offspring of corn oil-fed male rats showed decreased tumor growth but no difference regarding tumor incidence, latency, or multiplicity. Additionally, female offspring of corn oil-fed male rats had longer tumor latency as well as decreased tumor growth and multiplicity compared with female offspring of lard-fed male rats. Paternal consumption of animal- or plant-based high-fat diets elicited opposing effects, with lard rich in saturated fatty acids increasing breast cancer risk in offspring and corn oil rich in n-6 polyunsaturated fatty acids decreasing it. These effects could be linked to alterations in microRNA expression in fathers’ sperm and their daughters’ mammary glands, and to modifications in breast cancer-related protein expression in this tissue. Conclusions Our findings highlight the importance of paternal nutrition in affecting future generations’ risk of developing breast cancer. Background Breast cancer is a global public health problem, with nearly 1.7 million new cases diagnosed in 2012, representing 25 % of all cancers in women worldwide [ 1 ]. Its incidence is projected to rise significantly over the next 20 years despite current efforts to prevent the disease [ 2 ]. Although the precise reason for this growth is still not clear, it has been suggested that modern women’s lifestyles, including postponing first pregnancy and having fewer children, can explain the increase [ 3 ]. Nutritional habits, such as adoption of Western dietary patterns, are also linked to increased breast cancer risk [ 4 ]. 
These patterns are characterized by low consumption of fruits and vegetables, increased energy intake, and decreased energy expenditure, leading to obesity, as well as increased intake of saturated fatty acids (SFA), n-6 polyunsaturated fatty acids (PUFA), and trans-fatty acids and decreased intake of n-3 polyunsaturated fats [ 5 , 6 ]. While the majority of epidemiological studies on nutrition and breast cancer risk have been focused on women’s diet in adulthood, accumulating epidemiological and experimental evidence highlights early life experiences, including nutrition, as relevant factors for later breast cancer risk determination [ 7 ]. The developmental origins of this cancer have been considered predominantly from a maternal perspective, with emphasis placed on the impact of high fat or energy intake during gestation and lactation on female offspring mammary gland development and later breast cancer risk [ 8 , 9 ]. Although males contribute half of the embryo’s genome, only recently has interest begun to be directed toward the potential impact of paternal experiences on the health of offspring [ 10 ]. While experimental studies have shown that paternal malnutrition may increase the susceptibility of offspring to metabolic dysregulation, obesity, and cardiovascular diseases [ 11 , 12 ], the influence of paternal factors on daughters’ breast cancer risk has been examined in few studies. Among them, epidemiological studies show an association of higher paternal education level, older age, and smoking with an increased rate of breast cancer in the daughters [ 13 , 14 ]. Unlike the female production of germ cells, which takes place predominantly in early life [ 15 ], male production of germ cells starts in utero, with mature sperm cells being produced throughout the entire reproductive life of the male [ 16 ]. Because spermatogenesis can be dramatically influenced by environmental factors, including malnutrition, obesity, and exposure to toxic compounds, the father’s health during preconception is now acknowledged as a critical factor in the context of the developmental origins of health and disease [ 17 ]. In addition to embryogenesis, gametogenesis comprises intense epigenetic (DNA methylation, histone modification, and microRNA [miRNA or miR] expression) remodeling [ 18 , 19 ]. Thus, epigenetically inherited increased disease risk could be transmitted through the female as well as the male germline [ 20 ]. Specific windows within which male gametes would be especially prone to environmentally elicited epigenetic deregulation include prepuberty and the reproductive phase [ 21 ]. Given the marked increase in dietary fat intake over the past three decades [ 22 ], as well as the notion that different kinds of dietary fats can lead to different health outcomes [ 23 ], we designed this study to investigate whether, in rats, consumption of high levels of animal- or vegetable-based fats by fathers would affect their daughters’ risk of breast cancer. We also investigated the underlying cellular and molecular mechanisms. We fed male Sprague-Dawley rats, before and during puberty, either a lard-based (high in SFA) or a corn oil-based (high in n-6 PUFA) high-fat diet (60 % of fat-derived energy). Control animals were fed a control AIN-93G diet containing soybean oil as a fat source (16 % of fat-derived energy). Male rats were mated with female rats that were consuming a commercial diet. 
We show that paternal consumption of these high-fat diets elicited opposing effects, with animal fat increasing and vegetable oil decreasing breast cancer risk in the offspring. These effects could be linked to alterations in miRNA expression in fathers’ sperm and their daughters’ mammary glands, as well as to modifications in breast cancer-related protein expression in this tissue. These novel data show that paternal high-fat diets influence female offspring’s susceptibility to mammary cancer, with consumption of lard increasing and corn oil reducing daughters’ mammary cancer risk. Thus, paternal diet before conception sets the stage for a daughter’s risk of developing breast cancer. Methods Experimental design This study was approved by the ethics committee on animal experiments of the Faculty of Pharmaceutical Sciences, University of São Paulo (protocol number CEUA/FCF/381). Twenty-one-day-old male rats were divided into three groups ( n = 20 rats per group): control rats (those fed the control AIN-93G diet, with 16 % of total calories provided by lipids), lard-fed males (exposed to a high-SFA diet, with 60 % of total calories provided mainly from lard), and corn oil-fed males (exposed to a high-n-6 PUFA diet, with 60 % of total calories provided mainly from corn oil). At 12 weeks of age, all male rats were switched to a chow diet and mated by housing one male with one female per cage. Pregnant female rats and their offspring consumed only commercial laboratory chow (Nuvital Nutrientes, Colombo, Brazil). Body weight and food intake were recorded two or three times per week. Determination of the diets’ lipid profiles The lipid profiles of the control, lard, and corn oil diets were determined according to the methods published by the Association of Official Analytical Chemists [ 24 ]. Fatty acids were esterified to fatty-acid methyl esters according to the method reported by Hartman and Lago [ 25 ], and their composition was analyzed with a gas chromatograph (GC 17A/Class GC 10; Shimadzu, Kyoto, Japan) equipped with a flame ionization detector and a SUPELCOWAX® 10 fused silica capillary column (30 m × 0.25 mm inner diameter; Sigma-Aldrich, St. Louis, MO, USA). The temperature was set at 170 °C, raised to 225 °C at a rate of 1 °C/minute, and held for 25 minutes. The temperatures of the vaporizer and detector were 250 °C and 270 °C, respectively. Helium was used as the carrier gas (1 ml/minute). Identification of the fatty acids was performed by comparison of the retention times with the standard mixture of fatty-acid methyl esters. Each determination was performed in duplicate using two different samples for each diet. Insulin tolerance test The tests were performed at 0800 h after the rats were fasted for 12 h, according to the method described by Takada et al. [ 26 ]. The insulin load (75 mU/100 g body weight) was injected as a bolus, and the blood glucose levels were determined at 0, 3, 6, 9, 12, and 30 minutes after injection in male rats and their 50-day-old female offspring. The AUC was calculated according to the trapezoid rule [ 27 ]. Mature spermatozoa collection and purification Control diet-, lard-, and corn oil-fed male rats were killed once females were pregnant, and the caudal epididymis was dissected for sperm collection. 
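The insulin-tolerance-test AUC just mentioned is a trapezoid-rule integral of glucose over the unevenly spaced sampling times; a minimal sketch (the glucose values are invented):

```python
import numpy as np

# Blood glucose (mg/dl, hypothetical) at 0, 3, 6, 9, 12 and 30 minutes
# after the insulin bolus, for one animal.
minutes = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 30.0])
glucose = np.array([95.0, 80.0, 68.0, 60.0, 55.0, 70.0])

# Trapezoid rule over uneven intervals: sum of mean height x interval width.
auc = float(np.sum((glucose[1:] + glucose[:-1]) / 2.0 * np.diff(minutes)))
print(f"AUC = {auc:.1f} (mg/dl) x min")
```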
The cauda and vas deferens from male rats were collected, punctured, and transferred to tissue culture dishes containing M2 medium (M2 medium with HEPES, without penicillin and streptomycin, sterile-filtered, suitable for rat embryo; Sigma-Aldrich), where they were incubated for 1 h at 37 °C. Spermatozoa samples were washed with PBS and then incubated with somatic cell lysis buffer (SCLB; 0.1 % SDS, 0.5 % Triton X-100 in diethylpyrocarbonate water) for 1 h, according to a protocol described by Goodrich et al. [ 28 ]. SCLB was rinsed off with two washes of PBS, and the purified spermatozoa sample (at least 95 % purity as assessed by microscopy) was pelleted and used for miRNA extraction. Determination of daily sperm production The right testis was maintained at −20 °C until processing to determine the daily sperm production. The technique proposed by Robb et al. [ 29 ] is based on the resistance of elongated spermatids present in steps 17–19 of spermatogenesis to intense mechanical stress, owing to the high compaction of their chromatin. Sperm morphological analyses According to the method of Seed et al. [ 30 ], the epididymis, previously frozen at −20 °C, was incised and then immersed in PBS to disperse the gametes into the aqueous medium. The resulting suspension was then placed on slides for examination by light microscopy. Two hundred sperm per animal were analyzed microscopically at × 400 magnification. Mammary gland harvesting Abdominal mammary glands of female offspring of control diet-, lard-, and corn oil-fed male rats ( n = 6 per group) were collected on postnatal day 50 and used for preparing mammary whole mounts and miRNA and protein extraction. Analysis of mammary gland morphology and development Whole-mount preparations of the fourth abdominal mammary gland from 50-day-old female offspring ( n = 5/group) were obtained, and the epithelial elongation and number of terminal end buds (TEBs) were determined as described by de Assis et al. [ 31 ]. Mammary tumor induction Mammary tumors were induced in 50-day-old female rat offspring ( n = 24 rats/group) by administration of 7,12-dimethylbenz[a]anthracene (DMBA, 50 mg/kg body weight; Sigma-Aldrich). The carcinogen was dissolved in corn oil and administered by oral gavage. Animals were examined for mammary tumors by palpation twice per week. Latency of tumor appearance, the number of animals with tumors, and the number of tumors per animal (multiplicity) were evaluated. The tumor volume was calculated from measures of length ( a ), height ( b ), and width ( c ) taken with a caliper once per week, from tumor appearance until the end of the experiment. The formula (1/6 × 3.14) × ( a × b × c ) was used to calculate the tumor volume, as described by Spang-Thomsen et al. [ 32 ]. The tumor growth rate was calculated using the measured volumes of each tumor at a given week ( d ) and at the subsequent week ( e ) using the formula [( e − d )/ d ] × 100. Those animals in which tumor burden approximated 10 % of total body weight were killed. All other animals were killed 19 weeks after carcinogen administration. Analysis of mammary gland and tumor cell proliferation and apoptosis in female offspring Cell proliferation was evaluated in mammary glands (ducts and lobules) and tumors from 50-day-old female offspring ( n = 4/group) by Ki-67 immunohistochemistry. After being harvested, mammary tissue was directly fixed in 10 % buffered formalin, embedded in paraffin, and sectioned. 
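The two tumor formulas just described translate directly into code; a minimal sketch (the caliper measurements are invented):

```python
def tumor_volume(a, b, c):
    """Ellipsoid-style volume from caliper length (a), height (b), and
    width (c), following (1/6 x 3.14) x (a x b x c)."""
    return (1.0 / 6.0) * 3.14 * a * b * c

def growth_rate(volume_week_d, volume_week_e):
    """Percentage change between consecutive weekly volumes: [(e - d)/d] x 100."""
    return (volume_week_e - volume_week_d) / volume_week_d * 100.0

v1 = tumor_volume(0.8, 0.6, 0.7)   # cm, hypothetical measurements
v2 = tumor_volume(1.1, 0.8, 0.9)
print(f"{v1:.3f} cm^3 -> {v2:.3f} cm^3, growth {growth_rate(v1, v2):.0f}%")
```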
Sections were then deparaffinized in xylene and hydrated through graded ethanol. Antigen retrieval was performed with 10 mM citrate buffer, pH 6, for 20 minutes in a pressure cooker. Peroxidase blocking was performed with 10 % H2O2 for 10 minutes, and nonspecific binding was blocked for 1 h with 1 % skimmed milk in PBS. Sections were incubated overnight with anti-rat Ki-67 primary antibody (Abcam, Cambridge, UK) at a 1:50 dilution. After washes, sections were incubated with the LSAB 2 System-HRP kit (Dako, Carpinteria, CA, USA) according to the manufacturer’s instructions, stained with 3,3′-diaminobenzidine chromogenic solution (Dako) for 10 minutes, washed, and counterstained for 1.5 minutes with hematoxylin. Cell proliferation was quantified by assessing the number of Ki-67-positive cells among 1000 cells. The slides were evaluated using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Apoptosis analysis was conducted in mammary glands (ducts and lobules) and tumors from 50-day-old female offspring (n = 4/group), according to the method described by Elmore et al. [33], using ImageJ software. Results are presented as the mean number of apoptotic cells per 1000 cells. microRNA expression profile analysis Total RNA from paternal sperm and their female offspring’s total mammary gland was extracted using the miRNeasy Mini Kit (QIAGEN, Valencia, CA, USA) according to the manufacturer’s instructions. RNA samples were quantified and stored at −80 °C until use. miRNA arrays were performed at the Genomics and Epigenomics Shared Resources at Georgetown University using Applied Biosystems TaqMan Array Rodent MicroRNA arrays (Life Technologies, Carlsbad, CA, USA) to generate the miRNA expression profiles for each experimental group. The TaqMan® Array Rodent MicroRNA A + B Cards Set v3.0 is a two-card set containing a total of 384 TaqMan® MicroRNA assays per card (Life Technologies). The set enables accurate quantitation of 641 unique miRNAs for mouse and 373 for rat. There are three TaqMan® MicroRNA Assay endogenous controls for each species on each array to aid in data normalization [34]. The geNorm algorithm was applied to those endogenous controls to determine the optimal number of stable controls. The geometric mean of the selected controls was used for array normalization. For further statistical analysis, the normalized values were log-transformed to meet the t test assumptions. Statistical analysis was conducted using the limma package in R [35]. miRNAs with a false discovery rate <0.1 were considered significantly altered and were selected for further analysis. Target prediction for miRNAs of interest was conducted using TargetScan (release 6.2). The predicted target messenger RNA (mRNA) list was then uploaded to Ingenuity Pathway Analysis (IPA; QIAGEN Silicon Valley, Redwood City, CA, USA) for gene set enrichment analysis. We selected the top canonical pathways for further analysis. Analysis of protein levels in mammary glands of female offspring Protein levels were assessed by Western blot analysis (with samples adjusted to 50 ng/μl) in total mammary glands obtained from 50-day-old female rats (n = 5 per group). Total protein was extracted from mammary tissues using radioimmunoprecipitation assay buffer with protease inhibitor (Roche, Basel, Switzerland), glycerophosphate (10 mM), sodium orthovanadate (1 mM), pyrophosphate (5 mM), and phenylmethylsulfonyl fluoride (1 mM).
The samples were mixed with the buffer for 5 minutes, then incubated on ice for 30 minutes and centrifuged for 10 minutes at high speed. Protein in the supernatant was quantified using Pierce bicinchoninic acid protein assay reagent (Thermo Scientific, Rockford, IL, USA). Protein extracts were resolved on a 4–12 % gradient denaturing polyacrylamide gel (SDS-PAGE). Proteins were transferred using the Invitrogen iBlot® 7-Minute Dry Blotting System (Life Technologies), and membranes were blocked with 5 % nonfat dry milk for 1 h at room temperature. Membranes were incubated with the specific primary antibodies at 4 °C overnight. After several washes, the membranes were incubated with HRP-conjugated secondary antibody (1:5000; Santa Cruz Biotechnology, Dallas, TX, USA) at room temperature for 1 h. Membranes were developed using HyGLO chemiluminescent HRP antibody detection reagent (Denville Scientific Inc., Metuchen, NJ, USA) and exposed to Kodak autoradiography films (Carestream Health, Rochester, NY, USA). The optical density of the bands was quantified using Quantity One software (Bio-Rad Laboratories, Hercules, CA, USA). To control for equal protein loading, expression of the proteins of interest was normalized to the β-actin signal. Statistical analysis Results are expressed as the mean ± SEM. Apart from the miRNA array data, which were analyzed with the limma package in R as described above, multiple-group comparisons were performed using one-way analysis of variance (ANOVA) followed by a least significant difference (LSD) test, and two-group comparisons were performed using Student’s t test. Repeated-measures ANOVA was applied for evaluation of caloric intake data, and Kaplan-Meier curves and log-rank tests were applied for determining differences in tumor incidence. For all data analyses, p ≤ 0.05 was applied as the threshold for statistical significance. Results Paternal dietary and health parameters Compared with control diet-fed male rats, those fed the lard- or corn oil-based high-fat diets consumed more (p ≤ 0.05) SFA (predominantly palmitic [C16:0] and stearic [C18:0] acids), monounsaturated fatty acids (MUFA) (predominantly oleic acid [C18:1n9c]) and PUFA (predominantly linoleic acid [C18:2n6c]) (Fig. 1a). Corn oil-fed male rats consumed less (p ≤ 0.05) SFA (predominantly palmitic [C16:0] and stearic [C18:0] acids) and MUFA (predominantly oleic acid [C18:1n9c]) and more (p ≤ 0.05) PUFA (predominantly linoleic acid [C18:2n6c]) than the lard-fed male rats (Fig. 1a). Daily caloric intake was approximately 7 % higher (p ≤ 0.05) in both lard- and corn oil-fed male rats than in control diet-fed male rats (data not shown). Although lard- and corn oil-fed male rats consumed nearly the same amount of calories per day, lard-fed male rats gained more weight (p ≤ 0.05) than control diet- and corn oil-fed male rats (Table 1). There was no difference (p > 0.05) between control diet- and corn oil-fed male rats regarding weight gain. Both lard- and corn oil-fed male rats had greater (p ≤ 0.05) abdominal, retroperitoneal, and epididymal fat pad weights than control diet-fed male rats (Table 1). Compared with lard-fed male rats, corn oil-fed male rats had lower (p ≤ 0.05) epididymal fat pad weights, but there was no difference (p > 0.05) in abdominal or retroperitoneal fat pad weights (Table 1). Further, lard-fed male rats had lower testicle (p ≤ 0.05), epididymis (p ≤ 0.05), and seminal vesicle (p ≤ 0.08) weights than the control diet- and corn oil-fed male rats (Table 1).
There was no difference (p > 0.05) between control diet- and corn oil-fed male rats regarding these parameters (Table 1). Lard-fed male rats also had fewer (p ≤ 0.05) normal sperm cells and lower (p ≤ 0.05) daily sperm production than control diet- and corn oil-fed male rats (Table 1). There was no difference (p > 0.05) between control diet- and corn oil-fed male rats regarding these parameters (Table 1). Both lard- and corn oil-fed male rats had higher (p ≤ 0.05) fasting glucose levels than control diet-fed male rats (Table 1). There was no difference (p > 0.05) between lard- and corn oil-fed male rats regarding this parameter (Table 1). Further, in the insulin tolerance test, lard- and corn oil-fed male rats had higher (p ≤ 0.05) AUCs than control diet-fed male rats (Fig. 1b), indicating that they were insulin-intolerant. There was no difference (p > 0.05) between lard- and corn oil-fed male rats regarding this parameter (Fig. 1b). Fig. 1 Paternal fatty-acid consumption and insulin tolerance test. a Consumption of dietary fats (saturated fatty acids [SFA], monounsaturated fatty acids [MUFA], and polyunsaturated fatty acids [PUFA]) by male rats fed a control diet (CO) or a lard-based (LB) or corn oil-based (CB) high-fat diet (n = 20 per group). b Insulin tolerance test (ITT) of the CO-, LB-, and CB-fed male rats (n = 6 per group). Inset: ITT shown as the AUC. c ITT of 50-day-old female offspring (n = 6 per group). Inset: ITT shown as the AUC. Statistically significant difference (p ≤ 0.05) compared with a CO and b LB, according to analysis of variance followed by a least significant difference test. The data are expressed as mean ± SEM Full size image Table 1 Health parameters of male rats and their 50-day-old female offspring in control diet, lard-based diet, and corn oil-based diet groups Full size table Female offspring health parameters Female offspring of both lard- and corn oil-fed male rats had greater birth weight (p ≤ 0.05) and greater weight gain (p ≤ 0.05; p ≤ 0.08 for offspring of corn oil-fed male rats) than offspring of control diet-fed male rats (Table 1). There was no difference (p > 0.05) between female offspring of lard- and corn oil-fed male rats regarding birth weight (Table 1). Female offspring of corn oil-fed male rats had less (p ≤ 0.05) weight gain than offspring of lard-fed male rats (Table 1). As in the male rats, female offspring of both the lard-fed and the corn oil-fed male rats had greater (p ≤ 0.05) retroperitoneal fat weights than offspring of control diet-fed male rats (Table 1). There was no difference (p > 0.05) between female offspring of lard- and corn oil-fed male rats regarding this parameter (Table 1). Female offspring of lard-fed male rats had higher (p ≤ 0.05) fasting glucose levels (Table 1) and higher (p ≤ 0.05) AUCs than female offspring of control diet- or corn oil-fed male rats (Fig. 1c). There was no difference (p > 0.05) between female offspring of control diet- and corn oil-fed male rats regarding these parameters (Table 1 and Fig. 1c). Female offspring mammary gland morphology Mammary gland morphology was assessed on the basis of mammary whole mounts obtained from 50-day-old female offspring. Both elongation of the mammary epithelial tree (Fig. 2c) and the number of TEBs (Fig. 2d) were higher (p ≤ 0.05) in female offspring of lard-fed male rats than in female offspring of control diet- and corn oil-fed male rats.
There was no difference (p > 0.05) between female offspring of control diet- and corn oil-fed male rats regarding these parameters (Fig. 2c and d). Fig. 2 Mammary gland development of 50-day-old female offspring of control diet (CO)-, lard (LB)-, and corn oil (CB)-fed male rats. a Histological depiction of the fourth abdominal mammary gland showing ductal elongation, indicated by arrow. b Terminal end buds (TEBs), indicated by arrows. c Ductal elongation. d Number of TEBs. Statistically significant difference (p ≤ 0.05) compared with a CO and b LB, according to analysis of variance followed by a least significant difference test. The data are expressed as mean ± SEM (n = 5 per group) Full size image Female offspring mammary gland tumor data Mammary tumors were induced in the female offspring by administering the carcinogen DMBA. Female offspring of lard-fed male rats had increased mammary tumor incidence (p ≤ 0.05) compared with offspring of both control diet- and corn oil-fed male rats (Fig. 3a). There was no statistical difference (p > 0.05) between female offspring of control diet- and corn oil-fed male rats regarding tumor incidence. Female offspring of corn oil-fed male rats exhibited longer (p ≤ 0.05) tumor latency and lower (p ≤ 0.05) tumor multiplicity than female offspring of lard-fed male rats (Fig. 3b and d). Compared with female offspring of control diet-fed male rats, female offspring of lard- and corn oil-fed male rats did not show differences (p > 0.05) regarding tumor latency and multiplicity. Further, female offspring of corn oil-fed male rats showed less (p ≤ 0.05) tumor growth in the first week after tumor appearance than offspring of both control diet- and lard-fed male rats (Fig. 3c). There was no statistical difference (p > 0.05) between offspring of control diet- and lard-fed male rats regarding tumor growth. Additionally, there was no statistical difference among groups in the tumor growth rate for the remaining experimental weeks. Fig. 3 Mammary tumorigenesis in female offspring of control diet (CO)-, lard (LB)-, and corn oil (CB)-fed males. a Number of tumor-free rats. b Number of days before the appearance of the first tumor. c Tumor growth rate in the first week of appearance. d Total number of tumors per rat (multiplicity). Statistically significant difference (p ≤ 0.05) compared with a CO and b LB, according to analysis of variance followed by a least significant difference test. Marginal difference (p ≤ 0.08) compared with d LB, according to t test. The data are expressed as mean ± SEM (n = 24 per group). DMBA 7,12-dimethylbenz[a]anthracene Full size image Female offspring mammary gland and tumor cell proliferation and apoptosis Female offspring of lard-fed male rats exhibited an increased (p ≤ 0.06) number of proliferative cells (Fig. 4b) and a decreased (p ≤ 0.05) number of apoptotic cells (Fig. 4e) in mammary gland lobules compared with female offspring of control diet- and corn oil-fed male rats. There was no difference (p > 0.05) between female offspring of control diet- and corn oil-fed male rats regarding these parameters (Fig. 4b and e). Further, there was no difference (p > 0.05) in cell proliferation and apoptosis in mammary gland ducts among female offspring of all groups. Female offspring of both lard- and corn oil-fed male rats exhibited a decreased (p ≤ 0.05) number of apoptotic cells (Fig. 4f) in mammary tumors compared with female offspring of control diet-fed male rats.
There was no difference (p > 0.05) between female offspring of lard- and corn oil-fed male rats regarding this parameter (Fig. 4f). In addition, there was no difference (p > 0.05) among groups regarding cell proliferation in the mammary tumors (Fig. 4c). Fig. 4 Quantification of cell proliferation and apoptosis in mammary glands and tumors. a Photomicrograph (×20 original magnification) of Ki-67 immunostaining (cells indicated by arrows). b Quantification of cell proliferation in the mammary gland ducts and lobules of 50-day-old female offspring of control diet (CO)-, lard (LB)-, and corn oil (CB)-fed male rats. c Quantification of cell proliferation in mammary tumors of female offspring from CO-, LB-, and CB-fed male rats. d Photomicrograph (×40 original magnification) showing apoptotic cells (indicated by arrows). e Quantification of apoptosis in the mammary gland ducts and lobules of 50-day-old female offspring of CO-, LB-, and CB-fed male rats. f Quantification of apoptosis in mammary tumors of female offspring of CO-, LB-, and CB-fed males. Statistically significant difference (p ≤ 0.05) compared with a CO and b LB, according to analysis of variance followed by a least significant difference test. Marginal difference (p ≤ 0.06) compared with c CO and d LB, according to t test. The data are expressed as mean ± SEM (n = 4 per group) Full size image miRNA expression profile in fathers’ sperm cells and in their daughters’ mammary glands To compare the outcomes of the distinct paternal high-fat diets on the basis of miRNA expression, Applied Biosystems TaqMan Rodent MicroRNA arrays were used to generate the miRNA profile for lard- and corn oil-fed fathers’ sperm cells, as well as for their respective daughters’ mammary glands. The microarray data are deposited in the Gene Expression Omnibus (GEO) public repository under accession number [GEO:GSE77012]. Corn oil-fed male rats had 89 downregulated (p ≤ 0.05) miRNAs in the sperm compared with lard-fed male rats (Fig. 5a). Furthermore, female offspring of corn oil-fed male rats had 21 downregulated (p ≤ 0.05) and 2 upregulated (p ≤ 0.05) miRNAs in their mammary glands compared with female offspring of lard-fed male rats (Fig. 5b). Three miRNAs were downregulated in both the sperm of the corn oil-fed fathers and the mammary glands of their daughters: miR-1897-5p, miR-219-1-3p, and miR-376a#. IPA (Additional file 1: Table S1) indicated that these miRNAs could regulate signaling pathways associated with key physiological processes such as growth hormone, phosphatase and tensin homolog (PTEN), and prolactin signaling, as well as disease processes such as Huntington’s disease, cardiac hypertrophy, type 2 diabetes mellitus, and breast cancer. Fig. 5 Heat map of microRNA (miRNA or miR) expression profiles. a Heat map of the miRNA expression profile in sperm of control diet-, lard-, and corn oil-fed male rats.
b Heat map of the miRNA expression profile in normal mammary tissue of 50-day-old female offspring (n = 3 per group) Full size image Protein expression in female offspring mammary gland Since miR-1897-5p, miR-219-1-3p, and miR-376a# can directly or indirectly modulate several targets (shown in Additional file 1: Table S1), we performed Western blot analysis of the following proteins linked to breast cancer: CCAAT/enhancer-binding protein beta (Cebpβ), caspase 3 (Casp3), insulin-like growth factor 1 receptor (Igf1r), protein kinase D1 (Pkd1), and transforming growth factor, beta receptor I (Tgfβr1). On the one hand, there was no difference (p > 0.05) among female offspring of the control diet-, lard-, and corn oil-fed male rats regarding Cebpβ, Casp3, and Igf1r levels (data not shown). On the other hand, female offspring of the corn oil-fed male rats had higher (p ≤ 0.05) Pkd1 levels in the mammary glands than female offspring of lard-fed, but not control diet-fed, male rats (Fig. 6). There was no difference (p > 0.05) between female offspring of control diet- and lard-fed male rats regarding this protein (Fig. 6). In addition, Tgfβr1 levels were significantly increased in the offspring of lard-fed male rats (Fig. 6) compared with offspring of both control diet-fed (p ≤ 0.05) and corn oil-fed (p ≤ 0.06) male rats. There was no difference (p > 0.05) between female offspring of corn oil-fed and control diet-fed male rats regarding this protein (Fig. 6). Interestingly, both proteins are involved in regulating epithelial-to-mesenchymal transition (EMT): Pkd1 inhibits this process [36], and Tgfβr1 promotes it [37]. Fig. 6 Protein alterations associated with microRNA (miRNA) expression. Western blot analysis of protein kinase D1 (Pkd1), transforming growth factor, beta receptor I (Tgfβr1), transforming growth factor beta (Tgfβ), v-akt murine thymoma viral oncogene (Akt), mechanistic target of rapamycin (Mtor), mitogen-activated protein kinase kinase 4 (Mkk4), phosphorylated mitogen-activated protein kinase 8 (p-Jnk), and phosphorylated Smad family member 3/Smad family member 3 (p-Smad3/Smad3) ratio protein expression in mammary glands of 50-day-old female offspring of control diet (CO)-, lard (LB)-, and corn oil (CB)-fed male rats. Statistically significant difference (p ≤ 0.05) compared with a CO and b LB, according to analysis of variance followed by a least significant difference test. Marginal difference (p ≤ 0.06) compared with d LB, according to t test. The data are expressed as mean ± SEM (n = 5 per group) Full size image We further explored whether Tgfβ and key regulators of its activity were altered by measuring protein levels of v-akt murine thymoma viral oncogene (Akt), cofilin (Cfl), v-raf leukemia viral oncogene (c-Raf), extracellular signal-regulated kinase 1/2 (Erk1/2), phosphorylated mitogen-activated protein kinase 8 (p-Jnk), mitogen-activated protein kinase kinase 4 (Mkk4), mechanistic target of rapamycin (Mtor), mitogen-activated protein kinase 14 (p38), the phosphorylated Smad family member 3/Smad family member 3 (p-Smad3/Smad3) ratio, and Harvey rat sarcoma virus oncogene (Ras). Tgfβ protein expression was higher (p ≤ 0.05) in the mammary glands of the female offspring of lard-fed male rats than in the offspring of control diet- and corn oil-fed male rats. There was no difference (p > 0.05) between female offspring of corn oil- and control diet-fed male rats regarding this protein (Fig. 6).
Further, the levels of Akt and p-Jnk were higher (p ≤ 0.05) in the female offspring of lard-fed male rats than in female offspring of control diet-fed male rats (Fig. 6). There was no difference (p > 0.05) between female offspring of corn oil-fed male rats and the offspring of control diet- and lard-fed male rats regarding these proteins (Fig. 6). Female offspring of corn oil-fed male rats had lower levels of Mtor, Mkk4 (p ≤ 0.06), and p-Smad3/Smad3 (p ≤ 0.05) than female offspring of lard-fed, but not control diet-fed, male rats (Fig. 6). There was no difference (p > 0.05) between female offspring of control diet- and lard-fed male rats regarding these proteins (Fig. 6). In addition to promoting EMT, all of these altered proteins are involved in increasing cell survival, growth, migration, and invasion. Discussion Breast cancer is a complex disease with a multifactorial etiology [38]. It is increasingly evident that the in utero environment can program later susceptibility to breast cancer [39]. The findings of our present study suggest that breast cancer risk can be determined even earlier, through diet-induced changes in paternal germ cells before conception. Our study shows that, compared with female offspring of control diet-fed fathers, offspring of lard-fed fathers did not differ in tumor latency, growth, or multiplicity. However, female offspring of lard-fed fathers had increased elongation of the mammary epithelial tree, number of TEBs, and tumor incidence compared with offspring of both control diet- and corn oil-fed fathers, showing that paternal exposure to a lard-based high-fat diet containing SFA increased their daughters’ mammary cancer risk. TEBs are considered sites of tumor initiation [40], and increased epithelial elongation reflects rapid epithelial growth [41]. Additionally, female offspring of lard-fed fathers showed increased cell proliferation and decreased apoptosis in the mammary gland lobules compared with female offspring of both control diet- and corn oil-fed fathers. Altogether, these findings support the view that altered mammary gland development represents a potential underlying mechanism of increased breast cancer risk [42]. Compared with female offspring of control diet-fed fathers, female offspring of corn oil-fed fathers had decreased tumor growth. There was no difference in tumor incidence, latency, or multiplicity between female offspring of control diet- and corn oil-fed fathers. In addition, female offspring of corn oil-fed fathers had longer tumor latency, decreased tumor growth, and decreased multiplicity compared with female offspring of lard-fed fathers. These data show that paternal exposure to a corn oil-based high-fat diet containing n-6 PUFA had an effect opposite that of a lard-based high-fat diet and reduced their daughters’ mammary cancer risk. Although male rats that were fed the lard-based and corn oil-based high-fat diets consumed the same amount of calories, lard increased body weight and the size of the epididymal fat pads more than corn oil did. Thus, different fatty acids can have distinct effects on adipose accumulation, as already shown by others [43]. Our results further show that lard, but not corn oil, elicited detrimental effects on male reproductive parameters (fewer normal sperm cells and lower daily sperm production).
This is in line with earlier human and animal data showing that SFA disrupt testicular metabolism and sperm quality, whereas PUFA are essential for sperm cell membrane fluidity and flexibility as well as for fertilization [44]. Excessive epididymal fat in lard-fed males may have been detrimental to spermatogenesis, as the epididymal fat pad is an essential depot supporting spermatogenesis [45]. The adverse effects of lard may not be mediated through increased insulin resistance. Although a correlation between insulin resistance and impaired sperm production has been reported in rats fed a diet high in SFA [46], as also found in the present study, the corn oil-based high-fat diet likewise impaired insulin tolerance but did not affect male reproductive parameters. We propose that impaired sperm quality and function in lard-fed fathers could be associated with disruption in metabolic programming and increased breast cancer risk among their daughters. The impact of obesity in fathers leading to metabolic dysfunction in their female offspring has previously been observed in rodent studies [10], and it was also seen in our present study. Female offspring of both lard- and corn oil-fed fathers exhibited increased body weight and adiposity. However, only female offspring of lard-fed fathers displayed an impaired insulin response, indicating that the type of dietary fatty acids consumed represents a key factor in metabolic programming through the male germline. Epigenetic modifications that are necessary for achieving reproductive capacity of male gametes include DNA methylation, histone retention, and expression of noncoding RNAs such as miRNAs [47]. Because miRNAs can modulate the expression of hundreds of mRNAs that affect embryonic development as well as the establishment of the offspring’s epigenome [48], they have been proposed to mediate paternal programming effects on the offspring [49]. The epididymis has been implicated as the site of the alterations in miRNA signatures occurring during the maturation of sperm cells, and therefore an increase in epididymal fat pad size could potentially impact inheritance of miRNA signatures and/or the developmental trajectory of the offspring [50]. The impact of high-fat-diet-induced male obesity on the miRNA profile in mature spermatozoa has been examined in rodent studies [51]. In a study by Fullston et al. [52], males fed a high-saturated-fat diet exhibited changes in miRNAs in the testes and mature spermatozoa that target mRNA associated with spermatogenesis, embryonic development, and metabolic diseases in the offspring. We provide further evidence that paternal nutrition can impact the sperm miRNA profile and possibly the subsequent mammary gland miRNA profile, which in turn targets genes implicated in breast cancer and other diseases. Some of the miRNAs that were differentially expressed in the lard- and corn oil-fed fathers’ germ cells were also differentially expressed in their daughters’ mammary glands, although the daughters were never directly exposed to the high-fat diets. When we compared the lard- and corn oil-fed groups, three miRNAs were significantly altered in both the sperm of corn oil-fed fathers and the mammary glands of their daughters: miR-1897-5p, miR-219-1-3p, and miR-376a#.
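For readers who want the screening logic behind these calls in concrete form, the following is a minimal sketch, in Python, of the normalization and differential-expression screen described in the Methods (geometric-mean normalization against geNorm-selected endogenous controls, log-scale testing, and an FDR cutoff of 0.1). It is our schematic reconstruction, not the study's actual pipeline: the study used limma's moderated tests, for which this sketch substitutes ordinary t tests with Benjamini-Hochberg adjustment, and all function names are hypothetical.

import numpy as np
from scipy import stats

def normalize(cq, control_rows):
    # TaqMan Cq values are already on a log2-like scale, so subtracting the mean
    # Cq of the geNorm-selected endogenous controls corresponds to dividing
    # expression by their geometric mean.
    return cq - cq[control_rows].mean(axis=0)

def screen(group_a, group_b, fdr_cutoff=0.1):
    # Inputs: arrays of shape (miRNAs, samples) for the two diet groups.
    # Per-miRNA two-sample t tests, then Benjamini-Hochberg FDR control;
    # keep miRNAs with an FDR below the 0.1 threshold used in the study.
    _, p = stats.ttest_ind(group_a, group_b, axis=1)
    m = len(p)
    order = np.argsort(p)
    bh = p[order] * m / np.arange(1, m + 1)
    q = np.minimum.accumulate(bh[::-1])[::-1]      # enforce monotone q-values
    qvals = np.empty(m)
    qvals[order] = np.minimum(q, 1.0)
    return np.flatnonzero(qvals < fdr_cutoff)

On this reading, the three shared miRNAs above are simply the assays that survive such a screen in both the sperm comparison and the mammary gland comparison.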
Since miRNAs can modulate gene expression by inhibiting the translation of mRNAs or by directing their degradation [53], we focused on determining whether the expression of the top potentially targeted proteins (Additional file 1: Table S1) was altered in the daughters of lard- or corn oil-fed fathers. Among them, we highlight Pkd1 and Tgfβr1. PKD1 is a serine/threonine kinase that is expressed in ductal epithelial cells of the mammary gland, maintains the epithelial phenotype, and prevents EMT [54]. Inhibition of PKD1 can lead to pathological conditions such as cancer [55]. Thus, our finding of increased Pkd1 levels in the mammary glands of corn oil-fed fathers’ offspring, compared with female offspring of lard-fed fathers, is in line with their lower susceptibility to breast cancer. In addition, compared with female offspring of control diet- and corn oil-fed fathers, another miRNA target, Tgfβr1, was increased in the daughters of lard-fed fathers, who displayed the highest susceptibility to mammary cancer. Tgfβr1 expression is related to promotion of breast carcinogenesis through multiple mechanisms, including enhancement of EMT [56]. We assessed the protein levels of upstream and downstream signaling partners of Tgfβr1. Female offspring of lard-fed fathers showed higher protein levels of Tgfβ than female offspring of both control diet- and corn oil-fed fathers, as well as higher protein levels of Akt and p-Jnk than offspring of control diet-fed fathers. In addition, female offspring of corn oil-fed fathers had lower levels of Mtor, Mkk4, and p-Smad3/Smad3 than female offspring of lard-fed fathers. These proteins collectively play important roles in cell survival, growth, migration, and invasion [57, 58]. These findings indicate that mechanisms other than miRNAs also contribute to changes in gene expression in the daughters’ mammary tissue following paternal exposure to a lard-based high-fat diet. Because fathers, mothers, and their daughters tend to share the same nutritional habits [59], it is important to further investigate whether paternally programmed breast cancer risk is affected by maternal and female offspring’s fat intake. Maternal intake of a high corn oil diet during pregnancy increases female offspring’s mammary cancer risk [8], while intake of lard has opposite effects [9]. In addition, because obesity-induced alterations in sperm miRNA expression in fathers can be normalized through exercise or dietary intervention (consumption of a balanced diet), which then improves the metabolic health of female offspring [60], the efficacy of a similar intervention in reducing daughters’ breast cancer risk should be investigated. Conclusions In the present study, we show that paternal intake of a lard-based high-fat diet rich in SFA increases female offspring’s mammary cancer risk, as indicated by the increased elongation of the mammary epithelial tree, number of TEBs, and tumor incidence in female offspring of lard-fed fathers compared with female offspring of both control diet- and corn oil-fed rats. However, if the paternal fat source is corn oil, which is high in n-6 PUFA, the offspring’s mammary cancer risk is reduced, as indicated by decreased tumor growth in female offspring of corn oil-fed fathers compared with female offspring of both control diet- and lard-fed fathers, as well as by longer tumor latency and decreased tumor multiplicity compared with female offspring of lard-fed fathers.
Altered miRNA expression in fathers’ sperm and daughters’ mammary glands may at least partly underlie these effects, but other epigenetic changes are likely to be involved. Our findings highlight the importance of paternal nutrition in affecting future generations’ risk of developing breast cancer. Abbreviations Akt, v-akt murine thymoma viral oncogene; ANOVA, analysis of variance; Casp3, caspase 3; CB, rats that were fed a corn oil-based high-fat diet and their offspring; Cebpβ, CCAAT/enhancer-binding protein beta; Cfl, cofilin; CO, rats that were fed a control diet and their offspring; c-Raf, v-raf leukemia viral oncogene; DMBA, 7,12-dimethylbenz[a]anthracene; EMT, epithelial-to-mesenchymal transition; Erk1/2, extracellular signal-regulated kinase 1/2; Igf1r, insulin-like growth factor 1 receptor; ITT, insulin tolerance test; LB, rats that were fed a lard-based high-fat diet and their offspring; LSD, least significant difference; miRNA or miR, microRNA; Mkk4, mitogen-activated protein kinase kinase 4; mRNA, messenger RNA; Mtor, mechanistic target of rapamycin; MUFA, monounsaturated fatty acid; p38, mitogen-activated protein kinase 14; p-Jnk, phosphorylated mitogen-activated protein kinase 8; Pkd1, protein kinase D1; p-Smad3/Smad3, phosphorylated Smad family member 3/Smad family member 3 ratio; PTEN, phosphatase and tensin homolog; PUFA, polyunsaturated fatty acid; Ras, Harvey rat sarcoma virus oncogene; SCLB, somatic cell lysis buffer; SFA, saturated fatty acid; TEB, terminal end bud; Tgfβr1, transforming growth factor, beta receptor I | The dietary habits of rat fathers may affect their daughters' breast cancer risk, a study in 60 male rats and their offspring has found. The study is published in the open access journal Breast Cancer Research. Researchers at the University of Sao Paulo showed that the female offspring of male rats that had been fed a diet rich in animal fats had an increased risk of breast cancer. A diet that was rich in vegetable fats reduced the offspring's risk of breast cancer. Thomas Ong, the corresponding author, said: "Although in recent years, interest in the fathers' role in their offspring's health has grown, information concerning the influence of paternal factors on their daughters' breast cancer risk is very limited. In this study we have used a rat model to compare the impact of the consumption of high levels of animal or vegetable fat by fathers before conception on their daughters' risk of breast cancer." The researchers fed 60 male rats (3 groups, 20 rats per group) either a lard-based or corn oil-based high-fat diet (60% of energy derived from fat) or a control diet (16% of energy derived from fat). The rats were then mated with female rats that had been fed a standard laboratory diet. Female offspring were fed a standard laboratory diet, and mammary tumors were induced at 50 days of age. The researchers sought to determine the time it took for tumors to appear (latency), the number of animals with tumors (incidence) and the number of tumors per animal (multiplicity), as well as tumor volume, as indicators of breast cancer risk. Female offspring of male rats on both high-fat diets showed reduced tumor cell death compared to controls. However, offspring of corn oil-fed male rats showed decreased tumor growth compared to the offspring of male rats that had been fed a lard-based or a control diet.
Offspring of corn oil-fed rats also had longer tumor latency - it took tumors longer to start growing - and fewer tumors compared to the offspring of male rats fed a lard-based diet. Thomas Ong said: "Because the consumption of high levels of fat is considered bad for health, the decreased breast cancer risk in the female offspring of fathers that consumed corn oil was surprising. Lard contains high levels of saturated fat whereas corn oil is rich in n-6 polyunsaturated fat. This suggests that the type of dietary fat consumed by fathers is an important factor influencing their daughters' breast cancer risk." The researchers also collected sperm from male rats and mammary glands from their female offspring to investigate changes in microRNA and protein expression. They showed that both male rats and their female offspring exhibited changes in microRNAs and proteins that could affect processes including cell growth, cell survival or cell death. The findings suggest that diet-induced changes in paternal germ cells even before conception can influence the breast cancer risk of female offspring, according to the researchers. Thomas Ong said: "If this is confirmed in human studies, potential breast cancer prevention strategies could be developed focusing on fathers' diets during preconception." Since fathers, mothers and their daughters often share the same nutrition habits, further research is needed on how the fat intake of mothers and their female offspring may affect breast cancer risk, according to the researchers. As changes in microRNAs in male rodents can be normalized through exercise and dietary intervention, the researchers propose that the effect of similar interventions on female offspring's breast cancer risk should also be investigated. | 10.1186/s13058-016-0729-x
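As a compact summary of the quantitative endpoints defined in the study's Methods, the sketch below implements the trapezoid-rule AUC for the insulin tolerance test, the ellipsoid tumor volume (1/6 × 3.14) × (a × b × c), and the weekly tumor growth rate [(e − d)/d] × 100. The formulas are those given in the paper; the implementation and the example numbers are ours and purely illustrative.

def itt_auc(times_min, glucose):
    # Area under the glucose curve by the trapezoid rule [27].
    return sum((t2 - t1) * (g1 + g2) / 2.0
               for t1, t2, g1, g2 in zip(times_min, times_min[1:], glucose, glucose[1:]))

def tumor_volume(a, b, c):
    # Ellipsoid approximation from length (a), height (b) and width (c) [32].
    return (1.0 / 6.0) * 3.14 * a * b * c

def weekly_growth_rate(d, e):
    # Percent growth between the volume in a given week (d) and the next week (e).
    return (e - d) / d * 100.0

# Hypothetical measurements, for illustration only:
print(itt_auc([0, 3, 6, 9, 12, 30], [110, 85, 70, 62, 58, 75]))  # mg/dL x min
print(tumor_volume(1.2, 0.8, 1.0))                               # cm^3
print(weekly_growth_rate(0.50, 0.65))                            # 30.0 (%)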
Earth | Study provides scenarios for assessing long-term benefits of climate action | "Integrated economic and climate projections for impact assessment." Climatic Change DOI: 10.1007/s10584-013-0892-3 Journal information: Climatic Change | http://dx.doi.org/10.1007/s10584-013-0892-3 | https://phys.org/news/2015-09-scenarios-long-term-benefits-climate-action.html | Abstract We designed scenarios for impact assessment that explicitly address policy choices and uncertainty in climate response. Economic projections and the resulting greenhouse gas emissions are provided for the “no climate policy” scenario and for two stabilization scenarios: at 4.5 W/m2 and at 3.7 W/m2 by 2100. They can be used for a broader climate impact assessment for the US and other regions, with the goal of making it possible to provide a more consistent picture of climate impacts, and of how those impacts depend on uncertainty in climate system response and on policy choices. The long-term risks of climate change, beyond 2050, can be strongly influenced by policy choices. In the nearer term, the climate we will observe is hard to influence with policy, and what we actually see will be strongly influenced by natural variability and the earth system response to existing greenhouse gases. In the end, the nature of the system is such that a strong effect of policy, especially policy directed toward long-lived GHGs, will lag its implementation by 30 to 40 years. 1 Introduction The evidence that the climate is changing and that human activities are responsible has been confirmed by the Intergovernmental Panel on Climate Change (IPCC 2007c) and by the US National Academy of Sciences (National Research Council 2011). Attention has turned to how likely future climate change might affect the economy and natural ecosystems. The US government has adopted a “social cost of carbon” concept for regulatory proceedings on greenhouse gas mitigation, based on a measure of the damage caused by climate change (IWGSCC 2010; Marten et al. 2013). While there is hope of avoiding climate change (the international community has set the goal of keeping warming to less than 2 °C above preindustrial), even if that goal is achieved the world and the US would see on the order of another 1.2 °C of warming over the next several decades. Given the challenge of achieving such a goal and the lack of progress in developing policies that would likely be needed to achieve it, the world may well experience considerably more warming. Beyond just understanding the potential impacts of climate change on the US for a given scenario, one may be interested in better understanding the benefits of a mitigation strategy: that is, how much disruption is avoided under a given mitigation scenario compared with doing nothing or much less mitigation. To answer these questions we need an integrated approach to economic and climate scenario design, so that there is a clear relationship between a specific population and economic growth scenario, the resultant emissions and climate outcomes, and back again to the impacts on an economic scenario at least approximately consistent with that driving the emissions scenario. In this paper we lay out the construction of such a set of scenarios, which form the basis for additional analyses of climate impacts. In particular, the scenarios are used for assessing climate change impacts in the US as part of the Climate Impacts and Risk Analysis (CIRA) project (see the overview paper by Waldhoff et al. 2013, in this issue).
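The benefit calculation this design is meant to enable can be written down in a few lines. In the sketch below, the damage figures are placeholders (not results from this paper or from CIRA); the point is only the bookkeeping that a single baseline and a single model make possible: avoided damages are simple differences across the three policy cases.

# Hypothetical illustration of the avoided-damage arithmetic; dollar values are placeholders.
damages = {"REF": 100.0, "POL4.5": 60.0, "POL3.7": 45.0}   # e.g., $billion/yr in 2100

benefit_45 = damages["REF"] - damages["POL4.5"]      # benefit of POL4.5 relative to no policy
benefit_37 = damages["REF"] - damages["POL3.7"]      # benefit of POL3.7 relative to no policy
marginal   = damages["POL4.5"] - damages["POL3.7"]   # extra benefit of tightening 4.5 -> 3.7

print(benefit_45, benefit_37, marginal)              # 40.0 55.0 15.0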
The scenarios are constructed using the MIT Integrated Earth System Model (IGSM) version 2.3 (Dutkiewicz et al. 2005; Sokolov et al. 2005). A fully integrated approach would include climate impact and adaptation feedbacks on the economy (see e.g., Reilly et al. 2012, for scenarios with the feedbacks, and Reilly et al. 2013, for an approach to valuing impacts) so that, in principle, those feedbacks might affect the level of energy, land use, and industrial activity and thus the emissions of greenhouse gases themselves, but that was beyond this exercise. It is useful to compare very briefly with what has gone before. On the one side, there are models such as DICE (Nordhaus 2008), FUND (Anthoff and Tol 2009), and PAGE (Hope 2013) that attempt a full integration, where the overall level of the economy is jointly determined by conventional macroeconomic factors such as labor productivity growth, savings and investment, the costs of mitigating greenhouse gases, and the damage/adaptation costs related to climate change itself. The goal of these models is often to estimate an “optimal” emissions control path where the marginal social cost of mitigation is equal to the marginal social damage cost of climate change. However, to accomplish this feat, the mitigation costs and the damage costs are typically estimated as reduced-form functions, and so details such as whether warming would change heating demands or land use, and thus emissions, are omitted. Damage functions are often derived from a meta-analysis of the literature. In this formulation damages are often simply a single dollar value, with no trace back to whether they are due to reduced crop yields in region a or increased air conditioning in region b. At the other end of the spectrum are impact and adaptation studies, reviewed extensively by the IPCC Working Group II (IPCC 2007a), that are the basis of these meta-analyses. These are often highly detailed, but they may be focused on a single region and based on one or a few climate scenarios that differ widely among the studies because of inherent differences in climate models as well as different economic and emissions scenarios. Emissions vary because of different economic growth and technology assumptions as well as different assumptions about policy. Meta-analyses often summarize the climate change scenario by a single indicator, global mean surface temperature increase, ignoring other patterns of change that may be important. While useful for many purposes, this mix of different reasons for different climate scenarios makes it difficult to trace back and conclude that Policy A leads to damages of X and more stringent Policy B leads to damages of Y, so that the benefit of taking the more stringent policy is X minus Y. The goal of the scenario development exercise here is to allow such a calculation while also including considerable detail on each impact sector. The scenarios presented here are part of a multi-model project to achieve consistent evaluation of climate change impacts in the US (Waldhoff et al. 2013). 2 Description of the modeling framework The MIT Integrated Global System Model version 2.3 (IGSM2.3) is an integrated assessment model that couples a human activity model to a fully coupled earth system model of intermediate complexity, allowing simulation of critical feedbacks among its various components, including the atmosphere, ocean, land and urban processes.
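Schematically, this coupling can be pictured as a recursive loop between the human system and the earth system. The pseudo-structural sketch below is our own outline, not IGSM source code; every function named is a placeholder for the components detailed next.

# Schematic only: period-by-period stepping of an IGSM-style coupled framework.
def run_coupled(policy, climate_params, periods=range(2005, 2101, 5)):
    state = init_earth_system(climate_params)        # climate sensitivity, ocean uptake, aerosol forcing
    for year in periods:
        emissions = eppa_step(year, policy)          # economy: GDP, energy, land use -> GHGs, pollutants
        state = earth_system_step(state, emissions)  # chemistry, ocean/land carbon, climate dynamics
        # Note: in this study there is no feedback from 'state' to 'eppa_step';
        # climate impacts on the economy are handled by downstream impact models.
    return state  # concentrations, radiative forcing, temperature, sea level, ocean pH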
The earth system component of the IGSM includes a two-dimensional zonally averaged statistical dynamical representation of the atmosphere, a three-dimensional dynamical ocean component with a thermodynamic sea-ice model and an ocean carbon cycle (Dutkiewicz et al. 2005, 2009), and the Global Land System (GLS; Schlosser et al. 2007), which represents terrestrial water, energy and ecosystem processes, including terrestrial carbon storage and the net flux of carbon dioxide, methane and nitrous oxide from terrestrial ecosystems. The IGSM2.3 also includes an urban air chemistry model (Mayer et al. 2000) and a detailed global-scale zonal-mean chemistry model (Wang et al. 1998) that considers the chemical fate of 33 species, including greenhouse gases and aerosols. Finally, the human systems component of the IGSM is the MIT Emissions Predictions and Policy Analysis (EPPA) model (Paltsev et al. 2005), which provides projections of world economic development and emissions over 16 global regions along with analysis of proposed emissions control measures. EPPA is a recursive-dynamic multiregional general equilibrium model of the world economy, built on the Global Trade Analysis Project (GTAP) dataset of world economic activity, augmented by data on the emissions of greenhouse gases, aerosols and other relevant species, and by details of selected economic sectors. The model projects economic variables (gross domestic product, energy use, sectoral output, consumption, etc.) and emissions of greenhouse gases (CO2, CH4, N2O, HFCs, PFCs and SF6) and other air pollutants (CO, VOC, NOx, SO2, NH3, black carbon and organic carbon) from combustion of carbon-based fuels, industrial processes, waste handling and agricultural activities (see Waugh et al. 2011, for emission inventory sources). The model identifies sectors that produce and convert energy, industrial sectors that use energy and produce other goods and services, and the various sectors that consume goods and services (both energy and non-energy). The model covers all economic activities and tracks domestic use and international trade. Energy production and conversion sectors include coal, oil, and gas production, petroleum refining, and an extensive set of alternative low-carbon and carbon-free generation technologies. A major feature of the IGSM is the flexibility to vary the key parameters controlling the climate response: the climate sensitivity, the ocean heat uptake rate and the net aerosol forcing. The IGSM is also computationally efficient and thus particularly well suited to sensitivity experiments, such as estimating probability distribution functions of climate parameters using optimal fingerprint diagnostics (Forest et al. 2008) or deriving probabilistic projections of 21st century climate change under varying emissions scenarios and climate parameters (Sokolov et al. 2009; Webster et al. 2012). The IGSM has also been used to run several-millennia-long simulations (Eby et al. 2012; Zickfeld et al. 2013). Because the atmospheric component of the IGSM is two-dimensional (zonally averaged), regional climate cannot be directly resolved. To simulate regional climate change, two methods have been applied. First, the IGSM-CAM framework (Monier et al. 2013a) links the IGSM to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM).
Second, a pattern scaling method extends the latitudinal projections of the IGSM 2-D zonal-mean atmosphere by applying longitudinally resolved patterns from observations and from climate-model projections (Schlosser et al. 2012). The companion paper by Monier et al. (2013b) describes both methods and presents a matrix of simulations to investigate regional climate change uncertainty in the US. 3 Scenario design Scenario design is a challenge because of the desire to span a wide range of possible outcomes while keeping the number of scenarios to a manageable level. A full uncertainty analysis would sample from uncertain economic/technology and climate parameter inputs. Policy could be addressed as an additional uncertain variable, though estimates of the likelihood of policy occurring at different times and stringencies would necessarily be a subjective judgment. The alternative, as in Webster et al. (2012), is to choose multiple “certain” policy scenarios and produce a large ensemble of runs for each, where the different ensemble members represent uncertainty in non-policy-related economic/technology and climate parameters. A typical single large ensemble able to span the likely range of outcomes may include 400 members. If, then, three separate policy scenarios are examined, the total number of scenarios would be 1200. The broader study design envisioned here was that a variety of analytical teams would use the scenarios for impact assessment. While this study design has the advantage of bringing in more highly resolved models of specific sectors, there are some tradeoffs. One is that the impact analysis approach often requires considerable effort to set up each scenario, or the impact models themselves are computationally intensive; realistically, these teams were likely to be able to consider only a few to a dozen scenarios. Another limitation is that it precludes full integration and feedbacks. We thus considered 3 policy scenarios and 4 scenarios capturing uncertainty in climate response, resulting in 12 core simulations with the IGSM. The scenarios are then used by other modeling groups to provide a consistent evaluation of climate change impacts (see Waldhoff et al. 2013, for an overview). 3.1 Policy design Given the potential interest in understanding the benefits of mitigation, we necessarily considered a “no policy” scenario (Reference, or REF) that would then be the basis for comparison of any mitigation scenarios. A set of Representative Concentration Pathway (RCP) scenarios has been developed in support of the Intergovernmental Panel on Climate Change, as described in van Vuuren et al. (2011). The RCPs were defined in terms of total radiative forcing, relative to preindustrial, from emissions of anthropogenic greenhouse substances, including both long-lived greenhouse gases (GHGs), aerosols and tropospheric ozone, but excluding the effects of land cover change, jet contrails, and other smaller contributors. A total of 4 RCPs were defined, in which radiative forcing would not exceed 2.6, 4.5, 6 and 8.5 W/m2 by 2100 (i.e., RCP2.6 does exceed the stated level of forcing at some point; it returns to 2.6 W/m2 in 2100). Much of the international negotiation is focused on staying below 2 °C of warming from preindustrial. To have a reasonable chance of staying at that level would require the most stringent RCP2.6.
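For orientation on what forcing levels of this magnitude imply for CO2, one can use the widely cited simplified expression F = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998). This back-of-envelope form is not the IGSM's radiation code, and it gives only the CO2 contribution; the scenario totals quoted in Section 5 also include CH4, N2O and other substances.

import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    # Simplified CO2 radiative forcing (Myhre et al. 1998): F = 5.35 ln(C/C0), in W/m2.
    return 5.35 * math.log(c_ppm / c0_ppm)

# CO2-only contributions for the 2100 concentrations reported in Section 5:
for label, c in (("POL3.7", 460), ("POL4.5", 500), ("REF", 830)):
    print(label, round(co2_forcing(c), 2), "W/m2")   # roughly 2.7, 3.1 and 5.9 W/m2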
That said, there is considerable question as to whether such a target is feasible given the lack of significant progress in developing an international agreement to limit greenhouse gases. Most analyses that attempt to represent such a scenario either relax the requirement to allow for overshoot with a gradual return to the lower level, or require some type of negative emissions technology. Given the unlikelihood of reaching this goal, we focused on a scenario of stabilization at 4.5 W/m2 and then constructed a 3.7 W/m2 scenario, referred to subsequently as POL4.5 and POL3.7, respectively. While not among the RCP family, the latter might be considered a somewhat more realistic scenario than RCP2.6, and it allows a comparison of what is gained from making the extra effort to get from POL4.5 to POL3.7. While many issues arise in estimating an optimal mitigation trajectory (see e.g., Jacoby 2004), such optimality is achieved where the marginal benefit equals the marginal cost. The differences between POL4.5 and POL3.7 provide something closer to a marginal benefit. In contrast to the RCPs, where each scenario was developed by a different modeling group (and as such some aspects of the scenarios are not compatible, such as, for example, land-use emissions), an advantage of our approach is that all scenarios are constructed with a set of consistent interactions between population growth, economic development, energy and land system changes and the resulting emissions of GHGs, aerosols, and air pollutants. In addition, while the scenarios underlying the different RCPs were developed by different models and assumed different baselines, this study not only uses the same model but also a single baseline. Another potentially important element of the scenario design is how the policy is implemented. One might hope to represent a realistic policy design; however, the actual policies being implemented by countries today are not close to achieving POL3.7 or even POL4.5. For example, Paltsev et al. (2012) estimate that the Copenhagen-Cancun international agreement would lead to about 9.0 W/m2 by 2100. Hence current policies provide no guidance for what would be needed to achieve these tighter targets. To have any chance of achieving them, the policy measures need to be universal, including essentially all countries, and cover all greenhouse gases. And the policies need to be effective. For example, Clarke et al. (2009) consider several models in an idealized cost-effective mitigation setting and conclude that a delay in participation by developing countries increases the costs and challenges of meeting long-term climate goals. Other important elements of policy design are which countries bear the cost burden of reducing emissions as well as the timing of emissions reductions. While there are many schemes to distribute the burden, such as per capita emissions targets and the like, these often have relatively perverse equity effects (see e.g., Jacoby et al. 2009). As a result, we chose a simple policy design: a uniform global carbon tax, constant in net present value terms, where each region collects and recycles the revenue internally to its representative agent. Through an iterative procedure we determine the tax rate needed to achieve the target, and we apply the tax to all GHG emissions, with Global Warming Potential (GWP) indices adjusting the tax level for the different greenhouse gases. Reilly et al.
(2012) discuss the thought experiment that allows more stringent climate targets by ideally pricing land carbon, and they show the significant trade-offs of this integrated land-use approach when prices for agricultural products rise substantially because of mitigation costs borne by the sector and higher land prices. There are also policy coordination issues in extending a carbon tax to land (Reilly et al. 2012) and competition between energy crops and forest carbon strategies (Wise et al. 2009). We therefore exclude CO2 emissions from land-use change from the tax in this study. This formulation of the tax policy means that each region bears the direct cost of its abatement activities but may benefit or lose from effects transmitted through trade. A uniform global tax that is constant in net present value terms, by equating the marginal cost of reduction across space and time, would under some ideal conditions lead to a least-cost solution. However, interactions with other distortions and externalities could mean there are even more efficient solutions, e.g., if tax revenue were used to reduce other distortionary tax rates or if there were other benefits of reduced conventional pollutants. Similarly, if one designed the tax strategy in consideration of existing energy taxes and policies, the economic cost of the policy could be lower. A uniform global tax policy is far more efficient than existing policies, because existing policies are highly differentiated among sectors and regions and often use multiple policy instruments that lead to wide disparities in marginal cost and suffer from leakage. 3.2 Climate parameter choice To represent uncertainty in the earth system response to changing concentrations of greenhouse gases and aerosols, the climate sensitivity of the atmospheric model was altered to span the range given by the Intergovernmental Panel on Climate Change (IPCC), with an additional low probability/high risk value. The ocean heat uptake rate in all simulations lies between the mode and the median of the probability distribution obtained with the IGSM using optimal fingerprint diagnostics similar to Forest et al. (2008). This corresponds to an effective vertical eddy diffusivity of 0.5 cm2/s. The four values of climate sensitivity (CS) considered are 2.0, 3.0, 4.5 and 6.0 °C, which represent, respectively, the lower bound (CS2.0), best estimate (CS3.0) and upper bound (CS4.5) of climate sensitivity based on the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC 2007b), and a low probability/high risk climate sensitivity (CS6.0). The associated net aerosol forcing was chosen to ensure good agreement with the observed climate change over the 20th century. This is achieved based on the marginal posterior probability density function, with uniform prior, for the climate sensitivity-net aerosol forcing (CS-Fae) parameter space shown in Fig. 1. The net aerosol forcing is chosen to provide the same transient climate response as the median set of parameters of the CS-Fae parameter space. The values are −0.25 W/m2, −0.70 W/m2, −0.85 W/m2 and −0.95 W/m2 for, respectively, CS2.0, CS3.0, CS4.5 and CS6.0. While choosing a single value of the ocean heat uptake rate instead of sampling all three climate parameters limits the representation of the full range of uncertainty in future climate change, it allows a reasonable number of simulations to be considered.
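The 12 core simulations follow directly from crossing the three policy cases with the four climate-sensitivity/net-aerosol-forcing pairs listed above. A compact enumeration, in our own notation with the parameter values taken from the text:

policies = ["REF", "POL4.5", "POL3.7"]
# (climate sensitivity in deg C, net aerosol forcing in W/m2); the ocean heat
# uptake rate is fixed at an effective vertical eddy diffusivity of 0.5 cm2/s.
climate = {"CS2.0": (2.0, -0.25), "CS3.0": (3.0, -0.70),
           "CS4.5": (4.5, -0.85), "CS6.0": (6.0, -0.95)}

runs = [(pol, cs_name) for pol in policies for cs_name in climate]
assert len(runs) == 12   # the 12 core IGSM simulations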
In addition, if we considered a higher (lower) ocean heat uptake rate, it would require higher (lower) values of climate sensitivity in order to reproduce the observed 20th century warming, thus resulting in similar future climate change. Owing to the correlation between climate parameters imposed by the requirement to match the observed 20th century temperature change, this decrease in the range of uncertainty will be rather small, especially on the sub-century time scales considered here. Fig. 1 The marginal posterior probability density function with uniform prior for the climate sensitivity-net aerosol forcing (CS-Fae) parameter space from Forest et al. (2008). The shading denotes rejection regions for a given significance level – 50 %, 10 % and 1 %, light to dark, respectively (Forest et al. 2008). The positions of the red and green dots represent the parameters used in the simulations presented in this study. The green line represents combinations of climate sensitivity and net aerosol forcing leading to the same transient climate response as the median set of parameters (green dot) Full size image 4 Economic and energy projections In EPPA, key assumptions about population and labor growth interact to endogenously determine the rates of growth in each region and in the world. These assumptions concern: growth in the productivity of labor, land and energy; the exhaustible and renewable resource base; technology availability/cost; and policy constraints. More rapid productivity growth leads to higher rates of output, savings, investment and GDP growth. More rapid growth will deplete resources faster and press against available renewable resources, and these pressures will retard growth. Policy constraints that limit the use of available fossil resources will also tend to slow growth. However, the version of the model used here does not include any climate or environmental feedbacks on the economy. Given space limitations, we provide a high-level overview of the key results. Detailed tables of results for each scenario are provided in the Online Resources (see Online Resource 1 for the list of files). GDP growth varies by region and policy scenario (Online Resource 1). U.S. economic growth up to 2035 is based on the EIA Annual Energy Outlook (EIA 2011). In the other regions the 2005–2010 growth is based on historical data and thus includes the recent recession, but by 2015 we assume rates of productivity growth similar to those of the past 30 years (a similar assumption is made for U.S. growth after 2035). Slowing population growth and a gradual slowing of labor productivity over time lead to somewhat slower growth in all regions in the second half of the century. Growth is generally more rapid in lower income countries than in higher income countries, assuming some catch-up. Growth in POL4.5 is slowed by about 0.2 % to as much as 0.6 % per year, depending on the region, compared with No Policy. Going from POL4.5 to POL3.7 cuts another 0.1 to 0.2 %. EPPA models international trade and so solves in market exchange rates. An alternative is to convert GDP to purchasing power parity (PPP). Using PPP would revalue upward the level of GDP in many of the lower income countries but would not change the growth rates. Such revaluation would give more weight to lower income countries and so would tend to raise the world average. In later figures we aggregate these regions into the USA, Other Developed, Other G-20, and Rest of the World.
Countries included in these regional groupings are defined in Online Resource 1 . The G-20 comprises the 20 largest economies of the world. Our Other G-20 group—large economies not among the traditional “developed” economies—is an approximation given the level of aggregation in the EPPA model: South Africa, Turkey and Saudi Arabia are among the G-20 but not in our grouping because they are aggregated into other large regions in EPPA. High Income Asia includes Korea and Indonesia, among the G-20, but also several other smaller economies (see Online Resource 1 and Paltsev et al. 2012). The most important results in terms of effects on climate are global GHG emissions (Fig. 2 ). Included are CO 2 emissions from fossil energy combustion and industry (i.e. primarily cement), as well as CH 4 , N 2 O, PFCs, HFCs, and SF 6 . CO 2 emissions from land are not included in this total, but an estimate of global net emissions from human activity is included in the Online Resources . The IGSM has an active terrestrial vegetation model that responds to climate and atmospheric CO 2 , an active ocean model that takes up CO 2 , and explicit atmospheric chemistry that, for example, oxidizes CH 4 into CO 2 . Given historical estimates of fossil and industrial emissions and the modeled ocean and terrestrial vegetation response, we infer anthropogenic land-use emissions as the additional emissions needed to match recent trends in observed concentrations. The IGSM also separately models natural sources of CH 4 (wetlands) and N 2 O (unfertilized soils), and how such emissions respond to climate change. Amplified natural emissions due to climate change are not included in Fig. 2 . However, all of these changes are reflected in estimates of changes in concentrations, radiative forcing and temperature reported in the next section. Fig. 2 Anthropogenic emissions: CO 2 (fossil and industrial), CH 4 , N 2 O, HFCs, SF 6 and PFCs (Mt of CO 2 -equivalent). See Online Resource 1 for regions Full size image Emissions of other substances beyond the “Kyoto” gases are also important for the climate, either because of their direct radiative effect or, indirectly, as the result of atmospheric chemistry processes. EPPA projects emissions of SO 2 , NOx, CO, VOCs, BC, OC, and NH 3 . EPPA models multiple sources of each of these substances, both as co-products (with CO 2 ) of the combustion of fossil fuels and as outputs of other activities, many associated with agricultural practices and biomass burning. Coefficients of emissions per unit of activity vary by fuel (oil, coal, gas), by sector and technology, by region, and over time, reflecting greater attention to pollution reduction by different regions as they advance and develop. For those emissions directly associated with fuel combustion, carbon policy has a strong ancillary effect (mostly beneficial reductions). This is less so for those emissions associated with biomass burning. In our projections ( Online Resources ), China is often the largest regional source of these pollutants because of its large and rapidly growing energy use and limited pollution control. The exceptions are BC and OC, for which Africa is the largest source because of biomass burning. In the Reference scenario, SO 2 , BC, and OC rise a bit initially but then fall, even though underlying fossil energy consumption is growing, because we represent emissions coefficients as falling due to development and pollution (but not climate) policy. Emissions of NOx, CO and VOCs continue to grow, as these have proved more difficult to control, although more slowly than fossil fuel use.
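Returning briefly to the land-use emissions inference described above: the residual calculation amounts to a simple carbon budget. The sketch below is our own formulation of that bookkeeping, not code from the IGSM, and it omits the full atmospheric chemistry the model actually applies:

```python
def inferred_land_use_emissions(delta_atm, fossil_industrial,
                                ocean_uptake, land_uptake):
    """Anthropogenic land-use emissions as a carbon-budget residual.

    delta_atm: observed change in atmospheric carbon over the period.
    fossil_industrial: estimated fossil and industrial emissions.
    ocean_uptake, land_uptake: modeled carbon removed from the atmosphere
    by the ocean and terrestrial vegetation (positive values = uptake).
    All quantities share the same units (e.g. GtC per year).
    """
    # Budget: delta_atm = fossil_industrial + land_use - ocean_uptake - land_uptake
    # solved for the land-use term:
    return delta_atm - fossil_industrial + ocean_uptake + land_uptake
```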
In both POL4.5 and POL3.7, the existence of policies that affect fossil fuel use, and the technologies used to burn it, further reduces emissions of other pollutants. This link is weakest for those pollutants, e.g., BC and OC, that are dominated by biomass burning, especially burning related to agricultural practices such as open savannah burning or land clearing. 5 Global earth system implications Figure 3 shows CO 2 concentrations and GHG global radiative forcing for the three emissions scenarios along with the SRES (Nakićenović et al. 2000) and RCP scenarios (van Vuuren et al. 2011). Under the reference scenario, the CO 2 concentration reaches 830 ppm in 2100 (and 1750 ppm of CO 2 -equivalent) and the GHG radiative forcing is 9.7 W/m 2 (total radiative forcing of 10 W/m 2 ). Even though the CO 2 concentrations are lower than in the RCP8.5 and SRES A1FI scenarios, the GHG global radiative forcing is larger. That is largely caused by higher emissions (and thus concentrations) of CH 4 , both natural and anthropogenic. The large reduction in greenhouse gas concentration and global radiative forcing achieved by the two policies is clear. The implementation of POL4.5 leads to a CO 2 concentration of 500 ppm (600 ppm of CO 2 -equivalent) and a GHG radiative forcing of 4.3 W/m 2 (total radiative forcing of 4.5 W/m 2 ) in 2100. Under POL3.7, the CO 2 concentration is 460 ppm (500 ppm of CO 2 -equivalent) in 2100 and the GHG radiative forcing is 3.6 W/m 2 (total radiative forcing of 3.7 W/m 2 ). Fig. 3 CO 2 concentration, in ppm, and greenhouse gases (GHG) radiative forcing, in W/m 2 (including CO 2 , CH 4 , N 2 O, PFCs, SF 6 , HFCs, CFCs and HCFCs), for the three scenarios presented in this paper, the four RCP scenarios and the SRES scenarios A1FI, A1B, A2 and B1. CO 2 concentrations observed at Mauna Loa are also shown until 2012 Full size image Figure 4 shows time series of global mean temperature, precipitation, sea level rise (including thermal expansion and the melting of glaciers, but excluding the melting of ice sheets) and ocean pH for the 12 core simulations with the IGSM. The difference in emissions between REF and even the more modest POL4.5 is quite large, and even more so for POL3.7. However, through about 2040 the uncertainty in climate sensitivity tends to dominate the sea level rise and the changes in temperature and precipitation. That is because emissions cuts of long-lived GHGs must accumulate before we see a large effect on various earth system outcomes. On the other hand, ocean pH is mainly affected by the emissions scenario, because the oceans are absorbing about a third of the CO 2 emitted into the atmosphere. As atmospheric CO 2 concentrations increase, oceans become more acidic. After 2040, the policy scenarios begin to separate from the reference scenario. By 2080, there is clear separation: even with the lowest climate sensitivity (CS2.0), the temperature in the reference scenario is above that of POL4.5 with the highest climate sensitivity (CS6.0). These scenarios quantify the general conclusion that in the nearer term the future climate will be largely controlled by natural variability and the climate response (the actual climate sensitivity of the earth system). Scientific research, combined with the simple unfolding of the climate over the next decades, will narrow this range of outcomes. Policy choices will have relatively little effect in the near term, but become the dominant factor by the second half of the century.
Of course, this is a receding window; if we don’t start to significantly change the current emissions path until 2030 or 2040, then we will not see a strong effect of those policies until very late in the century. Fig. 4 Time series of global mean surface air temperature and precipitation anomalies from the 1981–2010 base period, sea level rise (including thermal expansion and melting of glaciers, but excluding ice sheets) from the 1981–2010 base period, and ocean pH for the 12 core simulations with the IGSM. The black lines represent observations: the Goddard Institute for Space Studies (GISS) surface temperature (GISTEMP, Hansen et al. 2010), the 20th Century Reanalysis V2 precipitation (Compo et al. 2011) and the Church and White Global Mean Sea Level Reconstruction (Church and White 2011). The blue, green, orange and red lines represent, respectively, the simulations with a climate sensitivity of 2.0, 3.0, 4.5 and 6.0 °C. The solid, dashed and dotted lines represent, respectively, the simulations with the reference scenario, the stabilization scenario at 4.5 W/m 2 and the stabilization scenario at 3.7 W/m 2 Full size image The reference scenario clearly takes the earth system into dangerous territory, even under the assumption of a low climate sensitivity, with temperature increases by the end of the century ranging from 3.5 to 8.5 °C. Global precipitation increases range from 0.3 to 0.6 mm/day by 2100. Generally, the simulation with the largest increase in temperature also shows the largest increase in precipitation. Sea level rise, excluding the melting of ice sheets, shows a range of 40 to 80 cm in 2100. However, the range of sea level rise is likely underestimated because only one value of the ocean heat uptake rate is considered in this study. Even if temperature increases halted at their end-of-simulation levels, gradual warming of the ocean would continue for hundreds of years, and with it, sea level rise. Finally, under the reference scenario, global mean ocean pH decreases to about 7.8 by the end of the simulations, compared to about 8.05 at present. The reduced pH would strongly affect marine organisms and have economic implications for fisheries (see Lane et al. 2013). Both POL4.5 and POL3.7 greatly reduce these risks, and current uncertainty in climate response dominates the difference between these policies through 2100. Of course, regardless of how uncertainty is resolved, lower GHG concentrations will lead to less climate change. 6 Conclusions We designed scenarios for impact assessment that explicitly address policy choices and uncertainty in climate response. These were designed as part of a broader climate impact assessment for the US, with the goal of providing a more consistent picture of climate impacts on the US and of how those impacts depend on uncertainty in climate system response and on policy choices. We stressed the difference in outcomes between one policy scenario (POL4.5) and a somewhat more stringent one (POL3.7), particularly because the POL3.7 scenario is not among the RCPs, yet is likely a more plausible policy target than the RCP2.6 scenario. Clearly, the long-term risks of climate change, beyond 2050, can be strongly influenced by policy choices. In the nearer term, the climate we will observe is hard to influence with policy, and what we actually see will be strongly influenced by natural variability and the earth system response to existing greenhouse gases.
In the end, the nature of the system is that a strong effect caused by policy, especially policy directed toward long-lived GHGs, will lag 30 to 40 years behind its implementation. Hence if we delay and make a choice only when climate sensitivity is revealed, we may find we are on a path that will take us into dangerous territory with little we can do to stop it, short of geoengineering. | Since the 1990s, scientists and policymakers have proposed limiting Earth's average global surface temperature to 2 degrees C above pre-industrial levels, thereby averting the most serious effects of global warming, such as severe droughts and coastal flooding. But until recently, they lacked a comprehensive estimate of the likely social and economic benefits—from lives saved to economies preserved—that would result from greenhouse gas emissions reduction policies designed to achieve the 2 C goal. Now, a team of researchers from the MIT Joint Program on the Science and Policy of Global Change has published a study in Climatic Change that provides scenarios that climate scientists can use to estimate such benefits. The study projects greenhouse gas emissions levels and changes in precipitation, ocean acidity, sea level rise and other climate impacts throughout the 21st century resulting from different global greenhouse gas (GHG) mitigation scenarios. The scenarios include a business-as-usual future and one aimed at achieving significant GHG emission reductions limiting global warming since pre-industrial times to 2 C. Research groups convened by the U.S. Environmental Protection Agency have already begun using the MIT projections to evaluate the benefits of a 2 C emissions reduction scenario for agriculture, water, health, and other global concerns. "The U.S. EPA used our scenarios for a report on the benefits of global climate action, which, to my knowledge, is the most comprehensive analysis to date to quantify the economic, health, and environmental benefits for the United States from greenhouse gas emission mitigation," says Sergey Paltsev, co-author of the Climatic Change study and a senior research scientist and deputy director at the MIT Joint Program. "We have much more experience defining the cost of mitigation than the benefits. The goal of this project was to put a dollar value on damages from climate change in a number of sectors." Putting a dollar value on the benefits of climate action Using its Integrated Global System Model (IGSM)—which tracks climate, socioeconomic, and technological change over time—to produce its greenhouse gas emissions and climate change projections, the MIT team ran global policy scenarios through simulations designed to capture a range of uncertainty in the climate's response to changes in average global temperature. According to the team's estimates, with no policy implemented between now and 2100, increases in global temperature will range from 3.5 to 8 degrees C, precipitation from 0.3 to 0.6 millimeters per day and sea level from 40 to 80 centimeters. Ocean acidity will also rise, threatening marine life and commercial fisheries. Global GHG emissions reduction policies, which lower greenhouse gas concentrations, would reduce these climate impacts considerably. 
Based on the MIT projections, the EPA report, "Climate Change in the United States: Benefits of Global Action," shows that a 2 C stabilization would save thousands of lives threatened by extreme heat and billions of dollars in infrastructure expenses, while preventing destruction of natural resources and ecosystems. Prepared as part of the ongoing Climate Change Impacts and Risk Analysis (CIRA) project, an EPA-led collaborative modeling effort among teams in the federal government, MIT, the Pacific Northwest National Laboratory, the National Renewable Energy Laboratory and several consulting firms, the report estimates how climate change would impact 20 sectors in health, infrastructure, electricity, water resources, ecosystems and agriculture and forestry. In more than 35 studies, the EPA-funded researchers pinpointed a large number of climate impacts that could be averted, or at least reduced, by a 2 C stabilization, from lost wages due to extreme temperatures, to damage to bridges from heavy river flows. By enabling scientists to calculate damages incurred under different global mitigation scenarios on each impact sector, the IGSM-based projections are empowering them to put a dollar value on the benefits of more aggressive climate action. A long-term problem The MIT study found that the intended effects of more stringent climate policy would not be realized until the second half of the century, when they would begin to outweigh the effects of natural climate variability. By the end of the century, however, climate policies would result in significantly lower temperatures, greenhouse gas emissions and climate impacts than the no-policy option. "Even in aggressive emissions reduction scenarios we don't see a response in climate and temperature until mid-century, but by 2100 the response is dramatic," Paltsev says. "It's hard to achieve global consensus on such policies because the costs must be paid now and the benefits come later." But this CIRA project, which only captures some of the impacts of climate change, demonstrates that the benefits to the U.S. of global climate action can be substantial, and that they grow over time. Paltsev cautions that by delaying action until more negative effects of climate change are felt, the world will have fewer options at its disposal to stabilize the global climate. | 10.1007/s10584-013-0892-3 |
Biology | Impact of urbanization on wild bees underestimated | Gordon Fitch et al, Changes in adult sex ratio in wild bee communities are linked to urbanization, Scientific Reports (2019). DOI: 10.1038/s41598-019-39601-8 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-019-39601-8 | https://phys.org/news/2019-03-impact-urbanization-wild-bees-underestimated.html | Abstract Wild bees are indispensable pollinators, supporting global agricultural yield and angiosperm biodiversity. They are experiencing widespread declines, resulting from multiple interacting factors. The effects of urbanization, a major driver of ecological change, on bee populations are not well understood. Studies examining the aggregate response of wild bee abundance and diversity to urbanization tend to document minor changes. However, the use of aggregate metrics may mask trends in particular functional groups. We surveyed bee communities along an urban-to-rural gradient in SE Michigan, USA, and document a large change in observed sex ratio (OSR) along this gradient. OSR became more male-biased as urbanization increased, mainly driven by a decline in medium- and large-bodied ground-nesting female bees. Nest site preference and body size mediated the effects of urbanization on OSR. Our results suggest that previously documented negative effects of urbanization on ground-nesting bees may underestimate the full impact of urbanization, and highlight the need for improved understanding of sex-based differences in the provision of pollination services by wild bees. Introduction Wild bees (Apoidea: Hymenoptera) are critically important both to agricultural production and the maintenance of angiosperm biodiversity 1 , 2 . But populations of many species are in widespread decline 3 due to multiple interacting factors, including parasites and disease 4 , climate change 3 , 4 , pesticide use 5 and habitat loss 5 . Pesticide use and habitat loss, in particular, are largely driven by agricultural conversion and intensification 5 , 6 , 7 . Urbanization has also contributed to habitat loss worldwide, evidenced by the increase in the amount of land occupied by urban development in the past 50 years 8 , 9 , and this trend is expected to accelerate in the coming decades 9 . Less well understood, however, is how urbanization affects bee communities. Studies examining changes in bee communities along the rural-to-urban gradient have found relatively minor effects on overall abundance and diversity, particularly in comparison to the effects of agricultural intensification 10 , 11 , 12 . However, evaluating only aggregate abundance and diversity masks trends in particular guilds of bees; most notably, studies have consistently found reduced abundance and/or diversity of ground-nesting bees in urban areas 10 , 12 , 13 , 14 , 15 . This has been attributed to the lack of appropriate nesting substrate for ground-nesting bees in urban areas, though reduction in ground-nesting bee nest density or nest site availability has rarely been shown directly (but see ref. 16 ). Thus, while the available evidence suggests that urban areas are capable of supporting bee communities 17 , 18 , it also indicates that these communities are likely to differ systematically from those found outside cities, with, for example, an underrepresentation of ground-nesting bees 10 .
While considering nesting or feeding ecology can reveal differential effects of urbanization on bee communities 10 , using ecological guild or even species as the unit of analysis may obscure other important effects of urbanization on bee communities. In particular, life history differences between female and male bees seem likely to result in distinct trends in observed sex ratio (OSR) with increasing urbanization 19 . There are three non-exclusive mechanisms by which urbanization may drive changes in OSR, explored in greater detail below: (1) sex-specific patterns of movement and dispersal, (2) labile sex ratios and (3) temperature. Sex-specific movement patterns For most of their life cycle, non-parasitic female bees are central-place foragers, collecting nectar and pollen in order to provision their brood; as a result, most foraging occurs close to the nest site 20 . Male bees, on the other hand, do not engage in parental care, instead dispersing in search of mates. Moreover, while reproductive females also disperse from the natal nest prior to establishing their own nest, females tend to disperse shorter distances than males 21 , 22 , 23 . In urban landscapes, habitat patches (e.g. community gardens) are fragmented within a built matrix that is likely to be low in suitable nesting sites (at least for ground-nesting bees 10 , 16 ) and floral resources 24 (but see ref. 25 ). Sex-based differences in movement patterns, in combination with this high degree of fragmentation, could result in changes to OSRs relative to those seen in more intact landscapes. Labile sex ratios Sex allocation in bees is labile, and dependent in part on (1) food resource availability, with greater food abundance resulting in a higher proportion of female offspring 26 , 27 , and (2) brood cell parasitism rates, with increased parasitism pressure resulting in reduced provisioning and therefore fewer female offspring 28 , 29 . Systematic changes in the ability of foragers to provision their brood along the urban-to-rural gradient, resulting from changes in either the abundance or distribution of suitable floral resources or brood parasitism rates, could therefore result in OSR shifts along an urbanization gradient. Temperature Another possible explanation for differences in OSR across an urban-to-rural gradient may be phenological shifts associated with the urban heat island effect. Bee emergence is related to temperature 30 , suggesting that bee phenology may be advanced in more urbanized areas, where temperatures are higher than in the surrounding landscape 31 . Moreover, in many solitary bee species, male bees tend to emerge several days earlier than female bees. Since pan-trap sampling occurred on the same day at all sites for each sampling bout, this could lead to shifts in OSR as temperature increases along the urbanization gradient if the sampling date fell during the emergence period of one or more species. Environmentally generated spatial variation in OSR in bees has scarcely been investigated. Recent work has documented decreases in female relative abundance in bumble bees ( Bombus spp.) along a rural-to-urban gradient 19 . Since most bumble bee species are eusocial (or social parasites), sex allocation - and resultant OSR - may be influenced by factors absent in solitary bee populations, including queen-worker genetic conflict 32 , 33 . Thus, findings from bumble bees cannot necessarily be extrapolated to the wild bee community as a whole, comprised as it is primarily of non-eusocial species.
This study represents, to the best of our knowledge, the first investigation of OSR in a complete wild bee community along a land use gradient. The potential for urbanization to drive changes in OSR is significant for several reasons. First, changes in adult sex ratios can affect population dynamics 34 , 35 ; assuming a constant sex ratio when modeling demographic rates can lead to incorrect conclusions about population trends 34 , 35 . Second, there is evidence for sex-based differences in bee foraging behavior, including floral preferences 36 , floral constancy (i.e. the tendency to sequentially visit flowers of the same species) 37 , pollen transfer efficiency 38 , and flight distance between foraging bouts 37 , 38 . Thus, changes to OSR have the potential to impact both bee population dynamics and pollination services. Here, we document a shift in OSR in bee communities found in community gardens along a rural-to-urban gradient, where the proportion of male bees increases with urbanization. We find that the observed increase in male relative abundance is primarily due to declining absolute abundance of medium- and large-bodied female ground-nesting bees as urbanization increases. We discuss potential mechanisms that may generate the OSR shifts, as well as implications for future research on urban bee communities. Results We caught a total of 3,336 bees (Table S1 ) consisting of 143 species across 28 genera (Table S2 ). Of these, 2,481 (74%; 95 species) belonged to species that nest underground (hereafter ‘ground-nesting bees’), while 855 (26%; 48 species) belonged to species that nest above ground in cavities or hollow stems (hereafter ‘cavity-nesting bees’). Ground-nesting bees in the sampled population comprised 60.9% eusocial bees, 18.8% solitary bees, and 18.1% bees that either nest communally or exhibit variability in sociality mode (classified as “other” in this paper; see Methods). The remaining 2.2% of ground-nesting bees were cleptoparasites or bees with unknown sociality mode. Cavity-nesting bees were overwhelmingly solitary (96.0%). The effect of urbanization, as measured by impervious surface cover (hereafter ‘ISC’), on bee OSR was qualitatively similar regardless of the scale at which it was assessed (500 m, 1 km, 1.5 km, or 2 km), but the model assessing urbanization at the 2 km scale had the lowest AIC value (Table 1 ; ∆AIC = 1.48 for the next-best model, measuring urbanization at 1.5 km), so this scale was used for subsequent analyses. Table 1 Model comparison for predicting bee observed sex ratio. Full size table We found residual spatial autocorrelation (SA) in a small number of models; in all other cases calculation of Moran’s I indicated no SA. In cases where SA was detected, inclusion of Moran’s eigenvectors as predictors in the model had only minor effects on estimates and significance levels of other model terms and overall model fit, with no qualitative changes to results (Table S3 ). Below, where we report model output for a model that had SA, reported values are from the model modified to include Moran’s eigenvectors. Neither total bee abundance nor the abundance of ground-nesting bees was affected by urbanization (total: t = −0.36, d.f. = 24, p = 0.73; ground-nesting: t = −1.09, d.f. = 24, p = 0.29), but the abundance of cavity-nesting bees increased with urbanization (t = 2.62, d.f. = 24, p = 0.01) (Figure S1 ).
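As a brief aside on the spatial autocorrelation check mentioned above: the original analysis was run in R, but an equivalent residual check can be sketched in Python with the libpysal/esda stack. The file names, the choice of k, and the weights scheme below are all our own assumptions for illustration:

```python
import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran

# Placeholders: site coordinates and residuals from a fitted OSR model.
coords = np.loadtxt("site_coords.txt")      # hypothetical input file
resid = np.loadtxt("osr_model_resid.txt")   # hypothetical input file

w = KNN.from_array(coords, k=5)  # k = 5 neighbours is an arbitrary choice
w.transform = "r"                # row-standardize the spatial weights

mi = Moran(resid, w)
print(f"Moran's I = {mi.I:.3f}, p = {mi.p_norm:.3f}")
# If significant, Moran eigenvectors can be added as model predictors,
# analogous to the spdep::ME() adjustment used in the paper.
```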
Local floral resource availability, whether measured by total number of blooms, total floral area or floral richness within 20 m of the sampling point, had no effect on bees when season-long average floral resource availability was considered, but positively influenced total bee abundance and richness when survey periods were taken into account (Table 2 ). While the effects of all 3 floral metrics were qualitatively similar, floral abundance, as measured by the total number of blooms, was the best predictor of bee abundance, while floral richness was the best predictor of bee richness (Table 2 ). The effects of floral resource availability were similar whether bees were considered in aggregate or with nesting guilds and sexes considered separately, except that floral resource availability was not found to influence the abundance of male cavity-nesting bees (the AIC value for the model including floral richness was identical to that for the model omitting it; Table 2 ). Table 2 Relationship between floral resource availability and bee community characteristics. Full size table No metric of floral resource availability (considered either as season-long averages or separated by period) was a significant predictor of OSR (Tables 1 and 2 ). Mean minimum temperature was a significant predictor of OSR, though less so than ISC (Table S4 ), and this relationship was driven by mean minimum temperature’s correlation with ISC. When all observations across the survey were combined, the best model for predicting OSR, as determined by AIC values, included only urbanization (Table 1 ). When sampling periods were considered separately, models including floral resource metrics were indistinguishable from the model including only urbanization and sampling period (∆AIC < 1; Table 2 ). These findings held whether the entire bee community was considered together or nesting guilds were considered independently (Tables S5 and S6 ). Moreover, there was no relationship between local floral resource metrics and urbanization (Table S7 ). The observed sex ratio (OSR) of the sample populations changed significantly along the rural-to-urban gradient, with the relative abundance of females decreasing with urbanization ( z = −4.73, d.f. = 23, p < 0.001; Fig. 1A ). This overall change in OSR was driven entirely by changes in ground-nesting bees ( z = −3.66, d.f. = 23, p < 0.001, corrected for SA); in cavity-nesting bees OSR was consistent across the rural-to-urban gradient ( z = −1.42, d.f. = 23, p = 0.16). The change in OSR in ground-nesting bees is the result of declining female abundance with increasing urbanization ( t = −2.18, d.f. = 24, p = 0.04); male abundance remained essentially unchanged across the urbanization gradient ( t = 1.41, d.f. = 24, p = 0.17; Fig. 1B ). These results are robust even with the removal of eusocial members of the genus Bombus - the most abundant genus of ground-nesting bees - from the dataset (Table S3 ). By contrast, in cavity-nesting bees, abundance of both sexes increased with urbanization, marginally so in females ( t = 1.98, d.f. = 24, p = 0.06) and significantly in males ( t = 3.36, d.f. = 24, p = 0.003; Fig. 1C ). These results were robust even when we removed the 2 sites with >50% ISC from the dataset (Table S8 ), indicating that the relationship between OSR and urbanization emerges even at low-to-moderate degrees of urbanization.
The decline in female relative abundance in ground-nesting bees was significant and consistent across sociality classes (Table 3 ), suggesting that shifts in OSR were not related to degree of sociality. This was true despite the smaller number of solitary bees and bees in the ‘other’ category caught relative to eusocial bees. Therefore, the pattern of OSR shift in ground-nesting bees was not driven exclusively by eusocial species. Figure 1 Effects of urbanization on wild bee community. Relationship between the level of urban development (measured as proportional ISC within 2 km of the sampling site) and ( A ) bee observed sex ratio (OSR) per site (z = −4.73, d.f. = 23, p < 0.001); ( B ) ground-nesting bee abundance per site of females (red, t = −2.18, d.f. = 24, p = 0.04) and males (blue, t = 1.41, d.f. = 24, p = 0.17); and ( C ) cavity-nesting bee abundance per site of females (red, t = 1.98, d.f. = 24, p = 0.06) and males (blue, t = 3.36, d.f. = 24, p = 0.003). Fitted line in A represents GLM fit of female abundance offset by total abundance; in ( B , C ) lines represent GLM fit of female (red) or male (blue) abundance. Shaded regions represent standard error. Full size image Table 3 Bee observed sex ratio responses to urbanization by sociality class. Full size table The relationship between urbanization and OSR in ground nesters is mediated by body size: there was no effect of urbanization on OSR in small ground-nesting bees ( z = −0.686, d.f. = 24, p = 0.49), while both medium and large ground-nesting bees experienced decreases in female relative abundance with increasing urbanization. The effect of urbanization on OSR was stronger for large than for medium bees [large: z = −3.79, d.f. = 23, p < 0.001 (corrected for SA); medium: z = −3.07, d.f. = 24, p = 0.002; Figure S2 ]. No change in OSR was seen across the season for cavity-nesting bees (z = −0.872, d.f. = 86, p = 0.383). In ground-nesting bees, OSR changed significantly across the season, with the proportion of female bees caught declining at each successive sampling period (z = –8.345, d.f. = 99, p < 0.001). Moreover, the correlation between ground-nesting bee OSR and urbanization was seen only in the second half of the survey season; periods one (19 May–5 Jun) and two (19 Jun–2 Jul) show no significant change in OSR across the rural-to-urban gradient (Fig. 2 ). Figure 2 Relationship between wild bee observed sex ratio (OSR) in ground-nesting bees and urbanization across the flying season. Each period includes one bout of netting and two flanking bouts of pan trapping. ( A ) Period 1: 19 May – 5 June, ( B ) Period 2: 19 June – 2 July, ( C ) Period 3: 17 July – 13 August, ( D ) Period 4: 26 August – 26 September. The p-values refer to the effect of ISC on OSR. Shaded region represents standard error. Full size image Discussion Here we document a shift in observed sex ratio (OSR) of ground-nesting bees along an urbanization gradient, with the relative abundance of female bees declining as urbanization increases while the abundance of male ground-nesting bees remains unaffected. Because provisioning female bees tend to focus foraging efforts in the vicinity of their nest 20 , female abundance is likely to be correlated with local nest density. Therefore, these data suggest that urbanization reduces nest density of ground-nesting bees, consistent with findings from other researchers 10 , 12 , 13 , and that this generates the observed OSR shift in ground-nesting bees.
A key question, then, is why we do not see a parallel decline in male ground-nesting bees with increasing urbanization. One possible explanation is that, because male bees are not tied to a nest, they may be disproportionately abundant in floral resource-rich areas even if nest density in these areas is low. Male bees also tend to disperse longer distances than dispersing reproductive females 21 , 22 , 23 . Sampling sites for this study were located in community gardens, which tend to have higher density and diversity of floral resources than the surrounding landscape 39 , and the disparity between within-garden and outside-garden floral resource availability increases with urbanization in the study region 39 . This pattern could lead to a disproportionate concentration of male bees at more urban sampling sites. Our finding that urbanization-associated changes in ground-nesting bee OSR occur only among medium- and large-bodied bees further supports this explanation. Movement distance in bees is strongly correlated with body size 40 ; thus, males of larger species are more likely to disperse sufficiently far from their natal nest to reach resource patches in urban landscapes. The sample of smaller-bodied bees, on the other hand, more closely reflects the makeup of the locally originating population. An alternative explanation for the observed OSR shift could be urbanization-induced changes in sex allocation by bees. Specifically, it is known that sex allocation in bees can be influenced by floral resource availability 26 , 27 and abundance of brood parasites 28 , 29 , as both factors influence provisioning of brood cells 27 , 28 . In most bee species, the production of reproductive females requires greater resource investment than the production of males; consequently, reduced maternal provisioning may result in a shift towards production of males 27 , 28 . In eusocial species, production of workers (which are female but do not reproduce) is also correlated with resource availability 41 . Thus, the observed decrease in female relative abundance with urbanization could be the result of a reduction in the production of females due to reduced floral resource availability in urban landscapes. Our finding that OSR is influenced by urbanization for only medium and large bees is potentially consistent with either sex allocation or dispersal-based explanations for OSR shifts: larger bees are likely to both have larger foraging ranges (and thus be more affected by floral resource availability in the wider landscape, potentially leading to increased production of males in resource-scarce landscapes) and disperse greater distances (allowing for disproportionate concentration of dispersing males in urban habitat patches). However, effects of floral resource scarcity should not depend on bee nesting strategy; the fact that the OSR shift was found only in ground-nesting bees argues against a resource-mediated shift in sex allocation. Moreover, floral surveys revealed no relationship between urbanization and local (20 m) floral resource availability. Study sites were located within community gardens, which in our study area tend to have higher floral abundance and richness than the surrounding landscape 39 , so the lack of correlation between local floral resource availability and urbanization does not preclude the possibility that landscape-scale floral resource availability was negatively correlated with urbanization; we did not assess landscape-scale floral resource availability in this study.
However, the high diversity and abundance of floral resources found within garden study sites likely attenuates the effect of landscape-level floral resource availability 42 . In contrast, parasitism rates may depend on nesting strategy, with ground-nesting bees likely experiencing higher parasite pressure 43 , 44 . This is consistent with our finding that OSR shifts occurred only in ground-nesting bees, but does not offer any immediate explanation as to why OSR shifts were not apparent in small ground-nesting bees. While it is possible that parasite pressure along the urbanization gradient contributes to OSR shifts, we did not directly assess parasite abundance in this study, nor are we aware of any study that assesses brood cell parasitism rates along an urban gradient. Further research on the environmental drivers of brood parasitism in bees is needed before we can reach conclusions about the role of parasitism in the observed OSR shift. We also considered temperature as one of the potential mechanisms driving shifts in OSR. Urbanization and average daily minimum temperature are significantly correlated, but the available data do not support temperature as a mechanism for shifts in OSR. While temperature did significantly correlate with the changes seen in overall OSR, this is due to the aforementioned correlation between temperature and urbanization. Temperature also had a distinctly less favorable AIC score when compared to urbanization (Table S4 ). Additionally, if higher temperatures were causing earlier emergence of males in urban sites, it would be reasonable to expect that OSR would be more male-biased earlier in the season, which we did not find (Fig. 2 ). The lack of pattern in the OSR of cavity-nesting bees further weakens the case for temperature: if ambient temperature were driving the OSR shift, one would expect changes similar to those seen in the OSR of ground-nesting bees. Our finding that bee abundance and richness are positively related to local floral resource availability is consistent with other studies of the determinants of bee community composition in urbanized landscapes 14 , 45 , 46 . This is true despite the fact that the scale at which floral resources were sampled is far smaller than the foraging range of most bees. These findings highlight the importance of floral resource-rich habitat patches for conserving wild bee populations, particularly in highly fragmented landscapes. Finally, we found that the abundance of both male and female cavity-nesting bees increased with urbanization. This is expected, given that a number of studies have reported similar patterns for cavity-nesting bees 10 , 12 , 13 , 45 , 46 , 47 , 48 . This may be due to the presence of anthropogenic cavities in urbanized habitats providing nesting resources for cavity-nesting bees 12 , 49 , in contrast to ground-nesting bees. Cavity-nesting bees may also benefit from a reduction in competition for floral resources due to the reduced abundance of female ground-nesting bees. Our findings highlight the importance of considering sex-specific differences in bee behavior when analyzing the effects of environmental change on bee populations. Even though our results pertain to just one year of sampling and interannual variation may affect the degree of change in OSR, they suggest that research may be underestimating the negative impacts of urbanization on ground-nesting bees.
While multiple studies have found reductions in ground-nesting bee populations in urban areas 10 , 12 , 13 , the magnitude of these reductions may be greater than what total abundance measures indicate if, as we suggest in this study, urban ground-nesting bee populations are subsidized by males dispersing from less urban areas. Further research in other urban areas is needed to determine the generality of the trend we document here, and to conclusively distinguish among the potential mechanisms driving urbanization-related OSR shifts in ground-nesting bees. Finally, these results stress the need for improved understanding of how sex-specific behavior in bees, including patterns of floral preference and pollen transfer efficiency, affects pollination services. At this point, while we know enough to suspect that these differences may be substantial 36 , 37 , 38 , further research is needed to predict the effects of a local shift in bee community sex ratio on plant communities. Methods Study location Sampling occurred at 26 sites distributed along a rural-to-urban gradient in southeastern Michigan, USA (Figure S3 ). Sites spanned a distance of 110 km, with the surrounding land use ranging from dense urban core to suburban to rural-agricultural. Twenty-one of the 26 sites were community gardens, 3 were nature reserves, and the remaining 2 were rural farms. The gardens or farms sampled in each city were either part of an independent managing organization, single-homeowner properties, or property of the University of Michigan (see Table S9 ). All gardens and farms included in the study observed organic growing practices, prohibiting the use of synthetic pesticides and fertilizers. Pollinator sampling Bee fauna were sampled from 19 May to 25 September 2014, using pan traps and active netting. This combination of sampling techniques is widely used in studies of bees, and has been shown to thoroughly sample bee communities 50 . For pan trapping, 2 oz (59 mL) plastic cups (Dart Container Corporation, Mason, MI USA) coated with UV-reflective paint in one of three colors (white, yellow, and blue) were filled with water and a small amount of soap as a surfactant. Pan traps were placed at all sites once every two weeks for a total of 9 trapping dates. Sampling occurred only on days that were sunny or partly sunny, with wind speeds below 4 m/s. On sampling days, pan traps were placed in all 26 sites before 1000 h and left for 24 h. Pan traps were then removed and all trapped arthropods were placed in 70% ethanol for later processing. Within each site, two pan traps of each color (6 total) were arranged in a 4 m × 2 m rectangle, with a pan trap placed at each vertex and at the midpoint of each longer side. This arrangement of pan traps is more compact than that used by some studies, and was devised to accommodate the small areas we were often granted access to sample in (e.g. one plot within a community garden) and to keep our sampling standardized across the study area. To maintain visibility to bees and accommodate changing vegetation heights over the course of the sampling period, pan traps were affixed to adjustable-height PVC pipes and positioned 5–10 cm above the surrounding herbaceous vegetation. Netting at each site occurred 4 times over the sampling season, once a month from May–September.
To account for variation in diurnal activity patterns across species, each sampling event comprised two 30-minute sessions, one between 0900–1200 h and another between 1300–1600 h, with the same requirements on meteorological conditions as for pan trapping (see above). Netted bees were transferred to vials containing 70% ethanol for later processing. All bees were identified to species and assigned to sex. Identification was accomplished using the Discoverlife key 51 , with additional identifications made by Dr. Jason Gibbs (University of Manitoba, Winnipeg, Canada) and Jamie Pawelek (Wild Bee Garden Design, formerly University of California Berkeley, USA). Specimens are housed at the University of Michigan Museum of Zoology (accession numbers UMMZI-99924 through UMMZI-103259). Pollinator natural history and body size data Once all specimens were identified to species, natural history profiles were compiled for each species using four characteristics: preferred nesting substrate, sociality, native status, and body size (Table S2 ). Most natural history data were generously provided by Dr. Jason Gibbs, supplemented as necessary with literature searches. Bees with known modes of sociality were placed in one of four categories: eusocial, solitary, cleptoparasitic, and other. The ‘other’ category includes species that (1) exhibit both solitary and eusocial strategies either within or across populations or (2) nest communally (‘solitary social’ and ‘communal’ designations in Table S2 ). Analyses across sociality showed no pattern for cleptoparasites or bees with unknown sociality. However, the number of bees we caught in these categories was very low, which limited statistical power. As a measure of body size, we used female intertegular (IT) distance, which is strongly correlated with flight ability and is therefore a proxy measurement of bee dispersal ability and foraging distance 40 . When IT distance could not be found in the literature 52 , we measured the IT distance of 5 females of that species from our collection and took the mean as the species-specific IT distance. In cases where the species was represented by fewer than 5 individuals, we took measurements from all available specimens; in general, variance in IT distance across conspecific individuals was small (Table S2 ). For eusocial species, we measured IT distances of workers only. Bees were then classified as small (≤1.5 mm), medium (>1.5–3.0 mm), or large (>3.0 mm) on the basis of IT span. Landscape-level impervious surface measurements We used National Land Cover Database (NLCD) data from 2011 53 to calculate the amount of urban development surrounding each study site, as described in ref. 19 . Briefly, we used proportion of ISC as our measure of urbanization, and measured ISC at radii of 500 m, 1 km, 1.5 km, and 2 km around each study site. The NLCD classifies land cover in 30 m cells; summing the cells within the relevant radius that are categorized as high- or medium-intensity developed gives the total area of impervious surface within each buffer (Table S10 ). We used GLMs with Poisson distribution and log-link function to determine the radius at which ISC had the most explanatory power over bee observed sex ratio (OSR), and therefore which radius to use in subsequent analyses. For each radius, we fit a model with overall OSR as the response variable and proportional ISC at the radius of interest as the sole predictor. We used AIC values to select the best radius.
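A minimal sketch of this buffer calculation, assuming the NLCD raster has already been reduced to a boolean "developed" grid; the function and its simplifications (flat projection, square cells) are our own, not from the paper:

```python
import numpy as np

def prop_isc(developed, site_rc, radius_m, cell_m=30):
    """Proportion of impervious surface within a radius of a site.

    developed: 2D boolean array, True where the 30 m NLCD cell is
    medium- or high-intensity developed. site_rc: (row, col) index of
    the site. Returns the fraction of cells within radius_m that are
    developed, i.e. proportional ISC for that buffer.
    """
    rows, cols = np.ogrid[:developed.shape[0], :developed.shape[1]]
    dist_m = np.hypot(rows - site_rc[0], cols - site_rc[1]) * cell_m
    return developed[dist_m <= radius_m].mean()
```

Repeating this at 500 m, 1 km, 1.5 km and 2 km yields the four candidate predictors that are then compared by AIC in the radius-selection step just described.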
Local floral resource and temperature measurements Floral resource availability within 20 m of the center of pan trap placement was measured on each pan-trap sampling date. We identified all plants in flower within this circle to species or morphospecies, and recorded the number of open blooms on each species using a modified logarithmic scale (1–10 blooms, 11–50, 51–100, 101–200, 201–500, 500–1000, >1000). Species-specific flower dimensions were recorded in the field, and per-flower area calculated, as in ref. 4 . Per-species floral area at a given survey was then calculated by multiplying floral abundance (the mean value of the abundance bin) by flower size. Summing counts and area for all species gives the overall floral abundance and area, respectively, at each site per sampling date. Mean floral count and area, and total floral richness, were calculated for each survey period (see Analysis ) and for the entire season (Table S11 ), and we used these metrics to assess the effect of floral resources on the bee community. Temperature at each site was measured by data loggers (HOBO, Onset Computer Corporation, Bourne, MA USA) placed in an unshaded area within the floral survey circle. Data loggers remained throughout the sampling season and recorded daily average, minimum and maximum temperatures every 24 hours. Because data loggers at several sites were compromised, temperature data were available for 22 of 26 sites (Table S9 ). While mean minimum temperature had a significant effect on OSR ( z = −2.92, d.f. = 20, p = 0.003), it was also significantly correlated with ISC (e.g. p < 0.001 at 2 km radius). The direction and magnitude of the effects of temperature and ISC were similar, and the model including ISC had a lower AIC value (ΔAIC = 5.43). Thus, we omitted any measure of temperature from the analyses described below; including mean minimum temperature in our models had little impact on model outcomes (Table S4 ). Analysis All analyses were carried out in R v.3.4.1 54 . Because we were interested in the response of wild bees to urbanization, we excluded records of the managed European honey bee ( Apis mellifera ) from our analysis; A. mellifera represented 4.9% of collected bees (164 individuals). The OSR was calculated by summing the number of female bees of all species collected at a site and dividing by the total number of bees collected at that site. For analysis of the relationship between floral resources and OSR only, OSR at each site was calculated for each sampling period, rather than for the entire season. To model the relationship between OSR and environmental variables, we used GLMs with Poisson distribution and log-link function. To avoid the difficulties of interpretation when modeling ratios, we used the number of female bees as our response variable, with log(total bee abundance) included as an offset. Predictor variables in the maximal model included ISC within 2 km, season-long average floral area within 20 m, and season-long total floral richness within 20 m. To test the relationship between OSR and each predictor, we conducted stepwise reduction of the model, beginning with the predictor showing the least explanatory power. The best model was then selected using AIC comparison. Models were checked for overdispersion, and in all cases the dispersion parameter value was <1.4.
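A sketch of this core OSR model in Python's statsmodels, standing in for the R GLMs actually used; the data frame, its column names, and the two-model comparison are hypothetical simplifications:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

sites = pd.read_csv("site_summaries.csv")  # hypothetical per-site table

def fit_osr_glm(predictors):
    """Poisson GLM of female counts, offset by log(total counts), so that
    coefficients describe effects on the observed sex ratio (OSR)."""
    X = sm.add_constant(sites[predictors])
    return sm.GLM(sites["n_female"], X,
                  family=sm.families.Poisson(),
                  offset=np.log(sites["n_total"])).fit()

# Simplified two-model comparison standing in for the stepwise
# reduction-by-AIC procedure described above.
full = fit_osr_glm(["isc_2km", "floral_area", "floral_richness"])
reduced = fit_osr_glm(["isc_2km"])
best = min((full, reduced), key=lambda m: m.aic)
```

The offset is the key design choice: rather than modeling a ratio directly, the female count is modeled with log(total) fixed at coefficient 1, which keeps the response a well-behaved count while letting predictors act on the proportion.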
Because floral resource availability varied idiosyncratically across sites over the study duration, we evaluated the relationship between floral resource availability and bee community measures for each of 4 periods (in addition to the season-long averages outlined above). Each period consisted of a net-sampling event and the two bracketing pan-trapping events, and had a duration of 3–4 weeks. To model this relationship, we used GLMMs with site as a random effect, and an additional observation-level random effect to account for overdispersion. The maximal model for each bee community measure (i.e. abundance or OSR) in this case included a floral resource metric (abundance, area, or richness), survey period, and ISC. Because of strong collinearity among the 3 floral metrics, a separate model was fit for each metric. Model selection proceeded as outlined above. Since OSR was not affected by floral resource availability (see Results), further investigation of the association between OSR and environmental or bee community attributes was conducted using season-long averages and GLMs, as described in the previous paragraph, rather than data separated by period. As the radius at which impervious surface was considered increased, the buffers around some study sites overlapped. To account for this, we assessed spatial autocorrelation (SA) by calculating Moran’s I for the residuals of all models. In cases where residuals showed significant SA, we used the Moran eigenvector approach 55 , implemented with the ‘ME()’ function in the R package ‘spdep’ 56 , to adjust the model to account for SA. This approach accounts for SA through the addition of one or more orthogonal eigenvectors describing the spatial structure in the data as model predictor term(s). Only 2 of 26 sites in the study had >50% ISC at the 2 km scale. To test whether these 2 sites had undue influence on our results, we conducted an analysis identical to that described above after removing observations from those sites from the dataset (Table S8 ). To assess whether patterns in OSR differed across the season, we fit a model similar to those outlined for overall OSR, but with the addition of period and a period × ISC interaction as predictors. Because this model indicated both a significant effect of period and a significant period × ISC interaction (see Results), we then fit a separate model for each period, with ISC at the 2 km radius as the sole predictor. To determine whether OSR response to urbanization was significantly affected by species attributes (i.e. preferred nest substrate, body size, and sociality), we constructed GLMs that included urbanization, the species attribute of interest, and an urbanization × species attribute interaction term as predictors; a significant interaction term indicated significant differences among species in OSR response to urbanization, mediated by the attribute of interest. When a significant interaction was found, we divided the data by the attribute of interest (e.g. ground-nesting vs. cavity-nesting; small, medium, or large bees), then ran a separate model for each guild to assess how the relationship between OSR and urbanization varied across guilds. A parallel analysis was conducted for bee abundance, with model form, predictors, and model selection process as above, with two exceptions. First, because we were looking at abundance, rather than OSR, these models omitted the offset term.
Second, abundance data were in all cases significantly overdispersed; to account for this overdispersion, a quasi-Poisson distribution was used in place of the Poisson distribution. Because AIC values cannot be calculated for quasi-likelihood models, we instead used the related quasi-AIC metric for model comparison. Bumble bees ( Bombus spp.) made up a large portion of our sample set; most bumble bee species are eusocial, and as such may experience controls on OSR that differ substantially from those of other bees (see Introduction). In order to verify that changes in the OSR were not driven solely by bumble bees, additional analyses of the relationship between ground-nesting bee OSR and urbanization were run with bumble bees removed. These analyses show no qualitative differences: OSR still becomes less female-biased as urbanization increases (Table S3 ). We assessed the relationship between metrics of floral resource availability and urbanization using GLMs with log-link function. As with the bee abundance data, floral richness and area metrics were significantly overdispersed, so quasi-Poisson distributions were used to account for overdispersion. Data Availability All data from the analyses presented here are included as Supplementary Data files with this publication. | Wild bees are indispensable pollinators, supporting both agricultural productivity and the diversity of flowering plants worldwide. But wild bees are experiencing widespread declines resulting from multiple interacting factors. A new University of Michigan-led study suggests that the effects of one of those factors—urbanization—may have been underestimated. The study, led by a group of current and former U-M students and conducted at sites across southeast Michigan, looks at one aspect of this topic they say has received scant attention from bee researchers: the sex ratio of wild bees and how it changes across a rural-to-urban land-use gradient. The team found that the sex ratio of wild bees became more male-dominated as urbanization increased, mainly driven by a decline in medium- and large-bodied ground-nesting female bees. The study, published March 6 in the journal Scientific Reports, is believed to be the first investigation of observed sex ratio in a complete wild bee community along a rural-to-urban gradient. "These findings have potential implications on bee population health and pollination services, since male and female bees often have different pollination behaviors," said Paul Glaum, one of the study's first authors and a postdoctoral researcher in U-M's Department of Ecology and Evolutionary Biology. Female and male bees of the same species often pollinate different plant species. As a result, a decline in female bees has the potential to limit pollination services for part of the plant community, he said. Additionally, a declining female population can mean fewer mates for male bees. This threatens ground-nesting bees' reproduction rates and their ability to maintain future generations of pollinating bees. It may even threaten the genetic diversity of these species, Glaum said. "Our results suggest that research may be underestimating the negative impacts of urbanization on ground-nesting bees and highlight the importance of considering sex-specific differences in bee behavior when analyzing the effects of environmental change on bee populations," he said.
To better understand how urbanization affects wild bee populations, the U-M-led team sampled wild bees at community gardens, nature reserves and farms across southeast Michigan. Sampling was done at 26 sites spanning nearly 70 miles. Land use surrounding the sampling sites ranged from densely populated cities to suburban to rural-agricultural. Sampling was done in several southeast Michigan cities, including Dexter, Ann Arbor, Ypsilanti, Dearborn and Detroit.

Team members caught more than 3,300 bees from 143 species. Because they were strictly interested in the effects of urbanization on wild bees, domesticated European honeybees were not included in the analysis. Seventy-four percent of the captured bees belonged to species that nest underground (ground-nesting), and the rest belonged to species that nest above ground in cavities or hollow tree stems.

The researchers found that the sex ratio of the wild bees became more male biased as urbanization increased, mainly driven by a decline in medium- and large-bodied ground-nesting female bees. In their Scientific Reports paper, the researchers suggest several possible explanations for these findings. In urban landscapes, where floral resources are scattered and patchy, larger-bodied male bees are more likely than females to disperse sufficiently far from their home nest to reach food sources and to survive.

An alternative explanation for the observed sex ratio shift has to do with urbanization-induced changes in sex allocation by bees. In most bee species, the production of reproductive females requires a greater investment of food resources than the production of males. As a result, a scarcity of pollen and nectar could result in a shift toward production of more male bees.

"While multiple studies have found reductions in ground-nesting bee populations in urban areas, the magnitude of these reductions may be greater than what total abundance measures indicate if, as we suggest in this study, urban ground-nesting bee populations are subsidized by males dispersing from less urban areas," the authors wrote.

Populations of many wild bee species are in widespread decline worldwide due to multiple interacting factors. Habitat loss, parasites and disease, pesticide use and climate change have all been blamed. Urbanization contributes to habitat loss, and that trend is expected to accelerate in coming decades. Previous studies have consistently found a reduced abundance and/or diversity of ground-nesting bees in urban areas, a finding that has been attributed to the lack of suitable nesting sites in cities. | 10.1038/s41598-019-39601-8
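The spatial-autocorrelation check described in the methods above was performed in R ('spdep'); purely as an illustration, a from-scratch Python sketch of Moran's I on model residuals, with a permutation test and made-up site coordinates, might look as follows (the Moran eigenvector adjustment itself is not shown):

```python
# Illustrative Python translation of the Moran's I residual check described
# above; the published analysis used R's 'spdep'. All data here are made up.
import numpy as np

def morans_i(values, coords, k=4):
    """Moran's I for `values` observed at `coords`, using row-standardized
    k-nearest-neighbour weights."""
    n = len(values)
    # Pairwise Euclidean distances between sites.
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)           # a site is not its own neighbour
    w = np.zeros((n, n))
    for i in range(n):
        w[i, np.argsort(dists[i])[:k]] = 1.0  # binary weights for k nearest sites
    w /= w.sum(axis=1, keepdims=True)         # row-standardize
    z = values - values.mean()
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(26, 2))    # 26 site locations (km)
residuals = rng.normal(size=26)               # residuals from a fitted model
obs = morans_i(residuals, coords)

# Permutation test: shuffle residuals across sites to build a null distribution.
null = np.array([morans_i(rng.permutation(residuals), coords) for _ in range(999)])
p = (1 + np.sum(np.abs(null) >= abs(obs))) / (1 + len(null))
print(f"Moran's I = {obs:.3f}, permutation p = {p:.3f}")
```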
Medicine | AI offers tool to improve surgeon performance | Dani Kiyasseh et al, A vision transformer for decoding surgeon activity from surgical videos, Nature Biomedical Engineering (2023). DOI: 10.1038/s41551-023-01010-8 Dani Kiyasseh et al, Human visual explanations mitigate bias in AI-based assessment of surgeon skills, npj Digital Medicine (2023). DOI: 10.1038/s41746-023-00766-2 Dani Kiyasseh et al, A multi-institutional study using artificial intelligence to provide reliable and fair feedback to surgeons, Communications Medicine (2023). DOI: 10.1038/s43856-023-00263-3 Journal information: Communications Medicine , Nature Biomedical Engineering , npj Digital Medicine | https://dx.doi.org/10.1038/s41551-023-01010-8 | https://medicalxpress.com/news/2023-04-ai-tool-surgeon.html | Abstract The intraoperative activity of a surgeon has substantial impact on postoperative outcomes. However, for most surgical procedures, the details of intraoperative surgical actions, which can vary widely, are not well understood. Here we report a machine learning system leveraging a vision transformer and supervised contrastive learning for the decoding of elements of intraoperative surgical activity from videos commonly collected during robotic surgeries. The system accurately identified surgical steps, actions performed by the surgeon, the quality of these actions and the relative contribution of individual video frames to the decoding of the actions. Through extensive testing on data from three different hospitals located in two different continents, we show that the system generalizes across videos, surgeons, hospitals and surgical procedures, and that it can provide information on surgical gestures and skills from unannotated videos. Decoding intraoperative activity via accurate machine learning systems could be used to provide surgeons with feedback on their operating skills, and may allow for the identification of optimal surgical behaviour and for the study of relationships between intraoperative factors and postoperative outcomes. Main The overarching goal of surgery is to improve postoperative patient outcomes 1 , 2 . It was recently demonstrated that such outcomes are strongly influenced by intraoperative surgical activity 3 , that is, what actions are performed by a surgeon during a surgical procedure and how well those actions are executed. For the vast majority of surgical procedures, however, a detailed understanding of intraoperative surgical activity remains elusive. This scenario is all too common in other domains of medicine, where the drivers of certain patient outcomes either have yet to be discovered or manifest differently. The status quo within surgery is that intraoperative surgical activity is simply not measured. Such lack of measurement makes it challenging to capture the variability in the way surgical procedures are performed across time, surgeons and hospitals, to test hypotheses associating intraoperative activity with patient outcomes, and to provide surgeons with feedback on their operating technique. Intraoperative surgical activity can be decoded from videos commonly collected during robot-assisted surgical procedures. Such decoding provides insight into what procedural steps (such as tissue dissection and suturing) are performed over time, how those steps are executed (for example, through a set of discrete actions or gestures) by the operating surgeon, and the quality with which they are executed (that is, mastery of a skill; Fig. 1 ). 
Currently, if a video were to be decoded, it would be through a manual retrospective analysis by an expert surgeon. However, this human-driven approach is subjective, as it depends on the interpretation of activity by the reviewing surgeon; unreliable, as it assumes that a surgeon is aware of all intraoperative activity; and unscalable, as it requires the presence of an expert surgeon and an extensive amount of time and effort. These requirements are particularly unreasonable where expert surgeons are unavailable (as in low-resource settings) and already pressed for time. As such, there is a pressing need to decode intraoperative surgical activity in an objective, reliable and scalable manner.

Fig. 1: An AI system that decodes intraoperative surgical activity from videos. a, Surgical videos commonly collected during robotic surgeries are decoded via SAIS into multiple elements of intraoperative surgical activity: what is performed by a surgeon, such as the suturing subphases of needle handling, needle driving and needle withdrawal, and how that activity is executed by a surgeon, such as through discrete gestures and at different levels of skill. b, SAIS is a unified system, since the same architecture can be used to independently decode different elements of surgical activity, from subphase recognition to gesture classification and skill assessment.

Given these limitations, emerging technologies such as artificial intelligence (AI) have been used to identify surgical activity 4 , gestures 5 , surgeon skill levels 6 , 7 and instrument movements 8 exclusively from videos. However, these technologies are limited to decoding only a single element of intraoperative surgical activity at a time (such as only gestures), limiting their utility. These technologies are also seldom rigorously evaluated, and it remains an open question whether they generalize to, or perform well in, new settings, such as with unseen videos from different surgeons, surgical procedures and hospitals. Such a rigorous evaluation is critical to ensuring the development of safe and trustworthy AI systems.

In this study, we propose a unified surgical AI system (SAIS) that decodes multiple elements of intraoperative surgical activity from videos collected during surgery. Through rigorous evaluation on data from three hospitals, we show that SAIS reliably decodes multiple elements of intraoperative activity, from the surgical steps performed to the gestures that are executed and the quality with which they are executed by a surgeon. This reliable decoding holds irrespective of whether videos are of different surgical procedures and from different surgeons across hospitals. We also show that SAIS decodes such elements more reliably than state-of-the-art AI systems, such as Inception3D (I3D; ref. 6 ), which have been developed to decode only a single element (such as surgeon skill). We also show that SAIS, through deployment on surgical videos without any human-driven annotations, provides information about intraoperative surgical activity, such as its quality over time, that otherwise would not have been available to a surgeon. Through a qualitative assessment, we demonstrate that SAIS provides accurate reasoning behind its decoding of intraoperative activity. With these capabilities, we illustrate how SAIS can be used to provide surgeons with actionable feedback on how to modulate their intraoperative surgical behaviour.
Results SAIS reliably decodes surgical subphases We decoded the 'what' of surgery by tasking SAIS to distinguish between three surgical subphases: needle handling, needle driving and needle withdrawal (Fig. 1). For all experiments, we trained SAIS on video samples exclusively from the University of Southern California (USC) (Table 1). A description of the surgical procedures and subphases is provided in Methods. Table 1 Total number of videos and video samples associated with each of the hospitals and tasks

Generalizing across videos We deployed SAIS on the test set of video samples from USC, and present the receiver operating characteristic (ROC) curves stratified according to the three subphases (Fig. 2a). We observed that SAIS reliably decodes surgical subphases, with areas under the receiver operating characteristic curve (AUC) of 0.925, 0.945 and 0.951 for needle driving, needle handling and needle withdrawal, respectively. We also found that SAIS can comfortably decode the high-level steps of surgery, such as suturing and dissection (Supplementary Note 3 and Supplementary Fig. 2).

Fig. 2: Decoding surgical subphases from videos. a–c, SAIS is trained on video samples exclusively from USC and evaluated on those from USC (a), SAH (b) and HMH (c). Results are shown as an average (±1 standard deviation) of ten Monte Carlo cross-validation steps. d, We trained variants of SAIS to quantify the marginal benefit of its components on its PPV. We removed test-time augmentation ('without TTA'), RGB frames ('without RGB'), flow maps ('without flow') and the self-attention mechanism ('without SA'). We found that the attention mechanism and the multiple-modality input (RGB and flow) are the greatest contributors to PPV. e, We benchmarked SAIS against an I3D model when decoding subphases from entire VUA videos without human supervision. Each box reflects the quartiles of the results, and the whiskers extend to 1.5× the interquartile range.

Generalizing across hospitals To determine whether SAIS can generalize to unseen surgeons at distinct hospitals, we deployed it on video samples from St. Antonius Hospital (SAH) (Fig. 2b) and Houston Methodist Hospital (HMH) (Fig. 2c). We found that SAIS continued to excel, with AUC ≥0.857 for all subphases and across hospitals.

Benchmarking against baseline models We deployed SAIS to decode subphases from entire videos of the vesico-urethral anastomosis (VUA) suturing step (20 min long) without any human supervision (inference section in Methods). We present the F1@10 score (Fig. 2e), a commonly reported segmental metric 9 , and contextualize the performance of SAIS relative to that of a state-of-the-art I3D network 6 . We found that SAIS decodes surgical subphases more reliably than I3D, with these models achieving F1@10 scores of 50 and 40, respectively.

The performance of SAIS stems from its attention mechanism and multiple data modalities To better appreciate the degree to which the components of SAIS contributed to its overall performance, we trained variants of SAIS after having removed or modified these components (ablation section in Methods), and report their positive predictive value (PPV) when decoding the surgical subphases (Fig. 2d). We found that the self-attention (SA) mechanism was the largest contributor to the performance of SAIS: its removal resulted in a ∆PPV of approximately −20.
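(An aside on evaluation: the per-class results reported in this section are one-vs-rest AUC values averaged over ten Monte Carlo train/test splits. A minimal sketch of that computation, assuming scikit-learn and hypothetical stand-in arrays in place of actual SAIS outputs, is given below.)

```python
# Sketch of the per-class evaluation used above: one-vs-rest ROC AUC per
# subphase, averaged (+/- SD) over ten Monte Carlo folds. The labels and
# probabilities are random stand-ins, not actual SAIS outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["needle handling", "needle driving", "needle withdrawal"]
rng = np.random.default_rng(0)
fold_aucs = []
for fold in range(10):                        # ten Monte Carlo cross-validation steps
    y_true = rng.integers(0, 3, size=200)     # ground-truth subphase labels
    probs = rng.dirichlet(np.ones(3), 200)    # predicted class probabilities
    fold_aucs.append([roc_auc_score(y_true == c, probs[:, c]) for c in range(3)])

for name, m, s in zip(classes, np.mean(fold_aucs, axis=0), np.std(fold_aucs, axis=0)):
    print(f"{name}: AUC {m:.3f} +/- {s:.3f}")
```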
Returning to the ablation: the size of this drop implies that capturing the relationship between, and temporal ordering of, frames is critical for the decoding of intraoperative surgical activity. We also observed that the dual-modality input (red–green–blue, or RGB, frames and optical flow) has a greater contribution to performance than using either modality of data alone. By removing RGB frames ('without RGB') or optical flow ('without flow'), the model exhibited an average ∆PPV of approximately −3 relative to the baseline implementation. Such a finding suggests that these two modalities are complementary to one another. We therefore used the baseline model (SAIS) for all subsequent experiments.

SAIS reliably decodes surgical gestures In the previous section, we showed the ability of SAIS to decode surgical subphases (the 'what' of surgery) and to generalize to video samples from unseen surgeons at distinct hospitals, and also quantified the marginal benefit of its components via an ablation study. In this section, we examine the ability of SAIS to decode surgical gestures (the 'how' of surgery) performed during both tissue suturing and dissection activities (descriptions of the gestures and activities are provided in Methods). For the suturing activity (VUA), we trained SAIS to distinguish between four discrete suturing gestures: right forehand under (R1), right forehand over (R2), left forehand under (L1) and combined forehand over (C1). For the dissection activity, known as nerve sparing (NS), we trained SAIS to distinguish between six discrete dissection gestures: cold cut (c), hook (h), clip (k), camera move (m), peel (p) and retraction (r). We note that training was performed on video samples exclusively from USC.

Generalizing across videos We deployed SAIS on the test set of video samples from USC, and present the ROC curves stratified according to the discrete suturing gestures (Fig. 3a) and dissection gestures (Fig. 3b). There are two main takeaways. First, we observed that SAIS can generalize well to both suturing and dissection gestures in unseen videos, as exhibited by the high AUC achieved across the gestures. For example, in the suturing activity, AUC was 0.837 and 0.763 for the right forehand under (R1) and combined forehand over (C1) gestures, respectively. In the dissection activity, AUC was 0.974 and 0.909 for the clip (k) and camera move (m) gestures, respectively. These findings bode well for the potential deployment of SAIS on unseen videos for which ground-truth gesture annotations are unavailable, an avenue we explore in a subsequent section. Second, we found that the performance of SAIS differs across the gestures. For example, in the dissection activity, AUC was 0.701 and 0.974 for the retraction (r) and clip (k) gestures, respectively. We hypothesize that the strong performance of SAIS for the latter stems from the clear visual presence of a clip in the surgical field of view. On the other hand, the ubiquity of retraction gestures in the surgical field of view could explain the relatively lower ability of SAIS to decode retractions. Retraction is often annotated as such when it is actively performed by a surgeon's dominant hand. However, as a core gesture that is used to, for example, improve a surgeon's visualization of the surgical field, a retraction often complements other gestures. As such, it can occur simultaneously with, and thus be confused for, other gestures by the model.

Fig. 3: Decoding surgical gestures from videos.
a, SAIS is trained and evaluated on the VUA data exclusively from USC. The suturing gestures are right forehand under (R1), right forehand over (R2), left forehand under (L1) and combined forehand over (C1). b–d, SAIS is trained on the NS data exclusively from USC and evaluated on the NS data from USC (b), NS data from SAH (c) and HD data from USC (d). The dissection gestures are cold cut (c), hook (h), clip (k), camera move (m), peel (p) and retraction (r). Note that clips (k) are not used during the HD step. Results are shown as an average (±1 standard deviation) of ten Monte Carlo cross-validation steps. e, Proportion of predicted gestures identified as correct (precision), stratified on the basis of the anatomical location of the neurovascular bundle in which the gesture is performed. f, Gesture profile in which each row represents a distinct gesture and each vertical line represents the occurrence of that gesture at a particular time. SAIS identified a sequence of gestures (hook, clip and cold cut) that is expected in the NS step of RARP procedures, and discovered outlier behaviour of a longer-than-normal camera move gesture corresponding to the removal, inspection and re-insertion of the camera into the patient's body.

Generalizing across hospitals To measure the degree to which SAIS can generalize to unseen surgeons at a distinct hospital, we deployed it on video samples from SAH (Fig. 3c; video sample counts in Table 1). We found that SAIS continues to perform well in such a setting. For example, AUC was 0.899 and 0.831 for the camera move (m) and clip (k) gestures, respectively. Importantly, such a finding suggests that SAIS can be reliably deployed on data with several sources of variability (surgeon, hospital and so on). We expected, and indeed observed, a slight degradation in performance in this setting relative to when SAIS was deployed on video samples from USC. For example, AUC was 0.823 → 0.702 for the cold cut (c) gesture in the USC and SAH data, respectively. This was expected, owing to the potential shift in the distribution of data collected across the two hospitals, which has been documented to negatively affect network performance 10 . Potential sources of distribution shift include variability in how surgeons perform the same set of gestures (for instance, different techniques) and in the surgical field of view (for example, a clear view with less blood). Furthermore, our hypothesis for why this degradation affects certain gestures (such as cold cuts) more than others (such as clips) is that the latter exhibit less variability than the former, and are thus easier for the model to classify.

Generalizing across surgical procedures While videos of different surgical procedures (such as nephrectomy versus prostatectomy) may exhibit variability in, for example, anatomical landmarks (such as kidney versus prostate), they are still likely to reflect the same tissue dissection gestures. We explored the degree to which such variability affects the ability of SAIS to decode dissection gestures. Specifically, we deployed SAIS on video samples of a different surgical step, renal hilar dissection (HD), from a different surgical procedure, robot-assisted partial nephrectomy (RAPN) (Fig. 3d; video sample counts in Table 1). We observed that SAIS manages to adequately generalize to an unseen surgical procedure, albeit with degraded performance, as expected (0.615 < AUC < 0.858 across the gestures).
Interestingly, the hook (h) gesture experienced the largest degradation in performance (AUC 0.768 → 0.615). We hypothesized that this was due to the difference in the tissue in which a hook is performed. Whereas in the NS dissection step, a hook is typically performed around the prostatic pedicles (a region of blood vessels), in the renal HD step, it is performed in the connective tissue around the renal artery and vein, delivering blood to and from the kidney, respectively. Validating on external video datasets To contextualize our work with previous methods, we also trained SAIS to distinguish between suturing gestures on two publicly available datasets: JHU-ISI gesture and skill assessment working set (JIGSAWS) 11 and dorsal vascular complex University College London (DVC UCL) 12 ( Methods ). While the former contains videos of participants in a laboratory setting, the latter contains videos of surgeons in a particular step (dorsal vascular complex) of the live robot-assisted radical prostatectomy (RARP) procedure. We compare the accuracy of SAIS with that of the best-performing methods on JIGSAWS (Supplementary Table 6 ) and DVC UCL (Supplementary Table 7 ). We found that SAIS, despite not being purposefully designed for the JIGSAWS dataset, performs competitively with the baseline methods (Supplementary Table 6 ). For example, the best-performing video-based method achieved accuracy of 90.1, whereas SAIS achieved accuracy of 87.5. It is conceivable that incorporating additional modalities and dataset-specific modifications into SAIS could further improve its performance. As for the DVC UCL dataset, we followed a different evaluation protocol from the one originally reported 12 (see Implementation details of training SAIS on external video datasets in Methods ) since only a subset of the dataset has been made public. To fairly compare the models in this setting, we quantify their improvement relative to a naive system that always predicts the majority gesture (Random) (Supplementary Table 7 ). We found that SAIS leads to a greater improvement in performance relative to the state-of-the-art method (MA-TCN) on the DVC UCL dataset. This is evident by the three-fold and four-fold increase in accuracy achieved by MA-TCN and SAIS, respectively, relative to a naive system. SAIS provides surgical gesture information otherwise unavailable to surgeons One of the ultimate, yet ambitious, goals of SAIS is to decode surgeon activity from an entire surgical video without annotations and with minimal human oversight. Doing so would provide surgeons with information otherwise less readily available to them. In pursuit of this goal, and as an exemplar, we deployed SAIS to decode the dissection gestures from entire NS videos from USC (20–30 min in duration) to which it has never been exposed ( Methods ). Quantitative evaluation To evaluate this decoding, we randomly selected a prediction made by SAIS for each dissection gesture category in each video ( n = 800 gesture predictions in total). This ensured we retrieved predictions from a more representative and diverse set of videos, thus improving the generalizability of our findings. We report the precision of these predictions after manually confirming whether or not the corresponding video samples reflected the correct gesture (Fig. 3e ). We further stratified this precision on the basis of the anatomical location of the neurovascular bundle relative to the prostate gland. 
This allowed us to determine whether SAIS was (a) learning an unreliable shortcut to decoding gestures by associating anatomical landmarks with certain gestures, which is undesirable, and (b) robust to changes in the camera angle and direction of motion of the gesture. For the latter, note that operating on the left neurovascular bundle often involves using the right-hand instrument and moving it towards the left of the field of view (Fig. 3f , top row of images). The opposite is true when operating on the right neurovascular bundle. We found that SAIS is unlikely to be learning an anatomy-specific shortcut to decoding gestures and is robust to the direction of motion of the gesture. This is evident by its similar performance when deployed on video samples of gestures performed in the left and right neurovascular bundles. For example, hook (h) gesture predictions exhibited precision of ~0.75 in both anatomical locations. We also observed that SAIS was able to identify an additional gesture category beyond those it was originally trained on. Manually inspecting the video samples in the cold cut (c) gesture category with a seemingly low precision, we found that SAIS was identifying a distinct cutting gesture, also known as a hot cut, which, in contrast to a cold cut, involves applying heat/energy to cut tissue. Qualitative evaluation To qualitatively evaluate the performance of SAIS, we present its gesture predictions for a single 30-min NS video (Fig. 3f ). Each row represents a distinct gesture, and each vertical line represents the occurrence of this gesture at a particular time. We observed that, although SAIS was not explicitly informed about the relationship between gestures, it nonetheless correctly identified a pattern of gestures over time which is typical of the NS step within RARP surgical procedures. This pattern constitutes a (a) hook, (b) clip and (c) cold cut and is performed to separate the neurovascular bundle from the prostate while minimizing the degree of bleeding that the patient incurs. We also found that SAIS can discover outlier behaviour, despite not being explicitly trained to do so. Specifically, SAIS identified a contiguous 60-s interval during which a camera move (m) was performed, and which is 60× longer than the average duration (1 s) of a camera move. Suspecting outlier behaviour, we inspected this interval and discovered that it coincided with the removal of the camera from the patient’s body, its inspection by the operating surgeon, and its re-insertion into the patient’s body. SAIS reliably decodes surgical skills At this point, we have demonstrated that SAIS, as a unified AI system, can independently achieve surgical subphase recognition (the what of surgery) and gesture classification (the how of surgery), and generalize to samples from unseen videos in the process. In this section, we examine the ability of SAIS to decode skill assessments from surgical videos. In doing so, we also address the how of surgery, however through the lens of surgeon skill. We evaluated the quality with which two suturing subphases were executed by surgeons: needle handling and needle driving (Fig. 1a , right column). We trained SAIS to decode the skill level of these activities using video samples exclusively from USC. Generalizing across videos We deployed SAIS on the test set of video samples from USC, and present the ROC curves associated with the skills of needle handling (Fig. 4a ) and needle driving (Fig. 4b ). 
We found that SAIS can reliably decode the skill level of surgical activity, achieving AUC of 0.849 and 0.821 for the needle handling and needle driving activity, respectively.

Fig. 4: Decoding surgical skills from videos and simultaneous provision of reasoning. a,b, We train SAIS on video samples exclusively from USC to decode the skill level of needle handling (a) and needle driving (b), and deploy it on video samples from USC, SAH and HMH. Results are an average (±1 standard deviation) of ten Monte Carlo cross-validation steps. c,d, We also present the attention placed on frames by SAIS for a video sample of low-skill needle handling (c) and needle driving (d). Images with an orange bounding box indicate that SAIS places the highest attention on frames depicting visual states consistent with the respective skill assessment criteria. These criteria correspond to needle repositions and needle adjustments, respectively. e, Surgical skills profile depicting the skill assessment of needle handling and needle driving from a single surgical case at SAH. f,g, Ratio of low-skill needle handling (f) and needle driving (g) in each of the 30 surgical cases at SAH. The horizontal dashed lines represent the average ratio of low-skill activity at USC.

Generalizing across hospitals We also deployed SAIS on video samples from unseen surgeons at two hospitals: SAH and HMH (Fig. 4a,b; video sample counts in Table 1). This is a challenging task that requires SAIS to adapt to the potentially different ways in which surgical activities are executed by surgeons with different preferences. We found that SAIS continued to reliably decode the skill level of needle handling (SAH: AUC 0.880, HMH: AUC 0.804) and needle driving (SAH: AUC 0.821, HMH: AUC 0.719). The ability of SAIS to detect consistent patterns across hospitals points to its potential utility for the objective assessment of surgical skills.

Benchmarking against baseline models Variants of the 3D convolutional neural network (3D-CNN) have achieved state-of-the-art results in decoding surgical skills on the basis of videos of either a laboratory trial 6 or a live procedure 13 . As such, to contextualize the utility of SAIS, we fine-tuned a pre-trained I3D model (see Implementation details of I3D experiments in Methods) to decode the skill level of needle handling and needle driving (Table 2). We found that SAIS consistently outperforms this state-of-the-art model when decoding the skill level of surgical activities across hospitals. For example, when decoding the skill level of needle handling, SAIS and I3D achieved AUC of 0.849 and 0.681, respectively. When decoding the skill level of needle driving, they achieved AUC of 0.821 and 0.630, respectively. We also found that I3D was more sensitive to the video samples it was trained on and the initialization of its parameters. This is evident by the higher standard deviation of its performance relative to that of SAIS across the folds (0.12 versus 0.05 for needle driving at USC). Such sensitivity is undesirable as it points to the lack of robustness and unpredictable behaviour of the model. Table 2 SAIS outperforms a state-of-the-art model when decoding the skill level of surgical activity. SAIS is trained on video samples exclusively from USC.
We report the average AUC (±1 standard deviation) on the test set across all ten folds.

SAIS provides accurate reasoning behind decoding of surgical skills The safe deployment of clinical AI systems often requires that they are interpretable 14 . We therefore wanted to explore whether or not SAIS was identifying relevant visual cues while decoding the skill level of surgeons. This would give machine learning practitioners confidence that SAIS is indeed latching onto appropriate features, and can thus be trusted in the event of future deployment within a clinical setting. We first retrieved a video sample depicting a low-skill activity (needle handling or needle driving) that was correctly classified by SAIS. By inspecting the attention placed on its frames by the attention mechanism (architecture in Fig. 5), we were able to quantify the importance of each frame. Ideally, high attention is placed on frames of relevance, where relevance is defined on the basis of the skill being assessed.

Fig. 5: A vision-and-attention-based AI system. SAIS consists of two parallel streams that process distinct input data modalities: RGB surgical videos and optical flow. Irrespective of the data modality, features are extracted from each frame via a ViT pre-trained in a self-supervised manner on ImageNet. Features of video frames are then input into a stack of transformer encoders to obtain a modality-specific video feature. These modality-specific features are aggregated and passed into a projection head to obtain a single video feature, which is either attracted to, or repelled from, the relevant prototype. Although we illustrate two prototypes to reflect binary categories (high-skill activity versus low-skill activity), we would have C prototypes in a setting with C categories.

We present the attention (darker is more important) placed on the frames of video samples of needle handling (Fig. 4c) and needle driving (Fig. 4d) that were correctly classified by SAIS as depicting low skill. We found that SAIS places the most attention on frames that are consistent with the skill assessment criteria. For example, with low-skill needle handling assessed on the basis of the number of times a needle is re-grasped by a surgeon, we see that the most important frames highlight the time when both robotic arms simultaneously hold onto the needle, which is characteristic of a needle reposition manoeuvre (Fig. 4c). Multiple repetitions of this behaviour thus align well with the low-skill assessment of needle handling. Additionally, with needle driving assessed as low skill on the basis of the smoothness of its trajectory, we see that the needle was initially driven through the tissue, adjusted, and then completely withdrawn (opposite to the direction of motion) before being re-driven through the tissue seconds later (Fig. 4d). SAIS placed a high level of attention on the withdrawal of the needle and its adjustment, and was thus in alignment with the low-skill assessment of needle driving. More broadly, these explainable findings suggest that SAIS is not only capable of providing surgeons with a reliable, objective and scalable assessment of skill but can also pinpoint the important frames in the video sample. This capability addresses why a low-skill assessment was made, and bodes well for when SAIS is deployed to provide surgeons with targeted feedback on how to improve their execution of surgical skills.
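A simplified sketch of how per-frame importance of this kind can be read out of an attention mechanism over frame features is shown below. This is generic PyTorch with hypothetical dimensions, not the authors' exact readout: a classification token attends over frame features, and the resulting weights are used to rank frames.

```python
# Rank video frames by the attention a classification token places on them.
# Dimensions and module choice are illustrative; SAIS's readout may differ.
import torch
import torch.nn as nn

D, T = 384, 16                                  # feature dim, number of frames
frame_feats = torch.randn(1, T, D)              # per-frame ViT features
cls = torch.zeros(1, 1, D)                      # classification token
seq = torch.cat([cls, frame_feats], dim=1)      # (1, T+1, D)

attn = nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)
# Returned weights are averaged over heads: shape (batch, query, key).
_, weights = attn(seq, seq, seq, need_weights=True)

frame_importance = weights[0, 0, 1:]            # CLS query's attention on frames 1..T
top_frames = torch.topk(frame_importance, k=3).indices
print("most influential frames:", sorted(top_frames.tolist()))
```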
SAIS provides surgical skill information otherwise unavailable to surgeons We wanted to demonstrate that SAIS can also provide surgeons with information about surgical skills that otherwise would not have been available to them. To that end, we tasked SAIS with assessing the skill of all needle handling and needle driving video samples collected from SAH. With needle handling (and needle driving) viewed as a subphase of a single stitch, and knowing that a sequence of stitches over time makes up a suturing activity (such as VUA) in a surgical case, SAIS can generate a surgical skills profile of needle handling and needle driving for a single case (Fig. 4e). We would like to emphasize that this profile, when generated for surgical cases that are not annotated with ground-truth skill assessments, provides surgeons with actionable information that otherwise would not have been available to them. For example, a training surgeon can now identify temporal regions of low-skill stitch activity, perhaps relate those regions to anatomical locations, and learn to focus on such regions in the future. By decoding profiles for different skills within the same surgical case, a surgeon can now identify whether subpar performance for one skill (such as needle handling) correlates with that for another skill (such as needle driving). This insight will help guide how a surgeon practises such skills.

SAIS can also provide actionable information beyond the individual surgical case level. To illustrate this, we present the proportion of needle handling (Fig. 4f) and needle driving (Fig. 4g) actions in a surgical case that were deemed low skill, for all 30 surgical cases from SAH. We also present the average low-skill ratio observed in surgical videos from USC. With this information, the subset of cases with the lowest rate of low-skill actions can be identified and presented to training surgeons for educational purposes. By comparing case-level ratios to the average ratio at different hospitals (Fig. 4g), surgeons can identify cases that may benefit from further surgeon training.

SAIS can provide surgeons with actionable feedback We initially claimed that the decoding of intraoperative surgical activity can pave the way for multiple downstream applications, one of which is the provision of postoperative feedback to surgeons on their operating technique. Here we provide a template of how SAIS, based on the findings we have presented thus far, can deliver on this goal. In reliably decoding surgical subphases and surgical skills while simultaneously providing its reasoning for doing so, SAIS can provide feedback of the following form: 'when completing stitch number three of the suturing step, your needle handling (what: subphase) was executed poorly (how: skill). This is probably due to your activity in the first and final quarters of the needle handling subphase (why: attention)'. Such granular and temporally localized feedback now allows a surgeon to better focus on the element of intraoperative surgical activity that requires improvement, a capability that was not previously available.

The skill assessments of SAIS are associated with patient outcomes While useful for mastering a surgical technical skill itself, surgeon feedback becomes more clinically meaningful when grounded in patient outcomes. For example, if low-skill assessments are associated with poor outcomes, then a surgeon can begin to modulate specific behaviour to improve such outcomes.
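The next paragraph reports a preliminary analysis of exactly this kind, regressing binary continence recovery on SAIS skill assessments while controlling for caseload and age. A minimal sketch of such a covariate-adjusted logistic regression, assuming statsmodels and entirely hypothetical data and variable names, with the odds ratio recovered as the exponentiated coefficient:

```python
# Sketch of a covariate-adjusted logistic regression relating a binary outcome
# to an AI-derived skill assessment. All data and names here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "recovered": rng.integers(0, 2, 500),      # continence recovered at 3 months
    "high_skill": rng.integers(0, 2, 500),     # 1 = subphase assessed as high skill
    "caseload": rng.integers(50, 2000, 500),   # surgeon's prior case count
    "age": rng.integers(45, 80, 500),          # patient age
})
fit = smf.logit("recovered ~ high_skill + caseload + age", data=df).fit(disp=0)

odds_ratio = np.exp(fit.params["high_skill"])          # exponentiated coefficient
ci_low, ci_high = np.exp(fit.conf_int().loc["high_skill"])
print(f"OR {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
      f"p = {fit.pvalues['high_skill']:.3f}")
```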
To that end, we conducted a preliminary analysis regressing the surgeon skill assessments of SAIS at USC onto a patient’s binary recovery of urinary continence (ability to voluntarily control urination) 3 months after surgery ( Methods ). When considering all video samples (multiple per surgical case), and controlling for surgeon caseload and patient age, we found that urinary continence recovery was 1.31× (odds ratio (OR), confidence interval (CI) 1.08–1.58, P = 0.005) more likely when needle driving was assessed as high skill than as low skill by SAIS. When aggregating the skill assessments of video samples within a surgical case, that relationship is further strengthened (OR 1.89, CI 0.95–3.76, P = 0.071). These preliminary findings are consistent with those based on manual skill assessments from recent studies 15 , 16 . Discussion Only in the past decade or so has it been empirically demonstrated that intraoperative surgical activity can have a direct influence on postoperative patient outcomes. However, discovering and acting upon this relationship to improve outcomes is challenging when the details of intraoperative surgical activity remain elusive. By combining emerging technologies such as AI with videos commonly collected during robotic surgeries, we can begin to decode multiple elements of intraoperative surgical activity. We have shown that SAIS can decode surgical subphases, gestures and skills, on the basis of surgical video samples, in a reliable, objective and scalable manner. Although we have presented SAIS as decoding these specific elements in robotic surgeries, it can conceivably be applied to decode any other element of intraoperative activity from different surgical procedures. Decoding additional elements of surgery will simply require curating a dataset annotated with the surgical element of interest. To facilitate this, we release our code such that others can extract insight from their own surgical videos with SAIS. In fact, SAIS and the methods that we have presented in this study apply to any field in which information can be decoded on the basis of visual and motion cues. Compared with previous studies, our study offers both translational and methodological contributions. From a translational standpoint, we demonstrated the ability of SAIS to generalize across videos, surgeons, surgical procedures and hospitals. Such a finding is likely to instil surgeons with greater confidence in the trustworthiness of SAIS, and therefore increases their likelihood of adopting it. This is in contrast to previous work that has evaluated AI systems on videos captured in either a controlled laboratory environment or a single hospital, thereby demonstrating limited generalization capabilities. From a methodological standpoint, SAIS has much to offer compared with AI systems previously developed for decoding surgical activity. First, SAIS is unified in that it is capable of decoding multiple elements of intraoperative surgical activity without any changes to its underlying architecture. By acting as a dependable core architecture around which future developments are made, SAIS is likely to reduce the amount of resources and cognitive burden associated with developing AI systems to decode additional elements of surgical activity. This is in contrast to the status quo in which the burdensome process of developing specialized AI systems must be undertaken to decode just a single element. 
Second, SAIS provides explainable findings in that it can highlight the relative importance of individual video frames in contributing to the decoding. Such explainability, which we systematically investigate in a concurrent study 17 , is critical to gaining the trust of surgeons and ensuring the safe deployment of AI systems for high-stakes decision making, such as skill-based surgeon credentialing. This is in contrast to previous AI systems such as MA-TCN 12 , which is only capable of highlighting the relative importance of data modalities (for example, images versus kinematics) and therefore lacks the finer level of explainability of SAIS.

SAIS is also flexible in that it can accept video samples with an arbitrary number of video frames as input, primarily owing to its transformer architecture. Such flexibility, which is absent from previous commonly used models such as 3D-CNNs, confers benefits to training, fine-tuning and performing inference. During training, SAIS can accept a mini-batch of videos, each with a different number of frames. This can be achieved by padding videos in the mini-batch (with zeros) that have fewer frames, and appropriately masking the attention mechanism in the transformer encoder (see Implementation details and hyperparameters in Methods; a short sketch of this padding-and-masking pattern appears at the end of this section). This is in contrast to existing AI systems, which must often be presented with a mini-batch of equally sized videos. Similarly, during fine-tuning or inference, SAIS can be presented with an arbitrary number of video frames, thus expanding the spectrum of videos that it can handle. This is in contrast to existing setups that leverage a 3D-CNN pre-trained on the Kinetics dataset 18 , whereby video samples must contain either 16 frames or multiples thereof 6 , 13 . Abiding by this constraint can be suboptimal for achieving certain tasks, and departing from it implies the inability to leverage the pre-trained parameters that have proven critical to the success of previous methods.

Furthermore, SAIS is architecturally different from previous models in that it learns prototypes via supervised contrastive learning to decode surgical activity, an approach that has yet to be explored with surgical videos. Such prototypes pave the way for multiple downstream applications, from detecting out-of-distribution video samples to identifying clusters of intraoperative activity and retrieving samples from a large surgical database 19 .

We also showed that SAIS can provide information that otherwise would not have been readily available to surgeons. This includes surgical gesture and skill profiles, which reflect how surgical activity is executed by a surgeon over time for a single surgical case and across different cases. Such capabilities pave the way for multiple downstream applications that otherwise would have been difficult to achieve. For example, from a scientific perspective, we can now capture the variability of surgical activity across time, surgeons and hospitals. From a clinical perspective, we can now test hypotheses associating intraoperative surgical activity with long-term patient outcomes. This brings the medical community one step closer to identifying, and eventually modulating, causal factors responsible for poor outcomes. Finally, from an educational perspective, we can now monitor and provide surgeons with feedback on their operating technique. Such feedback can help surgeons master necessary skills and contribute to improved patient outcomes.
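As referenced above, here is a minimal sketch of the padding-and-masking pattern that allows a mini-batch of videos with different frame counts to pass through a transformer encoder. This is generic PyTorch, not the authors' code, and the dimensions are hypothetical.

```python
# Batch variable-length frame sequences by zero-padding to the longest video
# and masking padded positions so they are never attended to.
import torch
import torch.nn as nn

D = 384                                              # frame feature dimension
videos = [torch.randn(t, D) for t in (8, 12, 16)]    # three videos, 8/12/16 frames
T = max(v.shape[0] for v in videos)

batch = torch.zeros(len(videos), T, D)
pad_mask = torch.ones(len(videos), T, dtype=torch.bool)  # True = ignore position
for i, v in enumerate(videos):
    batch[i, : v.shape[0]] = v                       # copy real frames
    pad_mask[i, : v.shape[0]] = False                # mark real frames as attendable

layer = nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=4)
out = encoder(batch, src_key_padding_mask=pad_mask)  # padded frames are masked out
print(out.shape)                                     # torch.Size([3, 16, 384])
```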
There are important challenges our work does not yet address. First, our framework, akin to others in the field, is limited to decoding only the elements of surgical activity that have been previously outlined in some taxonomy (such as gestures). In other words, it cannot decode what it does not know. Although many of these taxonomies have been rigorously developed by teams of surgeons and through clinical experience, they may fail to shed light on other intricate aspects of surgical activity. This, in turn, limits the degree to which automated systems can discover novel activity that falls beyond the realm of existing protocols. Such discovery can lend insight into, for example, optimal but as-yet undiscovered surgical behaviour. In a similar vein, SAIS is currently incapable of decoding new elements of surgical activity beyond those initially presented to it. Such continual learning capabilities 10 are critical to adapting to an evolving taxonomy of surgical activity over time.

The goal of surgery is to improve patient outcomes. However, it remains an open question whether or not the decoded elements of intraoperative surgical activity (subphases, gestures and skills) are the factors most predictive of postoperative patient outcomes. Although we have presented preliminary evidence in this direction for the case of surgical skills, large-scale studies are required to unearth these relationships. To further explore these relationships and more reliably inform future surgical practice, we encourage the public release of large-scale surgical video datasets from different hospitals and surgical specialties. Equipped with such videos and SAIS, researchers can begin to decode the various elements of surgery at scale.

Moving forward, we look to investigate whether SAIS has the intended effect on clinical stakeholders. For example, we aim to deploy SAIS in a controlled laboratory environment to assess the skill level of activity performed by medical students and provide them with feedback based on such assessments. This will lend practical insight into the utility of AI-based skill assessments and their perception by surgical trainees. We also intend to explore the interdependency of the elements of intraoperative surgical activity (subphase recognition, gesture classification and skill assessment). This can be achieved, for example, by training a multi-task variant of SAIS in which all elements are simultaneously decoded from a video. In such a setting, positive interference between the tasks could result in an even more reliable decoding. Alternatively, SAIS can be trained to first perform subphase recognition (a relatively easy task) before transferring its parameters to perform skill assessment (a relatively harder task). This is akin to curriculum learning 20 , whereby an AI system is presented with increasingly difficult tasks during the learning process in order to improve its overall performance.

In a concurrent study 21 , we also investigate whether SAIS exhibits algorithmic bias against various surgeon subcohorts 22 . Such a bias analysis is particularly critical if SAIS is to be used for the provision of feedback to surgeons. For example, it may disadvantage certain surgeon subcohorts (such as novices with minimal experience) and thus affect their ability to develop professionally.

Methods Ethics approval All datasets (data from USC, SAH and HMH) were collected under institutional review board approval in which informed consent was obtained (HS-17-00113).
These datasets were de-identified before model development.

Previous work Computational methods Previous studies have used computational methods, such as AI, to decode surgery 23 , 24 . One line of research has focused on exploiting robot-derived sensor data, such as the displacement and velocity of the robotic arms (kinematics), to predict clinical outcomes 25 , 26 , 27 , 28 . For example, researchers have used automated performance metrics to predict a patient's postoperative length of stay within a hospital 26 . Another line of research has instead focused on exclusively exploiting live surgical videos from endoscopic cameras to classify surgical activity 4 , 29 , gestures 5 , 30 , 31 , 32 , 33 and skills 6 , 7 , 13 , 34 , 35 , among other tasks 36 , 37 . For information on additional studies, we refer readers to a recent review 9 . Most recently, attention-based neural networks such as transformers 38 have been used to distinguish between distinct surgical steps within a procedure 39 , 40 , 41 , 42 .

Evaluation setups Previous studies have often split their data in a way that has the potential for information 'leakage' across training and test sets. For example, the commonly adopted leave-one-user-out evaluation setup on the JIGSAWS dataset 11 is believed to be rigorous. Although it lends insight into the generalizability of a model to a video from an unseen participant, this setup involves reporting a cross-validation score, which is often directly optimized by previous methods (for example, through hyperparameter tuning), therefore producing an overly optimistic estimate of performance. As another example, consider the data split used for the CholecT50 dataset 43 . Here, there is minimal information about whether videos in the training and test sets belong to the same surgeon. Lastly, the most recent DVC UCL dataset 12 consists of 36 publicly available videos for training and 9 private videos for testing. After manual inspection, we found that these nine videos come from six surgeons whose data are also in the training set. This is a concrete example of surgeon data leakage, and as such, we caution against the use of such datasets for benchmarking purposes. It is therefore critical to evaluate the performance of SAIS more rigorously, and in accordance with how it is likely to be deployed in a clinical setting.

Description of surgical procedures and activities We focused on surgical videos depicting two types of surgical activity commonly performed within almost any surgery: tissue dissection and suturing, which we outline next in detail.

Tissue dissection Tissue dissection is a fundamental activity in almost any surgical procedure and involves separating pieces of tissue from one another. For example, the RARP surgical procedure, in which a cancerous prostate gland is removed from a patient's body, entails several tissue dissection steps, one of which is referred to as nerve sparing, or NS. NS involves preserving the neurovascular bundle, a mesh of vasculature and nerves to the left and right of the prostate, and is essential for a patient's postoperative recovery of erectile function for sexual intercourse. Moreover, an RAPN surgical procedure, in which part of a cancerous kidney is removed from a patient's body, entails a dissection step referred to as hilar dissection, or HD. HD involves removing the connective tissue around the renal artery and vein to control any potential bleeding from these blood vessels.
These dissection steps (NS and HD), although procedure specific (RARP and RAPN), are performed by a surgeon through a common vocabulary of discrete dissection gestures. In our previous work, we developed a taxonomy 44 enabling us to annotate any tissue dissection step with a sequence of discrete dissection gestures over time. Tissue suturing Suturing is also a fundamental component of surgery 45 and involves bringing tissue together. For example, the RARP procedure entails a suturing step referred to as vesico-urethral anastomosis, or VUA. VUA follows the removal of the cancerous prostate gland and involves connecting, via stitches, the bladder neck (a spherical structure) to the urethra (a cylindrical structure), and is essential for postoperative normal flow of urine. The VUA step typically consists of an average of 24 stitches where each stitch can be performed by a surgeon through a common vocabulary of suturing gestures. In our previous work, we developed a taxonomy 5 enabling us to annotate any suturing activity with a sequence of discrete suturing gestures. We note that suturing gestures are different to, and more subtle than, dissection gestures. Each stitch can also be deconstructed into the three recurring subphases of (1) needle handling, where the needle is held in preparation for the stitch, (2) needle driving, where the needle is driven through tissue (such as the urethra), and (3) needle withdrawal, where the needle is withdrawn from tissue to complete a single stitch. The needle handling and needle driving subphases can also be evaluated on the basis of the skill level with which they are executed. In our previous work, we developed a taxonomy 46 enabling us to annotate any suturing subphase with a binary skill level (low skill versus high skill). Surgical video samples and annotations We collected videos of entire robotic surgical procedures from three hospitals: USC, SAH and HMH. Each video of the RARP procedure, for example, was on the order of 2 h. A medical fellow (R.M.) manually identified the NS tissue dissection step and VUA tissue suturing step in each RARP video. We outline the total number of videos and video samples from each hospital in Table 1 . We next outline how these steps were annotated with surgical subphases, gestures and skill levels. It is important to note that human raters underwent a training phase whereby they were asked to annotate the same set of surgical videos, allowing for the calculation of the inter-rater reliability (between 0 and 1) of their annotations. Once this reliability exceeded 0.8, we deemed the training phase complete 47 . Surgical gesture annotations Each video of the NS dissection step (on the order of 20 min) was retrospectively annotated by a team of trained human raters (R.M., T.H. and others) with tissue dissection gestures. This annotation followed the strict guidelines of our previously developed taxonomy of dissection gestures 44 . We focused on the six most commonly used dissection gestures: cold cut (c), hook (h), clip (k), camera move (m), peel (p) and retraction (r). Specifically, upon observing a gesture, a human rater recorded the start time and end time of its execution by the surgeon. Therefore, each NS step resulted in a sequence of n ≈ 400 video samples of gestures (from six distinct categories) with each video sample on the order of 0–10 s in duration. Moreover, each video sample mapped to one and only one gesture. The same strategy was followed for annotating the VUA suturing step with suturing gestures. 
This annotation followed the strict guidelines of our previously developed taxonomy of suturing gestures 5 . We focused on the four most commonly used suturing gestures: right forehand under (R1), right forehand over (R2), left forehand under (L1) and combined forehand over (C1).

Surgical subphase and skill annotations Each video of the VUA suturing step (on the order of 20 min) was retrospectively annotated by a team of trained human raters (D.K., T.H. and others) with surgical subphases and skills. This annotation followed the strict guidelines of our previously developed taxonomy, referred to as the end-to-end assessment of suturing expertise, or EASE 46 . Since the VUA step is a reconstructive one in which the bladder and urethra are joined together, it often requires a series of stitches (on the order of 24 stitches: 12 on the bladder side and another 12 on the urethral side). With a single stitch consisting of the three subphases of needle handling, needle driving and needle withdrawal (always in that order), a human rater would first identify the start time and end time of each of these subphases. Therefore, each VUA step may have n = 24 video samples of the needle handling, needle driving and needle withdrawal subphases, with each video sample on the order of 10–30 s. The distribution of the duration of such video samples is provided in Supplementary Note 2 . Human raters were also asked to annotate the quality of the needle handling or needle driving activity (0 for low skill and 1 for high skill). For needle handling, a high-skill assessment is based on the number of times the surgeon must reposition their grip on the needle in preparation for driving it through the tissue (the fewer the better). For needle driving, a high-skill assessment is based on the smoothness of the needle's trajectory and the number of adjustments required to drive it through the tissue (the smoother the trajectory and the fewer the adjustments, the better). Since each video sample was assigned to multiple raters, it had multiple skill assessment labels. In the event of disagreement between annotations, we considered the lowest (worst) score. Our motivation for doing so was based on the assumption that if a human rater penalized the quality of the surgeon's activity, then it must have been due to one of the objective criteria outlined in the scoring system, and that the activity was thus suboptimal. We, in turn, wanted to capture and encode this suboptimal behaviour.

Motivation behind evaluating SAIS with Monte Carlo cross-validation In all experiments, we trained SAIS on a training set of video samples and evaluated it using ten-fold Monte Carlo cross-validation, where each fold's test set consisted of subphases from videos unseen during training. Such an approach contributes to our goal of rigorous evaluation by allowing us to evaluate the ability of SAIS to generalize to unseen videos (hereon referred to as across videos). This setup is also more challenging and representative of real-world deployment than one in which an AI system generalizes to unseen samples within the same video. As such, we adopted this evaluation setup for all experiments outlined in this study, unless otherwise noted. A detailed breakdown of the number of video samples used for training, validation and testing can be found in Supplementary Note 1 .

Data splits For all the experiments conducted, unless otherwise noted, we split the data at the case video level into a training set (90%) and a test set (10%).
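A minimal sketch of such a video-level (grouped) split, which also carves out the validation subset described just below; the case identifiers and exact rounding behaviour are hypothetical stand-ins rather than the authors' procedure:

```python
# Group samples by case video so that no video's samples leak across the
# train/validation/test sets (90/10 train/test; 10% of training videos held
# out for validation). Illustrative only.
import random

def split_by_video(sample_video_ids, seed=0):
    videos = sorted(set(sample_video_ids))
    random.Random(seed).shuffle(videos)
    n_test = max(1, round(0.10 * len(videos)))
    test = set(videos[:n_test])
    remaining = videos[n_test:]
    n_val = max(1, round(0.10 * len(remaining)))
    val, train = set(remaining[:n_val]), set(remaining[n_val:])
    splits = {"train": [], "val": [], "test": []}
    for i, vid in enumerate(sample_video_ids):
        splits["train" if vid in train else "val" if vid in val else "test"].append(i)
    return splits

# Example: 500 video samples drawn from 40 case videos (made-up identifiers).
rng = random.Random(1)
ids = [f"case_{rng.randint(0, 39):02d}" for _ in range(500)]
print({k: len(v) for k, v in split_by_video(ids).items()})
```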
We used 10% of the videos in the training set to form a validation set with which we performed hyperparameter tuning. By splitting at the video level, whereby data from the same video do not appear across the sets, we are rigorously evaluating whether the model generalizes across unseen videos. Note that, while it is possible for data from the same surgeon to appear in both the training and test sets, we also experiment with even more rigorous setups: across hospitals—where videos are from entirely different hospitals and surgeons—and across surgical procedures—where videos are from entirely different surgical procedures (such as nephrectomy versus prostatectomy). While there are various ways to rigorously evaluate SAIS, we do believe that demonstrating its generalizability across surgeons, hospitals and surgical procedures, as we have done, is a step in the right direction. We report the performance of models as an average, with a standard deviation, across the folds. Leveraging both RGB frames and optical flow To capture both visual and motion cues in surgical videos, SAIS operated on two distinct modalities: live surgical videos in the form of RGB frames and the corresponding optical flow of such frames. Surgical videos can be recorded at various sampling rates, which have the units of frames per second (fps). Knowledge of the sampling rate alongside the natural rate with which activity occurs in a surgical setting is essential to multiple decisions. These can range from the number of frames to present to a deep learning network, and the appropriate rate with which to downsample videos, to the temporal step size used to derive optical flow maps, as outlined next. Including too many frames where there is very little change in the visual scene leads to a computational burden and may result in over-fitting due to the inclusion of highly similar frames (low visual diversity). On the other hand, including too few frames might result in missing visual information pertinent to the task at hand. Similarly, deriving reasonable optical flow maps, which is a function of a pair of images which are temporally spaced, is contingent upon the time that has lapsed between such images. Too short of a timespan could result in minimal motion in the visual scene, thus resulting in uninformative optical flow maps. Analogously, too long of a timespan could mean missing out on informative intermediate motion in the visual scene. We refer to these decisions as hyperparameters (see Implementation details and hyperparameters section in Methods ). Throughout this paper, we derived optical flow maps by deploying a RAFT model 48 , which we found to provide reasonable maps. SAIS is a model for decoding activity from surgical videos Our AI system—SAIS—is vision based and unified (Fig. 5 ). It is vision based as it operates exclusively on surgical videos routinely collected as part of robotic surgical procedures. It is unified as the same architecture, without any modifications, can be used to decode multiple elements of intraoperative surgical activity (Fig. 1b ). We outline the benefits of such a system in Discussion . Single forward pass through SAIS Extracting spatial features We extract a sequence of D -dimensional representations, \(\left\{ {v_t \in {\Bbb R}^D} \right\}_{t = 1}^T\) , from T temporally ordered frames via a (frozen) vision transformer (ViT) pre-trained on the ImageNet dataset in a self-supervised manner 49 . 
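As an illustration, this frozen feature-extraction step might look as follows, assuming the publicly released DINO ViT-S/16 checkpoint (which outputs D = 384-dimensional embeddings) available through torch.hub:

```python
# Minimal sketch: extract one embedding per frame with a frozen,
# self-supervised (DINO) ViT-S/16; assumes the public checkpoint on torch.hub.
import torch

vit = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
vit.eval()
for param in vit.parameters():
    param.requires_grad = False  # frozen: the extractor is never fine-tuned

@torch.no_grad()
def extract_frame_features(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, 224, 224) normalized RGB frames -> (T, 384) features."""
    return vit(frames)
```

In practice, such representations would be computed once, offline, and stored, as described in the implementation details below.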
In short, this pre-training setup, known as DINO, involved optimizing a contrastive objective function whereby representations of the same image, augmented in different ways (such as random cropping), are encouraged to be similar to one another. For more details, please refer to the original paper 50 . ViTs convert each input frame into a set of square image patches of dimension H × H and introduce a self-attention mechanism that attempts to capture the relationship between image patches (that is, spatial information). We found that this spatial attention picks up on instrument tips, needles and anatomical edges (Fig. 6). We chose this feature extractor on the basis of (a) recent evidence favouring self-supervised pre-trained models relative to their supervised counterparts and (b) the desire to reduce the computational burden associated with training a feature extractor in an end-to-end manner. Fig. 6: The ViT feature extractor places the highest importance on instrument tips, needles and anatomical edges. We present two sample RGB video frames of the needle handling activity and the corresponding spatial attention placed by the ViT on patches of these frames. Extracting temporal features We append a learnable D-dimensional classification embedding, \(e_{\mathrm{cls}} \in \mathbb{R}^D\), to the beginning of the sequence of frame representations, \(\{v_t\}_{t=1}^T\). To capture the temporal ordering of the video frames, we add D-dimensional temporal positional embeddings, \(\{e_t \in \mathbb{R}^D\}_{t=1}^T\), to the sequence of frame representations before inputting the sequence into four Transformer encoder layers. Such an encoder has a self-attention mechanism whereby each frame attends to every other frame in the sequence. As such, both short- and long-range dependencies between frames are captured. We summarize each modality-specific video with the representation, \(h_{\mathrm{cls}} \in \mathbb{R}^D\), of the classification embedding, e cls, at the final layer of the Transformer encoder, as is typically done. This process is repeated for the optical flow modality stream. Aggregating modality-specific features The two modality-specific video representations, h RGB and h Flow, are aggregated as follows: $$h_{\mathrm{agg}} = h_{\mathrm{RGB}} + h_{\mathrm{Flow}}$$ (1) The aggregated representation, h agg, is passed through two projection heads, in the form of linear layers with a non-linear activation function (ReLU), to obtain an E-dimensional video representation, \(h_{\mathrm{Video}} \in \mathbb{R}^E\). Training protocol for SAIS To achieve the task of interest, the video-specific representation, h Video, undergoes a series of attractions and repulsions with learnable embeddings, which we refer to as prototypes. Each prototype, p, reflects a single category of interest and is of the same dimensionality as h Video. The representation, \(h_{\mathrm{Video}} \in \mathbb{R}^E\), of a video from a particular category, c, is attracted to the single prototype, \(p_c \in \mathbb{R}^E\), associated with the same category, and repelled from all other prototypes, \(\{p_j\}_{j=1}^C, j \ne c\), where C is the total number of categories.
We achieve this by leveraging contrastive learning and minimizing the InfoNCE loss, \(\mathcal{L}_{\mathrm{NCE}}\): $$\mathcal{L}_{\mathrm{NCE}} = - \sum\limits_{i = 1}^{B} \log \frac{e^{s\left(h_{\mathrm{Video}}^{(i)},\,p_c\right)}}{\sum\nolimits_j e^{s\left(h_{\mathrm{Video}}^{(i)},\,p_j\right)}}, \qquad s\left(h_{\mathrm{Video}}, p_j\right) = \frac{h_{\mathrm{Video}} \cdot p_j}{\lVert h_{\mathrm{Video}} \rVert\,\lVert p_j \rVert}$$ (2) where B is the number of video samples in the mini-batch and the superscript (i) indexes those samples. During training, we share the parameters of the Transformer encoder across modalities to avoid over-fitting. As such, we learn, in an end-to-end manner, the parameters of the Transformer encoder, the classification token embedding, the temporal positional embeddings, the parameters of the projection head and the category-specific prototypes. Evaluation protocol for SAIS To classify a video sample into one of the categories, we calculate the similarity (that is, cosine similarity) between the video representation, h Video, and each of the prototypes, \(\{p_j\}_{j=1}^C\). We apply the softmax function to these similarity values in order to obtain a probability mass function over the categories. By identifying the category with the highest probability mass (argmax), we can make a classification. The video representation, h Video, can be dependent on the choice of frames (both RGB and optical flow) which are initially input into the model. Therefore, to account for this dependence and avoid missing potentially informative frames during inference, we deploy what is known as test-time augmentation (TTA). This involves augmenting the same input multiple times during inference, which, in turn, outputs multiple probability mass functions. We can then average these probability mass functions, analogous to an ensemble model, to make a single classification. In our context, we used three test-time inputs: the original set of frames at a fixed sampling rate, and those perturbed by offsetting the start frame by K frames at the same sampling rate. Doing so ensures that there is minimal frame overlap across the augmented inputs, thus capturing different information, while continuing to span the most relevant aspects of the video. Implementation details and hyperparameters During training and inference, we use the start time and end time of each video sample to guide the selection of video frames from that sample. For gesture classification, we select ten equally spaced frames from the video sample. For example, for a video sample recorded at 30 Hz that is 3 s long, from the original 30 × 3 = 90 frames we would retrieve only frames ∈ [0, 9, 18, …]. In contrast, for subphase recognition and skill assessment, we select every tenth frame. For example, for the same video sample above, we would retrieve only frames ∈ [0, 10, 20, …]. We found that these strategies resulted in a good trade-off between computational complexity and capturing sufficiently informative signals in the video to complete the task. Similarly, optical flow maps were based on pairs of images that were 0.5 s apart. Shorter timespans resulted in frames that exhibited minimal motion and thus uninformative flow maps. During training, to ensure that the RGB and optical flow maps were associated with the same timespan, we retrieved maps that overlapped in time with the RGB frames.
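Returning to the training objective, a minimal PyTorch sketch of the prototype-based classification in equation (2) follows; shapes and names are illustrative, and, as in equation (2), no temperature scaling is applied:

```python
# Minimal PyTorch sketch of the prototype-based InfoNCE objective of
# equation (2): cosine similarities between a video representation and all
# category prototypes serve as logits, so the loss reduces to cross-entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, num_categories: int, dim: int = 256):
        super().__init__()
        # one learnable prototype per category, same dimension E as h_Video
        self.prototypes = nn.Parameter(torch.randn(num_categories, dim))

    def forward(self, h_video: torch.Tensor) -> torch.Tensor:
        h = F.normalize(h_video, dim=-1)          # (B, E)
        p = F.normalize(self.prototypes, dim=-1)  # (C, E)
        return h @ p.t()                          # cosine-similarity logits (B, C)

head = PrototypeHead(num_categories=3)         # e.g. three suturing subphases
h_video = torch.randn(8, 256)                  # a mini-batch of video features
labels = torch.randint(0, 3, (8,))
loss = F.cross_entropy(head(h_video), labels)  # equation (2), no temperature
```

With one prototype per category, the InfoNCE loss reduces to softmax cross-entropy over cosine-similarity logits, which is also what the softmax-plus-argmax evaluation protocol above operates on.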
During inference, and for TTA, we offset both RGB and optical flow frames by K = 3 and K = 6 frames. We conducted our experiments in PyTorch 51 using a V100 GPU on a DGX machine. Each RGB frame and optical flow map was resized to 224 × 224 (from 960 × 540 at USC and SAH, and 1,920 × 1,080 at HMH) before being input into the ViT feature extractor. The ViT feature extractor pre-processed each frame into a set of square patches of dimension H = 16 and generated a frame representation of dimension D = 384. All video representations and prototypes are of dimension E = 256. In practice, we froze the parameters of the ViT, extracted all such representations offline (that is, before training), and stored them as h5py files. We followed the same strategy for extracting representations of optical flow maps. This substantially reduced the typical bottleneck associated with loading videos and streamlined our training and inference process. This also facilitates inference performed on future videos. Once a new video is recorded, its features can immediately be extracted in an offline manner and stored for future use. Unless otherwise stated, we trained SAIS using a mini-batch size of eight video samples and a learning rate of 1 × 10−1, and optimized its parameters via stochastic gradient descent. Mini-batch samples are often required to have the same dimensionality (B × T × D), where B is the batch size, T is the number of frames and D is the dimension of the stored representation. Therefore, when we encountered video samples in the same mini-batch with a different number of temporal frames (such as T = 10 versus T = 11), we first appended placeholder representations (tensors filled with zeros) to the end of the shorter video samples. This ensured that all video samples in the mini-batch had the same dimension. To avoid incorporating these padded representations into downstream processing, we used a masking matrix (a matrix with binary entries) indicating which representations the attention mechanism should attend to. Importantly, padded representations are not attended to during a forward pass through SAIS. Description of ablation study We trained several variants of SAIS to pinpoint the contribution of each of its components to overall performance. Specifically, the variants were SAIS itself (baseline), SAIS evaluated without test-time augmentation ('without TTA'), and SAIS exposed to only optical flow ('without RGB') or only RGB frames ('without flow') as inputs. We also removed the self-attention mechanism which captured the relationship between, and temporal ordering of, frames ('without SA'); in this setting, we simply averaged the frame features. Although we present the PPV in Results, we arrived at similar findings when using other evaluation metrics. Implementation details of inference on entire videos After we trained and evaluated a model on video samples (on the order of 10–30 s), we deployed it on entire videos (on the order of 10–30 min) to decode an element of surgical activity without human supervision. We refer to this process as inference. As we outline next, a suitable implementation of inference is often dependent on the element of surgical activity being decoded. Suturing subphase recognition Video samples used for training and evaluating SAIS to decode the three suturing subphases of needle handling, needle driving and needle withdrawal spanned, on average, 10–30 s (Supplementary Note 2). This guided our design choices for inference.
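Before turning to inference on entire videos, the zero-padding and masking scheme used during training (described above) can be sketched as a hypothetical collate function:

```python
# Hypothetical collate function implementing the zero-padding and binary
# masking described above: shorter frame sequences are padded to the longest
# sequence in the mini-batch, and the mask marks which positions are real.
import torch

def collate(batch):
    """batch: list of (T_i, D) tensors of stored frame representations."""
    dim = batch[0].shape[-1]
    t_max = max(x.shape[0] for x in batch)
    feats = torch.zeros(len(batch), t_max, dim)
    mask = torch.zeros(len(batch), t_max, dtype=torch.bool)
    for i, x in enumerate(batch):
        feats[i, : x.shape[0]] = x
        mask[i, : x.shape[0]] = True  # True = real frame, False = padding
    return feats, mask

feats, mask = collate([torch.randn(10, 384), torch.randn(11, 384)])
# A Transformer encoder (with batch_first=True) can then ignore the padded
# positions, for example via encoder(feats, src_key_padding_mask=~mask).
```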
Curating video samples for inference During inference, we adopted two complementary approaches, as outlined next. Approach 1: we presented SAIS with 10-s video samples from an entire VUA video with 5-s overlaps between subsequent video samples, the latter ensuring that we capture boundary activity. As such, each 10-s video sample was associated with a single probabilistic output, \(\{s_{\mathrm{NH}}, s_{\mathrm{ND}}, s_{\mathrm{NW}}\}\), reflecting the probability, s, of needle handling (NH), needle driving (ND) and needle withdrawal (NW). Approach 2: we presented SAIS with 5-s non-overlapping video samples from the same video. The motivation for choosing a shorter video sample is to capture a brief subphase that would otherwise have bled into another subphase when using a longer video sample. As such, each 5-s video sample was associated with a single probabilistic output. Note that we followed the same approach for selecting frames from each video sample as we did during the original training and evaluation setup (see Implementation details and hyperparameters). As an example of these approaches, the first video sample presented to SAIS in approach 1 spans 0–10 s, whereas the first two video samples presented to SAIS in approach 2 span 0–5 s and 5–10 s, respectively. When considering both approaches, the timespan 0–10 s is thus associated with three unique probabilistic outputs (as is every other 10-s timespan). Using ensemble models Recall that we trained SAIS using ten-fold Monte Carlo cross-validation, resulting in ten unique models. To increase our confidence in the inference process, we performed inference following the two aforementioned approaches with each of the ten models. As such, each 10-s timespan was associated with 3 probabilistic outputs (P) × 10 folds (F) × 3 TTAs = 90 probabilistic outputs in total. As is done with ensemble models, we then averaged these probabilistic outputs (that is, bagging) to obtain a single probabilistic output, \(\{\bar{s}_{\mathrm{NH}}, \bar{s}_{\mathrm{ND}}, \bar{s}_{\mathrm{NW}}\}\), where the jth probability value for j ∈ [1, C] (C categories) is obtained as follows: $$\bar{s}_j = \frac{1}{90}\sum\limits_{P = 1}^{3} \sum\limits_{F = 1}^{10} \sum\limits_{\mathrm{TTA} = 1}^{3} s_j^{\mathrm{TTA},F,P} \quad \forall j \in [1, C]$$ (3) Abstaining from prediction Besides often outperforming their single-model counterparts, ensemble models can also provide an estimate of the uncertainty about a classification. Such uncertainty quantification can be useful for identifying out-of-distribution video samples 52, such as those the model has never seen before, or for highlighting video samples where the classification is ambiguous and thus potentially inaccurate. To quantify uncertainty, we took inspiration from recent work 53 and calculated the entropy, S, of the resultant probabilistic output post bagging: $$S = - \sum\limits_{j = 1}^{C} \bar{s}_j \log \bar{s}_j$$ (4) With high entropy implying high uncertainty, we can choose to abstain from considering classifications whose entropy exceeds some threshold, that is, whenever S > S thresh. Aggregating predictions over time Once we had filtered out the predictions that are uncertain (that is, exhibit high entropy), we were left with individual predictions for each subphase spanning at most 10 s (because of how we earlier identified video samples).
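A minimal sketch of this bagging-and-abstention logic (equations (3) and (4)), with a task-specific entropy threshold:

```python
# Minimal sketch of the bagging and entropy-based abstention of equations (3)
# and (4); prob_outputs holds the N = P x F x TTA = 90 softmax outputs
# collected for a single timespan, and s_thresh is a task-specific threshold.
import numpy as np

def bag_and_filter(prob_outputs: np.ndarray, s_thresh: float):
    """prob_outputs: (N, C) array; returns (category, entropy) or None."""
    s_bar = prob_outputs.mean(axis=0)                 # equation (3)
    entropy = -np.sum(s_bar * np.log(s_bar + 1e-12))  # equation (4)
    if entropy > s_thresh:
        return None                                   # abstain from predicting
    return int(np.argmax(s_bar)), float(entropy)
```

Each prediction that survives this filter still spans a window of at most 10 s.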
However, we know from observation that certain subphases can be longer than 10 s (Supplementary Note 2 ). To account for this, we aggregated subphase predictions that were close to one another over time. Specifically, we aggregated multiple predictions of the same subphase into a single prediction if they were less than 3 s apart, in effect chaining the predictions. Although this value is likely to be dependent on other choices in the inference process, we found it to produce reasonable results. NS dissection gesture classification Video samples used for training and evaluating SAIS to decode the six dissection gestures spanned, on average, 1–5 s. This also guided our design choices for inference. Identifying video samples for inference During inference, we found it sufficient to adopt only one of the two approaches for inference described earlier (inference for subphase recognition). Specifically, we presented SAIS with 1-s non-overlapping video samples of an entire NS video. As such, each 1-s video sample was associated with a single probabilistic output, \(\{ s_j\} _{j = 1}^6\) reflecting the probability, s , of each of the six gestures. Using ensemble models As with inference for suturing subphase recognition, we deployed the ten SAIS models (from the ten Monte Carlo folds) and three TTAs on the same video samples. As such, each 1-s video sample was associated with 10 × 3 = 30 probabilistic outputs. These are then averaged to obtain a single probabilistic output, \(\{ \bar s_j\} _{j = 1}^6\) . Abstaining from prediction We also leveraged the entropy of gesture classifications as a way to quantify uncertainty and thus abstain from making highly uncertain gesture classifications. We found that S thresh = 1.74 led to reasonable results. Aggregating predictions over time To account for the observation that gestures can span multiple seconds, we aggregated individual 1-s predictions that were close to one another over time. Specifically, we aggregated multiple predictions of the same gesture into a single prediction if they were less than 2 s apart. For example, if a retraction gesture (r) is predicted at intervals 10–11 s, 11–12 s and 15–16 s, we treated this as two distinct retraction gestures. The first one spans 2 s (10–12 s) while the second one spans 1 s (15–16 s). This avoids us tagging spurious and incomplete gestures (for example, the beginning or end of a gesture) as an entirely distinct gesture over time. Our 2-s interval introduced some tolerance for a potential misclassification between gestures of the same type and allowed for the temporal continuity of the gestures. Implementation details of training SAIS on external video datasets We trained SAIS on two publicly available datasets: JIGSAWS 11 and DVC UCL 12 . In short, these datasets contain video samples of individuals performing suturing gestures either in a controlled laboratory setting or during the dorsal vascular complex step of the RARP surgical procedure. For further details on these datasets, we refer readers to the original respective publications. JIGSAWS dataset We followed the commonly adopted leave-one-user-out cross-validation setup 11 . This involves training on video samples from all but one user and evaluating on those from the remaining user. These details can be found in a recent review 9 . 
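A minimal sketch of this leave-one-user-out setup (the sample bookkeeping is hypothetical):

```python
# Minimal sketch of the leave-one-user-out setup: each fold trains on the
# video samples of all but one user and tests on the held-out user's samples.
def leave_one_user_out(samples_by_user: dict):
    """samples_by_user: maps a user ID to that user's list of video samples."""
    for held_out in samples_by_user:
        test = list(samples_by_user[held_out])
        train = [s for user, samples in samples_by_user.items()
                 if user != held_out for s in samples]
        yield held_out, train, test
```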
DVC UCL dataset This dataset, recently released as part of the Endoscopic Vision Challenge 2022 at MICCAI, consists of 45 videos from a total of eight surgeons performing suturing gestures during the dorsal vascular complex step of the RARP surgical procedure 12. The publicly available dataset, at the time of writing, is composed of 36 such videos (Table 1). Similar to the private datasets we used, each video (on the order of 2–3 min) is annotated with a sequence of eight unique suturing gestures alongside their start time and end time. Note that these annotations do not follow the taxonomy we have developed and are therefore distinct from those we outlined in the Surgical video samples and annotations section. The sole previous method to evaluate on this dataset does so on a private test set. As this test set is not publicly available, we adopted a leave-one-video-out setup and reported the ten-fold cross-validation performance of SAIS (see Supplementary Table 3 for the number of video samples in each fold). Such a setup provides insight into how well SAIS can generalize to unseen videos. Furthermore, in light of the few samples from one of the gesture categories (G5), we distinguished between only seven of the gestures. To facilitate the reproducibility of our findings, we will release the exact data splits used for training and testing. Implementation details of I3D experiments We trained the I3D model to decode the binary skill level of needle handling and needle driving on the basis of video samples of the VUA step. For a fair comparison, we presented the I3D model with exactly the same data otherwise presented to SAIS (our model). In training the I3D model, we followed the core strategy proposed in ref. 6. For example, we loaded the parameters pre-trained on the Kinetics dataset and froze all but the last three layers (referred to as Mixed_5b, Mixed_5c and logits). However, having observed that the I3D model was quite sensitive to the choice of hyperparameters, we found it necessary to conduct an extensive number of experiments to identify the optimal setup and hyperparameters for decoding surgical skill, the details of which are outlined next. First, we kept the logits layer as is, resulting in a 400-dimensional representation, and followed it with a non-linear classification head to output the probability of, for example, a high-skill activity. We also leveraged both data modalities (RGB and flow), which we found to improve upon the original implementation that had used only a single modality. Specifically, we added the two 400-dimensional representations (one for each modality) to one another and passed the resultant representation through the aforementioned classification head. With the pre-trained I3D expecting an input of 16 frames or multiples thereof, we provided it with a video sample composed of 16 equally spaced frames between the start time and end time of that sample. While we also experimented with a different number of frames, we found this to yield suboptimal results. To train I3D, we used a batch size of 16 video samples and a learning rate of 1 × 10−3. Association between the skill assessments of SAIS and patient outcomes To determine whether the skill assessments of SAIS are associated with patient outcomes, we conducted an experiment with two variants. We first deployed SAIS on the test set of video samples in each fold of the Monte Carlo cross-validation setup.
This resulted in an output, Z 1 ∈ [0, 1], for each video sample reflecting the probability of a high-skill assessment. In the first variant of this experiment, we assigned each video sample, linked to a surgical case, a urinary continence recovery (3 months after surgery) outcome, Y . To account for the fact that a single outcome, Y , is linked to an entire surgical case, in the second variant of this experiment, we averaged the outputs, Z , for all video samples within the same surgical case. This, naturally, reduced the total number of samples available. In both experiments, we controlled for the total number of robotic surgeries performed by the surgeon (caseload, Z 2 ) and the age of the patient being operated on ( Z 3 ), and regressed the probabilistic outputs of SAIS to the urinary continence recovery outcome using a logistic regression model (SPSS), as shown below ( σ is the sigmoid function). After training this model, we extracted the coefficient, b 1 , and report the odds ratio (OR) and the 95% confidence interval (CI). $$\begin{array}{l}Y = \sigma \left( {b_0 + b_1Z_1 + b_2Z_2 + b_3Z_3} \right)\\ {{{\mathrm{OR}}}} = \frac{{{\mathrm{odds}}_{{{{\mathrm{high}}}}-{{{\mathrm{skill}}}}}}}{{{\mathrm{odds}}_{{{{\mathrm{low}}}}-{{{\mathrm{skil}}}}l}}} = e^{b_1}\end{array}$$ (5) Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Data supporting the results in this study involve surgeon and patient data. As such, while the data from SAH and HMH are not publicly available, de-identified data from USC can be made available upon reasonable request from the authors. Code availability Code is made available at . | When surgeons are trained, they usually need the supervision of more experienced doctors who can mentor them on their technique. That may be changing due to a new artificial intelligence system developed by Caltech researchers and Keck Medicine of USC urologists that aims to provide valuable feedback to surgeons on the quality of their work. The goal of the new Surgical AI System (SAIS) is to provide surgeons with objective performance evaluations that can improve their work and, by extension, the outcomes of their patients. When provided with a video of a surgical procedure, SAIS can identify what type of surgery is being performed and the quality with which it was executed by a surgeon. The system was introduced through a series of articles in the journals Nature Biomedical Engineering, npj Digital Medicine, and Communications Medicine, which were published concurrently at the end of March 2023. "In high stakes environments such as robotic surgery, it is not realistic for AI to replace human surgeons in the short term," says Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences and senior author of the studies. "Instead, we asked how AI can safely improve surgical outcomes for the patients, and hence, our focus on making human surgeons better and more effective through AI." SAIS was trained using a large volume of video data that was annotated by medical professionals. Surgeons' performances were assessed down to the level of individual discrete motions, i.e., holding a needle, driving it through tissue, and withdrawing it from tissue. After training, SAIS was tasked with reviewing and evaluating surgeons' performance during a wide range of procedures using video from a variety of hospitals. 
"SAIS has the potential to provide surgeon feedback that is accurate, consistent, and scalable," says Dani Kiyasseh, lead author of the studies, a former postdoctoral researcher at Caltech and now a senior AI engineer at Vicarious Surgical. The hope, according to the researchers, is for SAIS to provide surgeons with guidance on what skill sets need to be improved. To make the tool more valuable for surgeons, the team developed the AI's ability to justify its skill assessments. The AI can now inform surgeons about their level of skill and provide detailed feedback on its rationale for making that assessment by pointing to specific video clips. "We were able to show that such AI-based explanations often align with explanations that surgeons would have otherwise provided," Kiyasseh says. "Reliable AI-based explanations can pave the way for providing feedback when peer surgeons are not immediately available." Early on, researchers testing SAIS noted that an unintended bias crept into the system in which the AI sometimes rated surgeons as more or less skilled than their experience would otherwise indicate based solely on an analysis of their overall movements. To address this issue, the researchers guided the AI system to focus exclusively on pertinent aspects of the surgical video. Narrowing the focus mitigated, though did not eliminate, the bias, which the researchers are continuing to address. "Human-derived surgical feedback is not presently objective nor scalable," says Andrew Hung, a urologist with Keck Medicine of USC and associate professor of urology at Keck School of Medicine of USC. "AI-derived feedback, such as what our system delivers, presents a major opportunity to provide surgeons actionable feedback." | 10.1038/s41551-023-01010-8 |
Biology | New whale species discovered along the coast of Hokkaido | Tadasu K. Yamada et al. Description of a new species of beaked whale (Berardius) found in the North Pacific, Scientific Reports (2019). DOI: 10.1038/s41598-019-46703-w | http://dx.doi.org/10.1038/s41598-019-46703-w | https://phys.org/news/2019-09-whale-species-coast-hokkaido.html | Abstract Two types of Berardius are recognised by local whalers in Hokkaido, Japan. The first is the ordinary Baird’s beaked whale, B . bairdii , whereas the other is much smaller and entirely black. Previous molecular phylogenetic analyses revealed that the black type is one recognisable taxonomic unit within the Berardius clade but is distinct from the two known Berardius species. To determine the characteristics of the black type, we summarised external morphology and skull osteometric data obtained from four individuals, which included three individuals from Hokkaido and one additional individual from the United States National Museum of Natural History collection. The whales differed from all of their congeners by having the following unique characters: a substantially smaller body size of physically mature individuals, proportionately shorter beak, and darker body colour. Thus, we conclude that the whales are a third Berardius species. Introduction Beaked whales (Family Ziphiidae, Odontoceti, Cetacea) include the second largest number of species among toothed whale families. Their preference for deep ocean waters, elusive habits, and long dive capacity 1 make beaked whales hard to see and inadequately understood. A total of 22 species are currently recognized in six genera ( Berardius , Hyperoodon , Indopacetus , Mesoplodon , Tasmacetus , and Ziphius ) 2 . The genus Berardius has two species, Baird’s beaked whale Berardius bairdii , found in the North Pacific and adjacent waters, and Arnoux’s beaked whale B . arnuxii , found in the Southern Ocean 3 . Besides the two nominal species, however, whalers’ observations off Hokkaido, northern Japan, have alluded to the occurrence of two groups of Berardius , one being slate-gray form and the other, the black form, which are smaller in body size 4 , 5 . Today, slate-gray form is common around Japan, which are traditionally considered as B . bairdii , but black form is rare, and no detailed morphological examinations have been conducted so far. Recent molecular phylogenetic analyses strongly suggest the black and the slate-gray forms in the North Pacific as genetically separate stocks of Berardius 6 , 7 , awaiting further work with sufficient morphological data to verify the differences between the two types of Berardius . Here, we examined black type beaked whale external morphology and skull osteometric data obtained from four specimens including three from Hokkaido and one from the United States National Museum of Natural History (USNM) collection, to highlight the morphological characteristics of the black form after comparison with those of their congeners, B . bairdii and B . arnuxii . The observed unique external characters and skull osteomorphology, coupled with updated molecular phylogeny of Berardius , distinguish the black form as a third Berardius species previously unknown in cetacean taxonomy. Genus Berardius Before discussing the above-mentioned subject, it would be useful to summarise what is known about the genus Berardius . Berardius was established by Duvernoy in 1851 8 , who described B . arnuxii based on a specimen collected in New Zealand. 
The skull and mandibles of this individual are preserved in the Muséum National d'Histoire Naturelle (MNHN) in Paris. Stejneger 9 described a similar species of this genus, B. bairdii Stejneger (USNM 20992), as a northern counterpart in 1883; this description was published just a few months earlier than Malm's 10 description of B. vegae, which was later defined as a junior synonym of B. bairdii 11. Both specimens were collected from Bering Island. The B. bairdii holotype includes a skull and mandibles, and the B. vegae holotype consisted of broken skull pieces. B. arnuxii and B. bairdii could be good examples of antitropical distribution 12. As summarised by Kasuya 5, 13, there have been extensive debates on the identities of these two species, because they are very similar except for body size and distribution. B. arnuxii is slightly smaller than B. bairdii. True 11 pointed out several characters that are distinct between these two species. However, as the number of specimens increased, most of the characters lost systematic significance, and their validity was disputed 14, 15. Dalebout et al. 16 put an end to this discussion and showed that the two species are genetically distinct and independent. However, morphological discrimination of these two species is not currently well established, and we have to rely on molecular results or distribution to discriminate between them. Ross 17 noted that more thorough morphological investigations are needed to distinguish B. bairdii and B. arnuxii. Berardius skulls are the least asymmetrical and sexually dimorphic among genera of the family Ziphiidae; only the body length of females is slightly larger than that of males. The beak is straight and long. Unlike most other ziphiids, they have two pairs of teeth in the lower jaw. The blowhole slit is unique, with a posteriorly opened arch that is unlike those of all other odontocete groups (e.g. Kasuya 18). Although the nasals are large, they do not overhang the superior nares. History of Berardius in Japan In 1910, True 11 summarised the ziphiid specimens that were preserved and stated "Berardius is the rarest genus, only about fourteen specimens having been collected thus far". Also in 1910, Andrews visited the Imperial Museum at Tokyo, which is now called the National Museum of Nature and Science (NMNS), to find a B. bairdii skeleton 19; this was when the existence of Berardius in Japan became known to science and, on this historical occasion, B. bairdii was confirmed to correspond to the "tsuchi-kujira" 20 of Japan. When considering the recognition of B. bairdii in Japan, however, the Japanese name tsuchi-kujira had been used since the early 18th century, and whaling activities had been aimed at this species since then 21, 22, 23, 24. Proper comparison and recognition of this species using the Western (or Linnaean) systematic scheme took some time after the introduction of modern science from the West, which began in 1868 after the Meiji Restoration. Researchers such as Okada 25 incorrectly identified tsuchi-kujira as Hyperoodon rostratus, and this notion was generally accepted in most publications. In 1910, Andrews examined the specimens of tsuchi-kujira (then recognised as H. rostratus) that were exhibited in the Imperial Museum in Tokyo, and identified them as B. bairdii 19. He surveyed the locality of this B. bairdii specimen and collected a whole skeleton of this species in Chiba.
This event was reported by Nagasawa 20 to the Zoological Society of Japan and confirmed the existence of B. bairdii in Japanese waters. Results The following description was prepared by Tadasu K. Yamada, Shino Kitamura and Takashi F. Matsuishi. Systematics Order CETARTIODACTYLA Montgelard, Catzeflis and Douzery, 1997 26. Infraorder CETACEA Brisson, 1762 27 Parvorder ODONTOCETI Flower, 1864 28 Family ZIPHIIDAE Gray, 1865 29 Genus BERARDIUS Duvernoy, 1851 8 Berardius minimus sp. nov. (New Japanese name: Kurotsuchikujira) Etymology The specific name reflects the smallest body size of physically mature individuals of this species compared with the other Berardius species. Historically, whalers in Hokkaido recognised this species as different from B. bairdii and called it "kuro-tsuchi", which means black Baird's beaked whale; however, the colour difference mainly depends on the scar density and is not biologically fundamental (Figs 1 and 2). We therefore chose the most basic difference, the significantly smaller body size, the smallest among the congeners, to be reflected in the scientific name. Figure 1 Unidentified beaked whale incidentally caught in Shibetsu, Hokkaido (photo taken by Minako Kurasawa on 20 July 2004, courtesy of Hal Sato). Figure 2 Unidentified beaked whales sighted in Nemuro strait. Note the short beak, dark body colour, and sparse linear scars (photo taken by Hal Sato on 21 May 2009). Holotype Adult male (NSMT-M35131) skull, mandible, and most of the postcranial skeleton at the National Museum of Nature and Science (NMNS). In addition, tissue samples are also preserved at the NMNS. This specimen, a fairly well decomposed stranded carcass, was found on 4 June 2008 (Fig. 3A–C). Upon receiving notice, SNH took action, and Prof. Mari Kobayashi of Tokyo University of Agriculture and her students examined the carcass on-site. The carcass was then buried at a nearby site. The whole skeleton was excavated and recovered on 26 and 27 August 2009 by one of us (SNH), Tokyo University of Agriculture, the Institute of Cetacean Research, and NMNS. Figure 3 Severely decomposed beaked whale stranded in Kitami, Hokkaido on 4 June 2008. (A) The relatively shorter beak indicates it is not B. bairdii (photo taken by Mari Kobayashi), (B) although the blow hole shape indicates it belongs to Berardius (photo taken by Mari Kobayashi). (C) The general body shape is that of a typical ziphiid species. When compared with adult B. bairdii, this specimen is more spindle-shaped. Type Locality Tokoro Town (44°07′14.5N, 144°06′29.6E), Kitami City, Hokkaido, Japan, southern Okhotsk Sea, North Pacific. Nomenclatural statement A Life Science Identifier (LSID) was obtained for the new species (B. minimus): urn:lsid:zoobank.org:act:C8D63A76-B1A3-4C67-8440-AFCE08BE32E9, and for this publication: urn:lsid:zoobank.org:pub:52AD3A26-4AE6-42BA-B001-B161B73E5322. Diagnosis Berardius minimus differs from all of its congeners by having the following unique characters: remarkably smaller body size of physically mature individuals, proportionately shorter beak, and darker body colour with noticeable cookie-cutter shark bites. External characters External appearance is mostly known from a male individual found stranded on 10 November 2012 in Sarufutsu, Hokkaido (Fig. 4). Most of the external characters of B.
minimus are typical of medium- to large-sized ziphiids, with several discriminating characters, such as the narrow, straight, and longer beak; reverse V-shaped throat grooves; relatively smaller flippers (flipper length is 11.4% of body length on average; range, 7.7–13.4%); small dorsal fin (dorsal fin height is 3.7% of body length on average; range, 3.4–3.9%) located 70% of body length (on average; range, 66.7–71.8%); and tail flukes that lack the median notch. However, the posteriorly opened crescent-shaped blowhole slit indicates Berardius affinity. Additionally, B . minimus has a substantially smaller body size (maximum body length of 6.9 m in physically mature individuals, so far), more spindle-shaped body, and relatively shorter beak, which is approximately 4% of the body length and is not consistent with the morphology of either of the known Berardius species. Figure 4 Fresh carcass of Berardius minimus (male, 662 cm) found stranded on 10 November 2012 in Sarufutsu Hokkaido. ( A ) Ventral view of the carcass. Note the whole body is almost black except for the faintly white beak. ( B ) The relatively short beak of the same individual (photos taken by Yasushi Shimizu). Full size image Body colour is almost black with a pale white portion on the rostrum; this is in contrast to B . bairdii , which is described as “slatish” 4 or “slate grey” 6 , 7 or B . arnuxii , which is described as black 30 or light grey 31 . The greyish tone of the B . bairdii body is mainly attributed to the dense healed scars that are probably caused by intraspecific conflicts and/or behaviour. At least in adult and subadult individuals of B . minimus , cookie-cutter shark bites are fairly conspicuous, but not to the extent as usually seen in some other species such as Ziphius cavirostris , Mesoplodon densirostris , and/or Balaenoptera borealis . The darker body colour with almost no scars produces a sharp contrast with the healed cookie-cutter shark bites, which are white and very conspicuous against the black body of B . minimus . The beak is much shorter than in the other two Berardius species. In B . bairdii , the head proportions are extremely small, and are much smaller than that of B . minimus . Body colour is almost uniformly dark brown with a whiter portion at the tip of rostrum. No white patch on the belly was confirmed in B . minimus . An illustration of an adult male of B . minimus is shown as Fig. 5 . At present, we do not know what adult females look like. Figure 5 Illustrations of ( A ) Berardius minimus , and ( B ) B . bairdii . The black bars show 1 m. In general appearance, B . minimus resembles a small B . bairdii with a proportionately shorter beak and more spindle-shaped body (drawn by Yoshimi Watanabe, National Museum of Nature and Science). Full size image External measurements As mentioned above, the distinctly small body length of physically mature individuals and proportionately shorter beak are the most reliable characters which indicate that the population in question represents a species that was previously not known to science. Regarding body length, a strong significant difference was found between the body length of male B . bairdii from the Okhotsk Sea (n = 34) 32 and mature male B . minimus (n = 4, Table 1 ) (Welch’s t-test, t = 18.5, P < 0.001). Table 1 List of Berardius minimus specimens that were stranded or drifting and collected in Hokkaido. Full size table To confirm relative rostrum-to-body length, Welch’s t -test was also conducted. For B . 
minimus, four samples in Table 1 were analysed. For B. bairdii, the means and standard deviations for male B. bairdii in the Okhotsk Sea (n = 29) that appeared in Table 2 of Kishiro 32 were used. Rostrum length was standardised by body length, and was 3.62 ± 0.39 SD% (n = 4) for B. minimus and 5.81 ± 0.80 SD% (n = 29) for B. bairdii. Welch's t-test showed a strong significant difference (P = 2.3 × 10−5). The relative length for female B. bairdii was 6.27, which is longer than that of males; note this female was not physically mature. The difference between B. minimus and B. bairdii was obviously larger if the sex-pooled data were used. A strong significant difference was also found between B. minimus and B. bairdii in the Pacific Ocean and Sea of Japan (P < 0.001). Thus, the relative rostrum length of B. minimus was significantly shorter than that of B. bairdii. However, we note that the sample sizes for both B. minimus and B. arnuxii are extremely small, in contrast to B. bairdii. Table 2 Mean, standard deviation, and range of each measurement by species. Skull morphology The skull morphology resembles the skulls of both existing Berardius species, but B. minimus has a distinctly shorter rostrum relative to the condylobasal length, and a smaller bulla and periotic bone. In general, the sutures are more tightly closed in B. minimus than in the other Berardius species. In the hyoid bone, the thyrohyal and basihyal are not fused at all (Fig. 6). Figure 6 Skull of the B. minimus holotype. (A) Dorsal, lateral, and ventral views of the skull. Note the relatively short rostrum. The white bar indicates 10 cm. (B) Anterior and posterior views of the skull. The dorsal view is more triangular, whereas the dorsal views in B. bairdii and B. arnuxii are more pentagonal. The white bar indicates 10 cm. (C) Lingual (inner, upper) and buccal (outer, lower) sides of the left mandible. The white bar indicates 10 cm. (D) Buccal (external) view of the anterior (left) and posterior (right) teeth of the lower jaw. The white bar indicates 1 cm. Superior aspect The following characters are readily recognisable as species-specific. The relative beak length in B. minimus is clearly the smallest among the three Berardius species. The B. minimus skull has much tighter sutures compared with those in both B. arnuxii and B. bairdii. The proportional distance of the anterior end of the maxillae from the tip of the rostrum (i.e. the premaxillae) relative to the condylobasal length of the skull is much smaller in B. minimus (6.93% in NSMT35131) than in the two previously known Berardius species (which have a distance of approximately 10%). The inclination of the occipital bone is stronger in B. minimus, and the occipital plane is much wider compared with the other two species. The antorbital notch is proportionately narrower in B. minimus than in B. bairdii but similar to that in B. arnuxii. The B. minimus rostrum has simple tapering contour lines toward the tip, whereas both contour lines of the rostrum are parallel in B. bairdii and B. arnuxii. The lateral border of the orbit, which consists of the maxilla and frontal bones, is almost parallel to the sagittal plane in B. minimus, but is oblique in the other two species. Lateral aspect The relative rostrum length is obviously shorter in B. minimus, and the B. minimus rostrum also looks much shorter than those of the other two species in side view.
The skull height relative to condylobasal length is much larger (0.41–0.44) in B. minimus than in B. bairdii (0.35–0.40) and B. arnuxii (0.40–0.41). There is a stronger inclination of the higher portion of the occipital plane in B. minimus, and the convexity of the occipital plane is stronger in B. minimus. The temporal fossa is the shallowest in B. minimus, and the medial wall of the fossa is convex, whereas it is concave in B. bairdii and B. arnuxii. Posterior aspect The structure above the temporal fossa is proportionately much larger and higher in B. minimus than in B. bairdii and B. arnuxii, which gives the impression that the B. minimus skull is rather triangular in the posterior view, whereas those of the other two species are pentagonal. Anterior aspect In the frontal view, the lateral expansion of the premaxillae at the posterior is prominent, and the posterior margins of both maxillae are clearly visible in B. minimus. In B. minimus, the height of the skull relative to its width is much greater than in the other Berardius species. The prominential notch and related structures are much higher, more distinct and more rugged in B. bairdii and B. arnuxii. Teeth As in the other two Berardius species, B. minimus has two pairs of teeth only at the tip of the lower jaw. The anterior tooth is much larger than the posterior tooth. Teeth dimensions of the holotype are shown in Table 3 (57-1 and 2, 58-1 and 2). In the holotype specimen of B. minimus, the pulp cavities are almost closed in all teeth other than the right second tooth, where the pulp cavity is open. Table 3 Skulls used for craniometry analyses. Twenty-one specimens (10 Berardius bairdii, seven B. arnuxii, and four B. minimus). Postcranial skeleton The vertebral column has proportionately high spinous processes, as is observed in most ziphiid species (Fig. 7). The bone matrix is coarse and porous, and the bones float on the processing water after the internal soft tissue is removed. In the holotype specimen, the vertebral formula is C. 7, Th. 10, L. 10, Ca. 19, making the total count 46. Among the 7 cervical vertebrae, C1–C3 were fused. L4 and L5 are the tallest vertebrae. Ca10 and 11 are so-called ball vertebrae. Ten chevrons were counted. Ribs are in 10 pairs, among which seven pairs are dual-headed with both costovertebral and costotransversal articulations. The remaining three pairs have only one articulation facet, which articulates with the "transverse" processes of the caudal thoracic vertebrae. No ossified cervical ribs were found. The sternum is composed of five segments. Figure 7 Articulated skeleton of the B. minimus holotype specimen. The paired ossified pelvic bones have a lateral surface that is fairly smooth; however, on the medial surface, approximately two-thirds of the total length is an elevated area where the corpus cavernosum penis attaches. Viewed from the dorsal side, the pelvic bones show a very gentle s-shape. No rudimentary femur or any additional appendicular bone was collected. Pectoral appendage Regrettably, we could not secure all phalangeal bones of the left flipper. On the right side, there are three carpal bones in the proximal row, possibly the Ossa radiale, centrale, and ulnare. In the distal row are another three carpal bones. All five digits have one metacarpal each; the phalangeal formula is 0-5-4-3-2. Multi-measurement comparison Table 2 shows the mean, standard deviation, and range of each measurement by species.
PCA showed that the contribution of the first principal component (PC1) was 73.9%, and the cumulative contribution reached 90% for PC1-6. Thus, linear discriminant analysis was conducted using PC1-6. Table 4 shows the linear discriminant coefficients obtained by linear discriminant analysis (LDA). The linear discriminant variates of each sample are plotted in Fig. 8. The distribution of the linear discriminant variates was very clearly separated by species. Table 4 Linear discriminant coefficients obtained by linear discriminant analysis (LDA). Figure 8 Linear discriminant variates of each sample are plotted. The linear discriminant variates are clearly separated by species. B: Berardius bairdii, A: B. arnuxii, M: B. minimus. Genetic considerations Molecular phylogenetic relationships among the three Berardius species were examined using nucleotide sequence variation of the mitochondrial (mt)DNA control region (CR). The 879-bp complete CR sequence data from eight B. minimus specimens (Table 5) (Acc. Nos AB572006-AB572008 from Kitamura et al. 6, Acc. Nos LC175771-LC175773 in this study, and Acc. Nos KT936580-KT936581 from Morin et al. 7) showed five haplotypes with only 1–4 nucleotide differences without gaps after multiple alignment. Using the CR sequences aligned with the 430-bp B. arnuxii sequences (Acc. Nos AF036229 and AY579532 from Dalebout et al. 16), excluding gaps, the number of nucleotide differences between B. minimus and its congeners was 18–22 for B. bairdii and 25–29 for B. arnuxii. Thus, the mtDNA nucleotide difference between B. minimus and any of its congeners was much greater than the difference between B. bairdii and B. arnuxii, which is 12–16 nucleotides. The observed CR nucleotide differences supported the distinct position of B. minimus in the Berardius tree constructed from 430-bp sequences using the maximum likelihood method, where B. bairdii and B. arnuxii formed a sister clade (Fig. 9). Table 5 Individuals and sequences used in this study. SNH: Stranding Network Hokkaido, Hokkaido, Japan; EW: Ehime University es-Bank, Ehime, Japan; NSMT: National Museum of Nature and Science, Ibaraki, Japan. Figure 9 Maximum likelihood-based molecular phylogenetic relationships among the three Berardius species, with Indopacetus pacificus as the outgroup. See Materials and Methods for details regarding nucleotide sequencing and tree construction. Known distribution As indicated by the map of localities where B. minimus was found (Fig. 10), their known distribution is very limited and occurs between 40°N and 60°N, and 140°E and 160°W. Figure 10 Berardius minimus localities plotted against the B. bairdii distribution map (shaded area, as described by Kasuya 18). Circles show B. minimus localities. The white circle with a black X indicates the B. minimus type locality, whereas the black circle with the white X indicates the B. bairdii type locality. Discussion Kasuya 5, 18 summarised Hokkaido whalers' traditional knowledge. The whalers recognised two types of tsuchi-kujira: the ordinary "tsuchi-kujira" (Berardius bairdii) and the darker and smaller "kuro-tsuchi" (black Baird's beaked whale) or "karasu" (crow). However, it is unclear whether "kuro-tsuchi" and "karasu" describe the same type of whale or whether each name represents a different population. In this study, we described a new species, B. minimus, which corresponds to "kuro-tsuchi".
If "karasu" exists as a third type, it could be a species that is not yet recognised or a Mesoplodon species found in Hokkaido (either M. stejnegeri or M. carlhubbsi). Recognition of these Mesoplodon species around Hokkaido is rather recent; the earliest M. stejnegeri specimen was collected in 1985 33, and the earliest M. carlhubbsi in 2004 34. These Mesoplodon species were not recognised as distinct species by whalers or the media until recently. As was also pointed out by Kasuya 18, Figs 364 and 366 of Heptner 35 hinted at the possibility of a Hyperoodon-like whale in the northern Pacific. The animal in the photo was definitely not Berardius. It could be a species probably about 10 m long with a beak almost like that of Hyperoodon. We suspect this could be an example of an extralimital occurrence of H. ampullatus. Considering the recent sightings of gray whales in the Mediterranean or in Namibia 36, 37, the possibility that vagrant individuals navigate through the Northwest Passage during summer should be studied. The species we described is rather readily recognisable, based on the external characters, by people with whale taxonomy experience. The species has an obviously smaller body size, which is 6.3–6.9 m in the physically mature individuals we have confirmed so far (Morin et al. 7 reported an adult male with a 7.3 m body size). Body size ranges from 9.1–11.1 m in B. bairdii and 8.5–9.75 m in B. arnuxii 38. They have a relatively short beak that is approximately 4% of the body length. They have a dark body colour, which is almost uniformly black with noticeable healed cookie-cutter shark bites forming white dots; this impressively contrasts with the much lighter colouration of B. bairdii, which likely results from healed scratches and scars that were probably caused by intra-specific struggling and bottom-feeding behaviour. Osteologically, the small body size of physically mature individuals is the main defining character of B. minimus. The condylobasal length of the skull is 935–1042 mm, in contrast to 1343–1524 mm in B. bairdii and 1174–1420 mm in B. arnuxii 9. Skull characters indicate a significant influence of this size difference, such as tighter bone sutures compared with those of the other Berardius species. Skull elements of the brain case are relatively large and conspicuous. The vertebral formula of the type specimen is C. 7, Th. 10, L. 10, Ca. 19 (totalling 46), whereas it is C. 7, Th. 9–11, L. 12–14, Ca. 17–22 (47–52) in B. bairdii and C. 7, Th. 10–11, L. 12–13, Ca. 17–19 (47–49) in B. arnuxii 13. The rib count, which reflects the thoracic vertebral count, is 10 in the B. minimus type specimen. As was mentioned above, when comparing the skull sutures in similarly mature individuals of different cetacean species, there is a general tendency for larger adult forms to show a less rigid skull composition. The cetacean facial skull consists of loosely articulated bones, including the maxillae, premaxillae and frontals, which are adhered to the mesorostral cartilage pillar on the vomer by connective tissue. This is physically significant because cetaceans swing their rostrum in the water when foraging, which requires tremendous power, and the flexibility of the skull structure must ease the stress placed on it. In this context, it is quite reasonable that the skull of B. minimus is far more rigidly composed than those of the far larger species, such as B. bairdii and B. arnuxii. This implies that the adult size of B.
minimus is essentially far smaller than that of the other two Berardius species. The molecular biology of B. minimus was previously discussed by Kitamura et al. 6, and specific genome characters were identified only in individuals collected from Hokkaido. However, we found a skull with B. minimus characters in the collection of the USNM that had been collected from Unalaska Island in 1943. Additional individuals were detected among the samples collected in the Aleutian area, and further analyses and considerations were conducted and discussed by Morin et al. 7. Further detailed analyses of Berardius species in both the northern and southern hemispheres are needed to explain Berardius speciation processes. The currently recorded B. minimus distribution is very limited and lies between 40°N and 60°N, and 140°E and 160°W. The specimens bear fairly dense cookie-cutter shark (Isistius brasiliensis) bites. The cookie-cutter shark is understood to be a tropical to warm-temperate species, and its northern limit in the western North Pacific is reported to be 30°N to 43°N 39. However, the southern limit of the B. minimus distribution might extend further south. Although the species identities of B. arnuxii and B. bairdii have been previously debated, we described another species of this genus. However, it is unclear whether B. minimus speciation occurred before or after the antitropical split of B. arnuxii and B. bairdii. Additionally, the area where Berardius speciation took place should be examined in the future. Methods Specimens examined The specimens of this unknown species, which were collected in Hokkaido, are listed in Table 1. No live animals were used for the current research. Observations of the external appearance and morphometrics, observations of the skeleton (especially the skull), skull measurements and molecular phylogenetic analyses were conducted. External morphology and measurements External observations of the five individuals of Berardius minimus (three physically mature males, one subadult female, and the head of one neonate female) were made, and external morphometrics following previous studies 32,40 (Tables 6 and 7) were measured on four B. minimus (all physically mature males; Table 1). Raw data examination revealed that body length and the ratio of beak length to body length differed significantly, and Welch's t-test was applied to these variables. Table 6 External measurements of Berardius minimus used for the comparison with B. bairdii. Full size table Table 7 Measured external morphometric characters for B. bairdii as described in previous studies 32,40. Full size table Skeletal morphology and measurements of the skull Observations of the skeleton, especially of the skull, and skull measurements were made for 21 specimens (10 B. bairdii, seven B. arnuxii, and four B. minimus) (Table 2). Specimens are stored at the USNM, NMNS, MNHN, Natural History Museum of London (BMNH), and Museo Acatushún (MA). Multivariate analysis To examine the differences in morphological features among species, a multivariate analysis was conducted. To account for the effect of body-size differences among species, a principal component analysis (PCA) was conducted on the 27 measurements shown in Table 5 for the 21 samples (four B. minimus, 10 B. bairdii, seven B. arnuxii) shown in Table 3. For all variables, the measured values used in this analysis are indicated in bold type.
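The PCA-then-LDA workflow described here, and detailed in the next paragraph, can be sketched as follows. The original calculations used the prcomp and lda functions in R; this is an equivalent, illustrative Python version, and the measurement matrix is randomly generated placeholder data rather than the real skull measurements.

```python
# Minimal sketch of the PCA-then-LDA pipeline used for the skull morphometrics
# (illustrative only; the data below are random placeholders).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Placeholder: 21 specimens x 27 skull measurements, with species labels.
X = rng.normal(size=(21, 27))
labels = np.array(["bairdii"] * 10 + ["arnuxii"] * 7 + ["minimus"] * 4)

# Step 1: PCA; keep the leading components (PC1-6 in the paper, which
# together explained ~90% of the variance).
pca = PCA(n_components=6)
scores = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))

# Step 2: LDA on the retained PC scores; the discriminant variates of each
# specimen can then be plotted and inspected for separation by species.
lda = LinearDiscriminantAnalysis(n_components=2)
variates = lda.fit_transform(scores, labels)
print("first specimen's discriminant variates:", variates[0])
```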
A linear discriminant analysis (LDA) was then conducted to compare species using the scores obtained from the principal component analysis (PCA). Calculations were carried out using the "prcomp" and "lda" functions in R ver. 3.3.1 41. Nucleotide sequence analysis and molecular phylogeny The 18 mtDNA control region (CR) sequences analysed (Table 5) included sequences from three B. minimus specimens (Acc. Nos LC175771-LC175773 for SNH12044, SNH12054, and SNH14016, respectively) and 15 previously reported sequences, which included seven for B. bairdii (Kitamura et al. 6, AB571999-AB572005), five B. minimus (Kitamura et al. 6, AB572006-AB572008, updated complete sequences August 2016; and Morin et al. 7, Acc. Nos KT936580-KT936581), two B. arnuxii (Dalebout et al. 16, Acc. Nos AF036229 and AY579532), and one Indopacetus pacificus (Kitamura et al. 6, AB572012) as an outgroup. I. pacificus was selected because it belongs to the same family but is in a rather distant genus, as inferred from a previous CR phylogenetic tree 6. All the newly collected samples for the nucleotide sequence analysis and molecular phylogeny were officially transferred to the authors from the original sample holder, the Stranding Network Hokkaido. Nucleotide sequencing of the complete mtDNA CR in the three B. minimus was performed using the primer pairs CRL (5′-CAA CAC CCA AAG CTG GAA TTC T-3′) 6 and CRH2 (5′-TAG ACA TTT TCA GTG TCT TGC-3′, newly designed for this study) for PCR amplification, and CRH (5′-CCA TCG AGA TGT CTT ATT TAA G-3′) 6 and LCR (5′-GAC ATC TGG TTC TTA CTT CAG G-3′) 42 as internal sequencing primers. CR sequence alignment was performed using CLUSTAL X 43, and the output was inspected by eye after applying the program's multiple alignment parameters. All CR sequences were trimmed to the shorter length of the B. arnuxii sequences, 430 bp (Dalebout et al. 16, Acc. Nos AF036229 and AY579532), for multiple sequence comparison and molecular phylogenetic analysis. A molecular phylogenetic tree was constructed with the 430-bp mitochondrial CR sequences of all analysed species using the maximum likelihood algorithm in MEGA version 7 44 based on the Tamura 3-parameter model 45 with gamma distribution (parameter = 0.2001), which was suggested to be the best nucleotide substitution model by a model test in this program. Bootstrap values were calculated from 1,000 replicates 46. Data Availability GenBank accession numbers for the sequences used in the molecular phylogenetic analysis are listed in Table 5. Materials examined in this study and the associated museum numbers are listed in Table 1. | In a collaboration between the National Museum of Nature and Science, Hokkaido University, Iwate University, and the United States National Museum of Natural History, a beaked whale species that has long been called Kurotsuchikujira (black Baird's beaked whale) by local Hokkaido whalers has been confirmed as the new cetacean species Berardius minimus (B. minimus). Beaked whales prefer deep ocean waters and have a long diving capacity, making them hard to see and inadequately understood. The Stranding Network Hokkaido, a research group founded and managed by Professor Takashi F. Matsuishi of Hokkaido University, collected six stranded unidentified beaked whales along the coasts of the Okhotsk Sea. The whales shared characteristics of B. bairdii (Baird's beaked whale) and were classified as belonging to the same genus Berardius.
However, a number of distinguishable external characteristics, such as body proportions and color, led the researchers to investigate whether these beaked whales belong to a currently unclassified species. "Just by looking at them, we could tell that they have a remarkably smaller body size, more spindle-shaped body, a shorter beak, and darker color compared to known Berardius species," explained Curator Emeritus Tadasu K. Yamada of the National Museum of Nature and Science from the research team. In the current study, the specimens of this unknown species were studied in terms of their morphology, osteology, and molecular phylogeny. The results, published in the journal Scientific Reports, showed that the body length of physically mature individuals is distinctively smaller than that of B. bairdii (6.2–6.9 m versus 10.0 m). Detailed cranial measurements and DNA analyses further emphasized the significant difference from the other two known species in the genus Berardius. Because it has the smallest body size in the genus, the researchers named the new species B. minimus. Illustrations comparing the new species B. minimus (A) and the Baird's beaked whale (B. bairdii) (B) in the same genus. Credit: Tadasu K. Yamada et al., Scientific Reports. August 30, 2019 "There are still many things we don't know about B. minimus," said Takashi F. Matsuishi. "We still don't know what adult females look like, and there are still many questions related to species distribution, for example. We hope to continue expanding what we know about B. minimus." Local Hokkaido whalers also refer to some whales in the region as Karasu (crow). It is still unclear whether B. minimus (or Kurotsuchikujira) and Karasu are the same species or not, and the research team speculates that Karasu could be yet another species. Dorsal, ventral, and lateral views of the B. minimus skull (from the left). The rostrum is smaller than that of other Berardius species. Credit: Tadasu K. Yamada et al., Scientific Reports. August 30, 2019 | 10.1038/s41598-019-46703-w |
Earth | Using a pore structure inspired by biological fractals to collect uranium from seawater | Linsen Yang et al, Bioinspired hierarchical porous membrane for efficient uranium extraction from seawater, Nature Sustainability (2021). DOI: 10.1038/s41893-021-00792-6 Alexander I. Wiechert et al, The ocean's nuclear energy reserve, Nature Sustainability (2021). DOI: 10.1038/s41893-021-00808-1 Journal information: Nature Sustainability | http://dx.doi.org/10.1038/s41893-021-00792-6 | https://phys.org/news/2021-11-pore-biological-fractals-uranium-seawater.html | Abstract The oceans offer a virtually infinite source of uranium and could sustain nuclear power technology in terms of fuel supply. However, the current processes to extract uranium from seawater remain neither economically viable nor efficient enough to compete with uranium ore mining. Microporous polymers are emerging materials for the adsorption of uranyl ions due to their rich binding sites, but they still fall short of satisfactory performance. Here, inspired by the ubiquitous fractal structure in biology that is favourable for mass and fluid transfer, we describe a hierarchical porous membrane based on polymers of intrinsic microporosity that can capture uranium in seawater. This biomimetic membrane allows for rapid diffusion of uranium species, leading to a 20-fold higher uranium adsorption capacity in a uranium-spiked water solution (32 ppm) than the membrane with only intrinsic microporosity. Furthermore, in natural seawater, the membrane can extract as much uranium as 9.03 mg g−1 after four weeks. This work suggests a strategy that can be extended to the rational design of a large family of microporous polymer adsorbents that could fulfil the vast promise of the oceans to fuel a reliable and potentially sustainable energy source. Main Nuclear power is an important part of modern energy systems, and the demand for it is expected to double before 2040 (ref. 1). Uranium is a key element in the nuclear industry 2,3. In contrast to its limited availability on land (approximately 8 million tons of identified conventional resources) 4, the oceans contain more than 4.5 billion tons of uranium 5, representing an alternative and more abundant resource. However, the low uranium concentration (~3.3 μg l−1) of seawater necessitates the use of high-selectivity, high-capacity sorbents 6. Materials currently under development for effectively extracting uranium from natural water include highly porous materials with high specific surface areas, such as inorganic porous materials 7,8, metal–organic frameworks 9,10,11, covalent organic frameworks 12,13,14 and porous aromatic frameworks 11,15. However, most of these sorbents are available in powder form and are thus unsuitable for actual use at scale. Hence, an increasing number of efforts have been made to develop self-standing, easily prepared and economical adsorbents. Recent progress in the development of polymers of intrinsic microporosity (PIMs) allows for the design of sorbents with high specific surface areas that benefit from the highly rigid and contorted backbones of these polymers 16,17,18. The high solubility of these polymers in common organic solvents makes it easy to prepare membranes of these materials 19. These advantages indicate that PIMs and their functionalized derivatives have great potential for use in static adsorption applications, such as the recovery of uranium from seawater.
Previous studies have attempted to use amidoxime-functionalized PIM-1 for the removal of uranyl ions from aqueous solution or seawater 20,21. However, owing to the low capacity achieved by the polymer despite its large specific surface area, its high porosity cannot be fully utilized during the adsorption process. Considering that some of the pores in PIMs are often subnanometre-sized 17, uranium adsorption on the membrane surfaces or fibres will invariably reduce the pore size, thus hampering uranium migration from the liquid phase into the sorbent. In this situation, mass transfer, and not the specific surface area or porosity, is the primary factor determining the adsorption capacity. Moreover, the resistance to mass transfer inside the microporous polymeric network has raised many concerns 22,23. Supporting structures such as porous foam have been used to load the polymer, thus shortening the ion diffusion path 24. However, despite the increased adsorption capacity, the inactive mass introduced by the foam and the low space utilization caused by massive macropores are still problems that need to be resolved. The internal structure of microporous sorbents therefore needs to be carefully designed. Here, to ensure a high ion transfer rate and ready access to the adsorption sites in microporous membranes, we report a bioinspired, self-supporting amidoxime-functionalized polymer membrane with hierarchical porous structures for the efficient extraction of uranium from seawater. Fractal networks in organs such as blood vessels usually exhibit gradually decreasing diameters for branching and space-filling within a finite volume (Fig. 1a,b). This allows for mass transfer with high efficiency using only a small amount of energy 25,26,27,28,29. Similarly, the artificial bionic membranes, which were fabricated by traditional non-solvent-induced phase separation (NIPS), contain high numbers of interconnected multiscale channels (Fig. 1c) and exhibit high permeability and reduced resistance to uranium migration. These channels, formed by the exchange of solvent and non-solvent, effectively divide the microporous polymer into smaller structural units suitable for mass transfer through micropores. Functionalization with amidoxime provides abundant active sites to bind the uranyl cations (Fig. 1d). Consequently, the uranium-adsorption capacity of the synthesized hierarchical porous membrane in a 32-ppm uranium-spiked solution is approximately 20 times that of the solution-cast membrane with intrinsic microporosity. The remarkable difference in the performances of the two membranes implies that the efficiency of mass transfer is the main factor determining the adsorption efficacy of microporous-membrane-based adsorbents. Furthermore, the membrane exhibits an uptake capacity of 9.03 mg g−1 after four weeks of adsorption in actual seawater, which is among the highest for membrane-based uranium extraction materials. Our work provides a universal method for enhancing the applicability of porous polymers as efficient membrane-based uranium sorbents. Fig. 1: Biological inspiration and schematic of the bioinspired hierarchical porous membrane. a, Hierarchical networks of blood vessels in living organisms. b, Branched tube-based model inspired by the mammalian circulatory system. Increasing the number of branch points while reducing the branch diameter allows for efficient substance transfer at low energy consumption. c, Schematic illustration of the bioinspired hierarchical porous membrane.
It contains pores with sizes on three different scales, including intrinsic micropores. d, Working principle of the hierarchical porous membrane for uranium adsorption. Amidoxime functionalization provides specific binding sites. Full size image Results Synthesis and characterization The synthesis of PIM-1 and of PIM-1 with amidoxime groups (AO-PIM-1) is described in the Methods and Supplementary Fig. 1 (ref. 30). The number-average molecular weight of PIM-1 was 1.27 × 10^5 g mol−1, and the weight-average molecular weight was 3.51 × 10^5 g mol−1 (Supplementary Fig. 2). The Fourier transform infrared spectra of PIM-1 and AO-PIM-1 are shown in Fig. 2a. The disappearance of the nitrile peak (C≡N) at 2,240 cm−1 and the emergence of peaks at 1,655 cm−1 (C=N) and 915 cm−1 (N–O), which are related to the stretching vibrations of the amidoxime groups, confirmed the successful functionalization of PIM-1. The nuclear magnetic resonance spectrum of AO-PIM-1 (Fig. 2b) exhibits new peaks at 5.80 (–NH2) and 9.45 (–OH) ppm. Thermogravimetric analysis was performed to investigate the thermal stabilities of powdered PIM-1 and AO-PIM-1 samples (Supplementary Fig. 3). Compared with PIM-1, AO-PIM-1 underwent an additional step at approximately 200–300 °C before the degradation of the backbone; this step indicated the degradation of the amidoxime group. This result confirmed that the amidoxime groups are thermally stable in the conventional temperature range corresponding to uranium recovery 5. Fig. 2: Characterization of PIM-1, AO-PIM-1 and the hierarchical porous membrane. a, Fourier transform infrared spectra of PIM-1 and AO-PIM-1. The disappearance of the nitrile peak and the appearance of new characteristic peaks related to amidoxime confirm the successful functionalization of PIM-1. b, Nuclear magnetic resonance spectra of PIM-1 and AO-PIM-1. New peaks related to hydroxyl and amino groups appeared after modification. CDCl3, deuterated chloroform; DMSO, dimethyl sulfoxide-d6. c, N2 adsorption/desorption isotherms of PIM-1 and AO-PIM-1 at 77 K. Hydrogen bonding interactions between amidoxime groups result in tighter stacking of polymer chains, thus reducing specific surface area. d, Scanning electron microscopy images showing a cross-section of the hierarchical porous membrane; nanoscale pores are present on the walls of through-membrane channels. Scale bars, 50 μm (left) and 1 μm (right). e, Wetting process of the compact layer. The compact layer took 18 min to change from the dry state to the superhydrophilic state, indicating that the thin and dense structure was highly permeable. Full size image Gas adsorption measurements were performed to determine the porosities of powdered PIM-1 and AO-PIM-1 samples. Previous studies that focused on gas separation or adsorption have tended to investigate the subnanometre structures of PIMs rather than their midsized micropores and mesopores, which are essential for liquid-phase mass transfer 31,32. Hence, N2 adsorption (at 77 K) was performed to estimate the Brunauer–Emmett–Teller surface areas of PIM-1 and AO-PIM-1, which were found to be 893 and 356 m2 g−1, respectively (Fig. 2c). A decrease in the Brunauer–Emmett–Teller surface area is generally attributable to tighter chain entanglement caused by the hydrogen bond interactions between the introduced amidoxime groups 17,20. The quenched solid density functional theory method was employed to calculate the intraporosities of the materials.
Functionalization with amidoxime decreased the pore volume from 0.650 to 0.318 cm3 g−1, which was still high. The tighter chains primarily reduced the size of the ultramicropores rather than the micropores and mesopores. Consequently, the former did not come into contact with the N2 gas during the measurements. The pore size distribution confirmed that the proportion of mesopores increased after the modification with amidoxime (Supplementary Fig. 4). Fabrication of the hierarchical porous membrane The hierarchical porous and self-supported AO-PIM-1 membrane was fabricated using NIPS. The membrane was characterized using scanning electron microscopy, which confirmed the presence of macropores with an approximate diameter of 20 μm running through the membrane (Fig. 2d). Smaller pores were distributed on the walls of the channels; these pores had sizes in the 300–500 nm range. This branched structure allowed uranium to be transferred into the membrane and transported throughout the system efficiently. The intrinsic pores existing in the nanonetworks could thus be maximally utilized. This behaviour resembles the transfer of substances along natural fractal structures. The formation of a characteristic compact layer is an inevitable consequence of NIPS. This top layer, which is generally too dense, invariably impedes mass transfer. In this study, we attempted to reduce the thickness of the compact layer as much as possible. As a result, this layer was far thinner than the rest of the membrane. The top view of the compact layer shows that it had a loose structure different from that of conventional NIPS membranes (Supplementary Fig. 5). Because the fabricated membrane was not intended for regular purposes such as ion sieving or nanofiltration, these structural characteristics helped minimize the mass transfer resistance attributable to the dense layer, thus facilitating the rapid transport of uranium through the layer. The elemental composition of the membrane was characterized by time-of-flight secondary ion mass spectrometry (ToF-SIMS). Taking the reaction time and these characterizations together, the conversion of the cyano groups can be considered complete (100%) (ref. 17). The nitrogen signal therefore represents the distribution of amidoxime groups (Supplementary Fig. 6) 33,34. We further investigated the wetting time of the top compact structure to estimate the permeability of the membrane. Figure 2e shows that the dense layer initially had a contact angle of 88.6° ± 0.2° and that, after an 18-min spontaneous infiltration process, it became superhydrophilic owing to the increase in water content. The short infiltration time indicates that the dense layer did not prevent the solution from passing through the membrane. Moreover, compared with adsorption processes that last days or even weeks, a short wetting time would make it easy for uranium to travel quickly through the bulk phase and into the sorbent. Adsorption performance with respect to uranium-spiked water samples The uranium(VI) sorption performance of the hierarchical porous membrane was evaluated by immersing the membrane in 8-, 16- and 32-ppm uranium-spiked water samples (pH 5.5) for 50 h. The uranium concentrations of the solutions were measured from their ultraviolet–visible absorption spectra. Arsenazo(III) was used as the chromogenic agent (Supplementary Fig. 7) 35. The colour of the membrane changed from white to yellow after uranium adsorption (Fig. 3a).
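The Arsenazo(III) quantification just described amounts to a linear calibration inverted for the unknown sample. Here is a minimal sketch of that step; all numbers are invented placeholders, not the calibration data used in the study.

```python
# Minimal sketch of quantifying uranium by the Arsenazo(III) method: build a
# linear calibration from standards (absorbance at 652 nm versus concentration)
# and invert it for an unknown sample. All numbers are placeholders.
import numpy as np

std_conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])      # ppm, placeholder standards
std_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.76])   # absorbance at 652 nm

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # Beer-Lambert-type line

def to_concentration(absorbance: float) -> float:
    """Invert the calibration line to get concentration in ppm."""
    return (absorbance - intercept) / slope

print(f"sample at A = 0.30 -> {to_concentration(0.30):.2f} ppm uranium")
```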
Adsorption was verified on the basis of the chemical composition of the uranium-loaded membrane, which was determined using X-ray photoelectron spectroscopy. Figure 3b shows the characteristic double peaks of U 4f5/2 (389.8 eV) and U 4f7/2 (378.7 eV) in the spectrum of the membrane after adsorption, in contrast to the spectrum before use, confirming the presence of uranium on the membrane surface. Further analyses were carried out on the basis of the high-resolution X-ray photoelectron spectroscopy U 4f5/2 and U 4f7/2 spectra, and the distribution of elemental uranium was mapped using energy-dispersive X-ray spectrometry (Supplementary Figs. 8 and 9). Fig. 3: Uranium adsorption performance of the hierarchical porous membrane. a, Digital photographs of the membrane before and after uranium adsorption. b, X-ray photoelectron spectroscopy profile of the membrane before and after uranium adsorption. The characteristic double peaks of uranium are located at 389.8 eV and 378.7 eV. c, Uranium adsorption capacities of the hierarchical porous membrane in uranium-spiked water with initial uranium concentrations of 8, 16 and 32 ppm. d, Mechanism of the two-step adsorption of uranyl ions by the hierarchical porous membrane. e, Uranium adsorption kinetics of the hierarchical porous NIPS membrane and two types of solution-cast control membrane in 32-ppm uranium-spiked solution. The capacity of the hierarchical porous membrane is about 20 times higher than that of the solution-cast membrane with intrinsic micropores, and about 5 times higher than that of the solution-cast membrane with macropores, which was made from a silicon template. f, Electrochemical impedance spectroscopy curves of the hierarchical porous NIPS membrane and two kinds of solution-cast control membrane in 0.3 M UO2(NO3)2 solution. Z represents total impedance, Re(Z) is the real part of impedance and Im(Z) is the imaginary part of impedance. The error bars in c and e represent the standard deviation (n = 3). Full size image The adsorption kinetics were investigated to estimate the adsorption rate. After adsorption for 50 h (Fig. 3c), the saturated adsorption capacities of the membrane with respect to the 8-, 16- and 32-ppm solutions were 124.17, 197.92 and 345.94 mg g−1, respectively. However, in contrast to adsorption kinetics governed solely by micropore diffusion, the hierarchical porous membrane exhibited two-step sorption behaviour. In the first step, adsorption rose rapidly and reached an initial plateau within 6 h because of the quick transfer of uranium from the bulk phase to the surfaces of the artificial macropores. In the second step, the uranyl ions diffused into the intrinsic micropores of the polymer (Fig. 3d). Uranium diffusion into the micropores would be slower than the external mass transfer process. The observed differences in the adsorption kinetics confirmed the advantages of using a membrane with biomimetic branching channels, as these allowed the ions to diffuse and fill the membrane quickly before being adsorbed in the micropores. In fact, this unique adsorption behaviour has been observed previously in a multiscale porous system based on particulate sorbents 36. In the case of multiscale porous systems, the pores, whose sizes can span several scales, create an adsorption environment similar to that of intraparticle diffusion even if the sorbent is not particle-like in nature. Hence, the sorption processes were analysed using the Weber and Morris intraparticle diffusion model 37, as sketched below.
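A minimal sketch of this intraparticle-diffusion analysis, together with the pseudo-second-order fit reported in the next paragraph, is given below. The kinetic data are invented placeholders rather than the measured values, and a single Weber–Morris regime is fitted for simplicity (the paper fits three regimes piecewise).

```python
# Minimal sketch of the kinetic fits used in this section: the Weber-Morris
# intraparticle diffusion model (q_t = k_id * t^0.5) and the pseudo-second-order
# model (t/q_t = 1/(k2*qe^2) + t/qe, fitted here in its linearized form).
# The (t, q_t) data below are invented placeholders.
import numpy as np

t = np.array([30, 60, 120, 360, 720, 1440, 2160, 3000])   # time, min
qt = np.array([40, 70, 105, 180, 240, 300, 330, 345])     # capacity, mg/g

# Weber-Morris: the slope of q_t versus t^0.5 gives k_id (one regime fitted).
kid, wm_intercept = np.polyfit(np.sqrt(t), qt, 1)
print(f"Weber-Morris fit: k_id = {kid:.2f} mg g^-1 min^-0.5")

# Pseudo-second-order, linearized: regress t/q_t on t.
slope, icpt = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope            # equilibrium capacity, mg/g
k2 = slope**2 / icpt        # rate constant, g mg^-1 min^-1
print(f"pseudo-second-order fit: q_e = {qe:.1f} mg/g, k_2 = {k2:.2e} g mg^-1 min^-1")
```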
The linear fitting of the q_t–t^0.5 curves indicated that the curves consisted of three regimes with different slopes (Supplementary Fig. 10), suggesting that the mass transfer process is controlled by the following three steps: external surface transfer, surface-to-interior ion migration and intrapore diffusion. The adsorption kinetics of the membrane were further evaluated using a pseudo-second-order kinetics model, and the correlation coefficients for the 8-, 16- and 32-ppm solutions were determined to be 0.9954, 0.9956 and 0.9933, respectively (Supplementary Fig. 11). The corresponding q_e and k_2 values were 133.16 mg g−1 and 2.49 × 10^−5 g mg−1 min−1 for the 8-ppm solution, 218.34 mg g−1 and 1.80 × 10^−5 g mg−1 min−1 for the 16-ppm solution, and 390.63 mg g−1 and 7.34 × 10^−6 g mg−1 min−1 for the 32-ppm solution. The adsorption isotherm of the membrane was investigated using uranium-spiked water samples with concentrations of 4–150 ppm. The uranium uptake capacities could be fitted well using the Langmuir model, and the theoretical maximum adsorption capacity (q_m) was calculated to be 569.93 mg g−1 (Supplementary Fig. 12). The outstanding adsorption performance of the hierarchical porous membrane in aqueous solutions suggests that it is a competitive candidate for extracting uranium from actual seawater. To explore the improved uranium adsorption capacity of the membrane, which depends on the introduced hierarchical pores, two solution-cast membranes with thicknesses similar to that of the compact layer of the NIPS membrane were prepared as controls. One such membrane (with a thickness of 7 μm; Supplementary Fig. 13) had intrinsic micropores, and another, made from a silicon template (with a thickness of 8 μm; Supplementary Fig. 14), had both 20-μm macropores and intrinsic micropores. The solution-cast membrane with intrinsic micropores showed an unsatisfactorily low adsorption capacity of 15.66 mg g−1 in the 32-ppm solution (Fig. 3e). This value is about 20 times lower than that of the NIPS membrane. The solution-cast membrane with transmembrane macropores showed an adsorption capacity of 62.98 mg g−1, which is about five times lower than that of the NIPS membrane. The ionic conductivities of these three membranes were measured by electrochemical impedance spectroscopy to evaluate their ion transport behaviours. Figure 3f shows the Nyquist plots of the membranes in a 0.3 M UO2(NO3)2 solution, and the equivalent circuit shown in Supplementary Fig. 15 was used to calculate the membrane resistance. The solution-cast membrane with macropores had the lowest resistance, 202.8 Ω, owing to its transmembrane channels. However, the thicker NIPS membrane exhibited a lower resistance (208.7 Ω) than the solution-cast membrane containing intrinsic micropores (230.2 Ω). ToF-SIMS depth profiling was further used to explore the working mechanism of the hierarchical pores. Supplementary Fig. 16 shows the diffusion depth of uranium in the micropores after the aforementioned solution-cast membrane had adsorbed uranium for 24 h in a 32-ppm aqueous solution. The depth was approximately 200 nm, which explains why the micrometre-thick solution-cast membranes showed insufficient adsorption capacities. In contrast, hierarchical channels allow for rapid ion transfer to the surface and regulate the microstructure near the intrinsic micropores, making it possible to fully exploit the high porosity of the membrane.
It can therefore be concluded that the artificial fabrication of hierarchical structures in the AO-PIM-1 membrane efficiently improved the adsorption performance of the membrane. pH dependence of uranium adsorption performance of the hierarchical porous membrane The pH of the external environment is a key factor affecting the uranium adsorption performance of amidoxime-based sorbents. The speciation of uranium in aqueous solution and natural seawater is controlled by pH and has previously been identified as an important factor influencing the adsorption capacity (Supplementary Fig. 17) 38,39,40. Moreover, surface charge affects the diffusion and local concentration of uranium in micropores. As shown in Fig. 4a, when the pH value of the uranium-spiked solution (32 ppm) was lower than the surface isoelectric point, the repulsive forces between the positively charged surface and the uranyl cations and the protonation of the hydroxyl groups cooperatively reduced the adsorption capacity 41. Once the membrane became negatively charged, its uranium uptake ability increased significantly. However, further increasing the alkalinity hydrolyses the uranyl cations and changes their speciation, thus reducing the adsorption capacity of the membrane. Finite element method and molecular dynamics (MD) simulations were performed to further analyse the surface-charge-controlled diffusion process and to explain the fluctuations in the adsorption capacity with pH. Finite element method simulations based on the Poisson–Nernst–Planck theory are suitable for modelling branched macropores, which are unaffected by the microscopic interactions between the molecular chains. Here, the macropores were treated as one-dimensional nanochannels. The positively charged surface (pH 4.0) repels the similarly charged uranyl ions, leading to a low concentration distribution within the nanochannels (Fig. 4b). When the surface charge becomes negative (pH 6.0), the ion concentration in the nanochannels increases, with the ions distributed mainly near the surface. An increase in the negative charge (pH 8.0) further promotes the diffusion of the uranyl ions, which then suffuse the entire nanochannel. These results confirmed that the surface charge controls the diffusion-based transport phenomenon, while the aggregation or dispersion of the uranyl ions near the secondary units affects their direct transfer within the micropores. Fig. 4: pH dependence of uranium adsorption performance of the hierarchical porous membrane. a, Effect of pH on uranium adsorption and zeta potential of the hierarchical porous membrane in 32-ppm uranium aqueous solution. The error bars represent the standard deviation (n = 3). b, Results of the numerical simulations of uranyl diffusion in macropores at different pH levels. Uranyl cations tend to accumulate near the negatively charged surface. c, Results of the all-atom MD simulations of interactions between uranyl cations and the three AO-PIM-1 models (neutral, anion and cation) after equilibrium. The negatively charged backbone allows the uranyl cation to diffuse more easily into the intrinsic micropores. Full size image Figure 4c shows the interactions between the uranyl ions and the three AO-PIM-1 models, as determined through MD simulations of ion diffusion through the intrinsic micropores produced by the rigid molecular chain.
The model of a single polymer chain included ten monomer units, and the terminal groups were capped with H atoms (Supplementary Fig. 18). Compared with the pre-equilibrium state (Supplementary Fig. 19), uranyl ions are distributed more closely around the anionic AO-PIM-1 backbone after equilibrium, indicating that the interactions between the uranyl cations and the anionic AO-PIM-1 are stronger than those in the other two models. Compared with the other two models, the neutral model had the tightest chain stacking owing to the absence of electrostatic repulsion, hindering the ions from approaching the adsorption sites. In addition, the electrostatic interaction energies between the uranyl cations and the anionic, neutral and cationic AO-PIM-1 were determined to be −8,560.7, −24.0 and 12.5 kJ mol−1, respectively. The radial distribution functions for the O moiety of the OH group in AO-PIM-1 and the U moiety of the uranyl cation were also determined (Supplementary Fig. 20). The interactions between O and U in the anion model were the strongest among the three polymer models evaluated, and the distance between them was approximately 2.48 Å. The simulation results (including those of the structure, the calculated electrostatic interaction energies and the radial distribution functions) confirmed that the adsorption ability of the AO-PIM-1 membrane with respect to uranyl cations was the highest for the anion model, lower for the neutral model and the lowest for the cation model. These combined results confirmed that the membrane maintains a high adsorption capacity in a slightly alkaline environment. The pH of seawater usually fluctuates near 8.0, which implies that the hierarchical porous membrane would exhibit optimal performance under natural conditions. Reusability and selectivity of the hierarchical porous membrane for uranium adsorption The reusability and selectivity of the hierarchical porous membrane were studied using adsorption/desorption experiments over five cycles in a simulated seawater system, which was prepared by adding extra co-existing ions to natural seawater (including U, V, Fe, Co, Cu, Zn, Pb, Ba and Cd; Supplementary Table 1). The uranium-loaded hierarchical porous membrane could be regenerated by immersion in an eluent (20 ml of 0.5 M HCl solution) for 1 h. As shown in Fig. 5a, the uranium adsorption capacity remained stable, in the range of 10.71–13.54 mg g−1. The elution rate of uranium remained above 98%. The high reusability of the membrane indicates that it should be able to endure long periods of extraction, reducing the associated economic cost. Moreover, during the five adsorption/desorption cycles, the adsorption capacities for the other co-existing ions were also measured. As with other amidoxime sorbents, vanadium (10.99–13.24 mg g−1) and iron (3.15–12.55 mg g−1) were the major competing ions (Fig. 5b) 42,43,44. The presence of co-existing interfering cations would reduce the uranium adsorption capacity. Furthermore, carbonate ions are also non-negligible because uranium usually exists as carbonate species in natural seawater. The adsorbent therefore needs to compete with carbonate ions to adsorb the uranyl ions. According to a previous study, chemical reaction may be the rate-limiting step for uranium extraction in natural seawater 23. The amount of each element that remained after elution was estimated from the ion concentration of the eluent (Supplementary Fig. 21).
Compared with that of uranium, the elution of vanadium was more difficult during the desorption tests. Raising the eluent temperature and extending the elution time may further reduce the residual fraction, but the accompanying damage to the polymer chains should also be considered. Fig. 5: Reusability of the hierarchical porous membrane and its uranium adsorption performance in seawater. a, Uranium adsorption capacity and elution rate over five adsorption/desorption cycles in simulated seawater. b, Capacities of uranium and co-existing ions over five adsorption/desorption cycles in simulated seawater. c, Uranium extraction kinetics for 100 l of natural seawater over four weeks and variation in uranium concentration; the amount of sorbent used was 10 mg. The error bars represent the standard deviation (n = 3). d, ToF-SIMS ion image of uranium on the hierarchical porous membrane after uranium extraction in natural seawater. MC represents the range of the colour scale and TC represents the total signal intensity (total counts). Full size image Uranium extraction from natural seawater using the hierarchical porous membrane To confirm the ability of the hierarchical porous AO-PIM-1 membrane to extract uranium from natural seawater, a 10-mg membrane was placed between two pieces of sponge in a dialysis tube, and 100 l of natural seawater was continuously pumped through it (Supplementary Fig. 22). The membrane showed an adsorption capacity of 6.63 mg g−1 in the first week, which is higher than the Uranium Extraction from Seawater standard (6 mg g−1 in 21 days) (Fig. 5c) 11,15. After 28 days of adsorption, the final concentration of uranium was reduced from 3.32 to 2.42 ppb, and the membrane achieved a uranium recovery capacity of approximately 9.03 ± 0.15 mg g−1. The membrane was also digested in heated concentrated nitric acid to measure the adsorption capacity, which was 8.85 ± 0.09 mg g−1. This high capacity places this membrane among the best-performing Uranium Extraction from Seawater materials reported so far (Supplementary Table 2). To further confirm the uranium extraction by the hierarchical porous membrane, the membrane surface was mapped using ToF-SIMS after the 28-day adsorption process in natural seawater. Figure 5d shows the ToF-SIMS ion image of uranium. This image clearly shows the uranium distribution, confirming the successful extraction of uranium by the hierarchical porous membrane. The signal had a scattered distribution and a low intensity compared with those of C, N and O owing to the low concentration of uranium loaded on the membrane. Discussion Inspired by the high mass transfer efficiency of natural fractal networks, we prepared a hierarchical porous membrane based on amidoxime-functionalized PIM-1. The branched structure of the membrane helps avoid the restrictions on mass transfer caused by the decrease in pore size during adsorption in the subnanometre-sized pores. This allows the high specific surface area of the membrane to be maximally utilized. The adsorption capacity of the hierarchical porous membrane in a 32-ppm uranium-spiked solution was 20 times higher than that of a solution-cast membrane with only intrinsic micropores. Finite element method and MD simulations were performed to elucidate the effects of the surface charge and pH on the adsorption capacity. In natural seawater, the adsorption capacity of the membrane after one and four weeks was 6.63 mg g−1 and 9.03 mg g−1, respectively.
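As a sanity check on the seawater numbers just quoted, the mass balance that appears as equation (5) in the Methods can be evaluated directly; a minimal sketch follows, using only values reported in this section.

```python
# Sanity check (sketch) of the seawater uptake quoted above, using the mass
# balance q = (C0 - Ct) * V / m given as equation (5) in the Methods.
C0 = 3.32e-3   # initial uranium concentration, mg per litre (3.32 ppb)
Ct = 2.42e-3   # concentration after 28 days, mg per litre (2.42 ppb)
V = 100.0      # seawater volume, litres
m = 0.010      # adsorbent mass, grams (10 mg)

q = (C0 - Ct) * V / m
print(f"uptake capacity: {q:.2f} mg per gram of membrane")
# -> 9.00 mg/g, consistent with the reported 9.03 +/- 0.15 mg/g
# (the small difference presumably reflects rounding of the concentrations).
```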
Considering the high performance of the membrane, it is a promising adsorbent for uranium extraction from seawater. Moreover, this method of forming hierarchical pores in membranes based on microporous polymers is ideal for reducing the mass transfer resistance of membranes. Methods Synthesis of PIM-1 Synthesis of PIM-1 powder was performed according to the published method 45. 3,3,3′,3′-tetramethyl-1,1′-spirobisindane-5,5′,6,6′-tetrol (10.21 g, 30 mmol), 2,3,5,6-tetrafluoroterephthalonitrile (6.00 g, 30 mmol), anhydrous K2CO3 (8.40 g, 60.8 mmol), dimethylacetamide (200 ml) and toluene (100 ml) were added to a 500-ml three-necked round-bottom flask with a Dean–Stark trap, and the mixture was heated in an oil bath pre-heated to 160 °C for 1 h under N2. After the reaction was stopped, the polymer was washed with water and methanol several times, and the solid was collected. The polymer was then dissolved in chloroform and re-precipitated by adding methanol for further purification. The bright-yellow product obtained after filtering was then dried at 110 °C for 12 h. Synthesis of AO-PIM-1 Amidoxime functionalization was carried out following a reported procedure 20. PIM-1 powder (1.8 g) was dissolved in tetrahydrofuran (120 ml) and heated to 69 °C. Hydroxylamine solution (14 ml, 50 wt% in H2O) was added dropwise, and the reaction mixture was then refluxed for 23 h under N2. After being cooled to room temperature, the yellow solution was poured into ethanol (500 ml), and the precipitated polymer was then washed with ethanol several times. The white product was dried at 100 °C for 12 h. Membrane fabrication AO-PIM-1 powder was dissolved in N,N-dimethylformamide to prepare a yellow solution with a concentration of 10 wt%. For the hierarchical porous membrane, the polymer solution was tape cast on a clean glass plate and immediately precipitated in a water bath (~50 °C). The white membranes were soaked in water for 12 h, and the water was replaced twice during this time. For the solution-cast membrane, the polymer solution was tape cast on a clean glass plate and then evaporated at room temperature overnight. The transparent membrane was dried in an oven at 100 °C for 24 h to remove the residual solvent. For the solution-cast membrane with macropores, 100 μl of the polymer solution was drop cast on a 2 cm × 2 cm silicon template with a column array. The column diameter was 20 μm, and the column spacing was 150 μm. The other operations were kept the same as for the solution-cast membrane. Characterization The Fourier transform infrared spectra were recorded with a spectrometer (Varian Excalibur 3100) in transmittance mode. The nuclear magnetic resonance spectra were collected at room temperature using a Bruker Avance 400 spectrometer. The pore characteristics of the polymer powder were evaluated from N2 sorption isotherms at 77 K determined with a Quantachrome gas sorption analyser (Quadrasorb SI-MP). The thermal properties of the polymer were characterized with a Diamond TG-DTA6300 thermogravimetric analyser. The elemental analyses were performed by X-ray photoelectron spectroscopy (Thermo Scientific Escalab 250Xi) and ToF-SIMS (Ulvac-Phi PHI nanoToF II). The morphologies of the membranes were examined by scanning electron microscopy (SEM; Hitachi S-4800). The water contact angle of the hierarchical porous membrane was measured using a Dataphysics water contact angle system (Dataphysics OCA50).
The UV–Vis spectra were recorded with a UV–Vis spectrophotometer (Shimadzu UV-2600). The concentration of uranium ions was determined using an inductively coupled plasma mass spectrometer (ICP-MS; PerkinElmer NexION 300X). The surface zeta potential of the hierarchical porous membrane was measured with a zeta potential analyser (Anton Paar Surpass 3). The electrochemical workstation used was a CHI660B. Uranium adsorption in spiked water To test the uranium adsorption capacity of the hierarchical porous membrane, 2 mg of the membrane was immersed in 200 ml of uranium-spiked deionized water and shaken for a fixed time. Each membrane was treated with 1 mM NaOH solution for 30 min at 60 °C before adsorption 42. The pH of the solution in the kinetics and isotherm experiments was adjusted to 5.5 using NaOH solution. The concentrations of uranium in the solution were determined via UV–Vis absorption spectra on the basis of the specific peak at 652 nm for the complex between the Arsenazo(III) chromogenic agent and uranyl 35. The uranium loaded on the membrane can be calculated using: $$m_{\mathrm{U}} = (C_0 - C_t) \times V$$ (1) where m_U is the adsorbed mass of uranium, C_0 is the original concentration of the solution, C_t is the uranium concentration detected at time t during adsorption and V is the total volume of the solution. The pH values of the solutions were adjusted to 3.0, 4.0, 5.0, 6.0, 7.0 and 8.0 using NaOH and HNO3 solutions. Adsorption kinetics and isotherms The adsorption data were analysed with the Weber and Morris intraparticle diffusion model: $$q_t = k_{\mathrm{id}} t^{0.5}$$ (2) where q_t is the sorption capacity at time t (mg g−1) and k_id is the rate constant of intraparticle diffusion. The adsorption results were also analysed using the pseudo-second-order kinetics model: $$\frac{t}{q_t} = \frac{1}{k_2 q_{\mathrm{e}}^2} + \frac{t}{q_{\mathrm{e}}}$$ (3) where q_e is the equilibrium adsorption capacity (mg g−1), t is the time of adsorption (min) and k_2 is the adsorption rate constant (g mg−1 min−1). The Langmuir equation shown below was also used to evaluate the uranium adsorption capacity of the membranes. The uranium uptake capacity of the hierarchical porous membrane was analysed with the Langmuir model: $$\frac{C_{\mathrm{e}}}{q_{\mathrm{e}}} = \frac{1}{K_{\mathrm{L}} q_{\mathrm{m}}} + \frac{C_{\mathrm{e}}}{q_{\mathrm{m}}}$$ (4) where C_e (ppm) is the concentration of uranium at equilibrium, q_m (mg g−1) is the theoretical maximum adsorption capacity and K_L (l mg−1) is the Langmuir constant. Determination of reusability and binding selectivity Five consecutive adsorption/desorption cycles were performed to study the reusability and selectivity of the hierarchical porous membrane in simulated seawater. Nine metal ions were added to natural seawater (from the Yellow Sea, Tsingtao, China) to prepare the solution. The concentrations of U, V, Co, Cu, Zn, Pb and Cd were 100 times those in natural seawater. The concentrations of Fe and Ba were about 10 times higher than those in natural seawater (Supplementary Table 1). The pH value was adjusted to 8.0 ± 0.3 using NaOH. Subsequently, 5 mg of the hierarchical porous membrane was placed in 1 l of the above solution for 24 h at 25 °C. The adsorption capacities for the different ions were determined by ICP-MS.
The ion-loaded membrane was then regenerated by immersion in an eluent (20 ml of 0.5 M HCl solution) with stirring at 25 °C for 1 h. After elution, the membrane was regenerated in an alkaline solution (20 ml of 5 mM NaOH) for 15 min and then used for the next cycle. The elution efficiency was calculated from the concentration of ions in the eluent determined by ICP-MS. Uranium adsorption in natural seawater The above-mentioned natural seawater was used to estimate the uranium extraction capacity. The pH of the natural seawater was 8.09. The ambient temperature was 27 °C. The seawater was filtered through filter membranes (0.2 μm) to remove impurities and microorganisms before the experiments. Then, 10 mg of the hierarchical porous membrane was placed between two pieces of sponge in a continuous flow-through system in which 100 l of seawater circulated through a dialysis tube for four weeks. The flow rate of the seawater was controlled at about 2 l min−1. The concentrations of uranium and other competing metal ions in the natural seawater were measured by ICP-MS. The capacity was calculated using: $$q_t = \frac{(C_0 - C_t) \times V}{m}$$ (5) where q_t (mg g−1) is the capacity as a function of contact time, C_0 (ppb) is the initial uranium concentration in natural seawater (3.32 ppb), C_t (ppb) is the uranium concentration at time t, V (l) is the volume of seawater and m (g) is the weight of adsorbent. The adsorption capacity was also verified by digesting the uranium-loaded membrane in 10 ml of heated concentrated nitric acid. The solution was diluted 1,000-fold using deionized water for analysis. The capacity was calculated using the following equation: $$q_t = \frac{C \times V_{\mathrm{d}}}{m}$$ (6) where C (ppm) is the uranium concentration dissolved in the concentrated nitric acid, V_d (l) is the volume of the acid and m (g) is the mass of the membrane. Numerical simulation Numerical simulation was performed using a commercial finite-element software package, COMSOL Multiphysics (version 4.2). The Poisson–Nernst–Planck model is composed of a Poisson equation determining the electrostatic potential induced by moving ions and fixed charges, and a set of convection–diffusion equations representing the ion migration caused by the concentration gradient and electric field in the electrolyte solution, given as equations (7)–(9): $$\nabla^2 \varphi = -\frac{F}{\varepsilon} \sum_i z_i c_i$$ (7) $$j_i = -D_i \left( \nabla c_i + \frac{z_i F c_i}{RT} \nabla \varphi \right)$$ (8) $$\nabla \cdot j_i = 0$$ (9) where φ and ε are the electrical potential and dielectric constant of the solution, c_i is the ion concentration, z_i is the valence number, j_i is the ionic flux, D_i is the diffusion coefficient, R is the universal gas constant, T is the absolute temperature and F is the Faraday constant. The simulated model was set as a rectangular channel with a length of 6,000 nm and a width of 500 nm. To reduce the mass transfer resistances at the entrance and exit of the channel, two electrolyte reservoirs were included. The ion flux at the boundaries had zero normal components: $$n \cdot j_i = 0$$ (10) where n is the unit normal vector to the wall surface.
The potential φ on the channel walls was calculated as: $$n \cdot \nabla \varphi = -\frac{\sigma}{\varepsilon}$$ (11) where σ is the surface charge density, which can be calculated through equations (12) and (13): $$\sigma = \frac{\varepsilon \varepsilon_0 \xi}{\lambda_{\mathrm{D}}}$$ (12) $$\lambda_{\mathrm{D}} = \sqrt{\frac{\varepsilon \varepsilon_0 R T}{2 n_{\mathrm{bulk}} Z^2 F^2}}$$ (13) where λ_D is the Debye length, ξ is the zeta potential of the membrane, ε is the permittivity of water, ε_0 is the permittivity of a vacuum, n_bulk is the solution concentration and Z is the valence number. The calculated values of σ at pH 4.0, 6.0 and 8.0 are 1.23 × 10^−3 C m−2, −1.96 × 10^−4 C m−2 and −1.48 × 10^−3 C m−2, respectively. MD simulations MD simulations were performed on three AO-PIM-1 polymer models: the neutral, anion and cation types. A single polymer chain included ten monomer units, and the terminal groups were capped with H atoms as in previous work 46. Amorphous polymer models were constructed with ten polymer chains per simulation box. The initial simulation model was placed in a cubic box (10.0 nm per side) with an initial density of about 0.0874 g cm−3, and the equilibration simulation then followed the 21-step compression/relaxation scheme of Larsen et al. 47. Finally, the AO-PIM-1 polymer obtained was solvated in a solution box including 21,276 water molecules and 13 uranyl ions. Na+ or Cl− ions were added to keep the solution neutral during the simulations. In this work, the uranyl concentration was set at 0.03 mol l−1 to reduce the simulation cost. The initial models were built using the Packmol package 48. An all-atom OPLS-AA force field was then used with the GROMACS 2018.1 package in all the MD simulations 49,50,51,52,53,54,55. The polymer chain topology file was generated with the aid of GMXTOP 56. The TIP3P water model was used. The partial charges for the neutral, anion and cation types of AO-PIM-1 monomers were restrained electrostatic potential charges, obtained from quantum mechanical calculations in the gas phase at the HF/6–31G* level of theory using Gaussian 09 and fitted with the antechamber program in AmberTools 57,58. The detailed simulation steps were as follows: (1) the initial model was energy-minimized to remove the system strain; (2) 200 ps NVT dynamics simulations were carried out, with the initial velocities generated from a Maxwellian distribution at 300 K; (3) 2 ns equilibrium NPT dynamics simulations were carried out; and (4) independent sampling analyses of 10 ns NPT MD simulations for each case were performed three times to obtain good statistics. All the covalent bond lengths were constrained with the LINCS algorithm, and a time step of 2.0 fs was used in all the simulations. The temperature was controlled using a velocity-rescaling thermostat with a relaxation time of 0.1 ps. Berendsen pressure scaling was performed isotropically by coupling to a pressure bath of 10^5 Pa (with a time constant of 1.0 ps). The particle mesh Ewald summation method was used to calculate the electrostatic potential under periodic boundary conditions in all three spatial dimensions. Data availability The data that support the findings of this study are available in the paper and its Supplementary Information files.
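Equations (12) and (13) above can be evaluated numerically; here is a minimal sketch. The zeta potential and electrolyte concentration below are assumed illustrative values, not the measured ones behind the σ values quoted in this section.

```python
# Numeric sketch of equations (12) and (13): surface charge density from a
# zeta potential and the Debye length. The zeta value and bulk concentration
# are assumptions for illustration, not the measured data from Fig. 4a.
import math

eps_r = 78.5          # relative permittivity of water (approx., room temperature)
eps0 = 8.854e-12      # vacuum permittivity, F/m
R, T, F = 8.314, 298.15, 96485.0
Z = 1                 # valence used in the Debye-length expression
n_bulk = 10.0         # bulk electrolyte concentration, mol/m^3 (0.01 M, assumed)
zeta = -0.020         # assumed zeta potential, V (-20 mV)

lambda_D = math.sqrt(eps_r * eps0 * R * T / (2 * n_bulk * Z**2 * F**2))
sigma = eps_r * eps0 * zeta / lambda_D
print(f"Debye length: {lambda_D * 1e9:.2f} nm")          # ~3 nm at 0.01 M
print(f"surface charge density: {sigma:.2e} C m^-2")     # order 10^-3, as in the text
```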
| Inspired by biological fractals, a team of researchers affiliated with multiple institutions in China has developed a new pore structure for a membrane used to separate uranium from seawater. In their paper published in the journal Nature Sustainability, the group describes their pore structure and how well it worked when tested. Alexander Wiechert and Sotira Yiacoumi of the Georgia Institute of Technology and Costas Tsouris of Oak Ridge National Laboratory have published a News & Views piece on the work done by the team in China and the work that is left to do before the membrane can be commercialized. In the 1950s, scientists realized that the world's oceans held the potential for supplying the uranium needed to produce atomic weapons and electrical power. But it took another 30 years before a viable means of extracting uranium was developed. A team of researchers in Japan developed an amidoxime-grafted adsorbent that appeared able to do the job, but only in a limited way. In this new effort, the researchers have expanded on the work by the Japanese team to create a membrane for use in filtering uranium from seawater. The membrane created by the team in China is based on a hierarchical pore structure that was modeled on fractals found in nature. Seawater containing uranium enters the outer portion of the membrane through macropores. The molecules in the water then migrate into a branching matrix of smaller channels. From there, they are carried to a microporous inner portion of the membrane where the uranium is absorbed by an amidoxime-grafted adsorbent. Testing showed it capable of extracting 9 mg g−1 from a sample of seawater over four weeks. Notably, the membrane also extracts other molecules from a seawater sample, such as vanadium, iron, zinc and copper. Thus, before it could be used in a real-world context, a means for separating out the recovered materials is required. Also, as Wiechert, Yiacoumi and Tsouris note, the design of the membrane might need tweaking to allow for degradation of the amidoxime. | 10.1038/s41893-021-00792-6 |
Biology | NASA is studying fungi to keep space travelers safe on new worlds | A. Blachowicz et al, Human presence impacts fungal diversity of inflated lunar/Mars analog habitat, Microbiome (2017). DOI: 10.1186/s40168-017-0280-8 | http://dx.doi.org/10.1186/s40168-017-0280-8 | https://phys.org/news/2017-07-nasa-fungi-space-safe-worlds.html | Abstract Background An inflatable lunar/Mars analog habitat (ILMAH), a simulated closed system isolated by HEPA filtration, mimics International Space Station (ISS) conditions and future human habitation on other planets except for the exchange of air between outdoor and indoor environments. The ILMAH was primarily commissioned to measure the physiological, psychological, and immunological characteristics of humans living in isolation, but it was also available for other studies such as examining its microbiological aspects. Characterizing and understanding possible changes and succession of fungal species is of high importance since fungi are not only hazardous to inhabitants but also deteriorate the habitats. Observing mycobiome changes in the presence of humans will enable the development of appropriate countermeasures with reference to crew health in a future closed habitat. Results Succession of fungi was characterized utilizing both traditional and state-of-the-art molecular techniques during the 30-day human occupation of the ILMAH. Surface samples were collected at various time points and locations to observe both the total and viable fungal populations of common environmental and opportunistic pathogenic species. To estimate the cultivable fungal population, the potato dextrose agar plate count method was utilized. Internal transcribed spacer region-based iTag Illumina sequencing was employed to measure the community structure and fluctuation of the mycobiome over time in various locations. Treatment of samples with propidium monoazide (PMA; a DNA intercalating dye for selective detection of viable microbial populations) had a significant effect on the microbial diversity compared to non-PMA-treated samples. Statistical analysis confirmed that the viable fungal community structure changed (increase in diversity and decrease in fungal burden) over the occupation time. Samples collected at day 20 showed fungal profiles distinct from samples collected at any other time point (before or after). Viable fungal families like Davidiellaceae, Teratosphaeriaceae, Pleosporales, and Pleosporaceae were shown to increase during the occupation time. Conclusions The results of this study revealed that the overall fungal diversity in the closed habitat changed during human presence; therefore, it is crucial to properly maintain a closed habitat to preserve it from deteriorating and keep it safe for its inhabitants. Statistically significant differences in community profiles were observed, especially for the mycobiome of samples collected at day 20. At the genus level, Epicoccum, Alternaria, Pleosporales, Davidiella, and Cryptococcus showed increased abundance over the occupation time. Background Planning future space explorations, involving potential human missions to Mars, would require constructing a safe closed habitat [1, 2]. An inflatable lunar/Mars analog habitat (ILMAH) is a unique, simulated closed environment (isolated by HEPA filtration) that can be utilized to overcome challenges associated with both technical and scientific issues [3].
Because the ILMAH mimics International Space Station (ISS) conditions and is treated as a prototype habitat for future space explorations, microbiological characteristics of such a closed environment is of high interest to the National Aeronautics and Space Administration (NASA). The environmentally controlled ILMAH is an easily accessible system that enables samples to be collected and analyzed at multiple times at relatively low cost. Understanding the microbiome of a closed system and its association with human inhabitation will help to assess the correlation between human health and microbiome of the habitat as well as the influence of microorganism on the habitat deterioration [ 4 , 5 , 6 ]. The highly specialized structure of the simulated ILMAH keeps its inhabitants in isolation from the outside environment. Except for the exchange of the air between outdoor and indoor environments via an advanced environmental control system, the ILMAH mimics the ISS and other future habitats of human explorers on the other planets [ 3 ]. This unique feature of the ILMAH allows observing the changes in the microbiome during human occupation. The bacteriome of the ILMAH was recently reported [ 7 ], as in the case of most of the studies reporting on bacterial microbiomes [ 8 ]. The molecular fungal diversity of Japanese Experimental Module—Kibo, on the ISS, revealed abundance of fungi associated with astronauts, but succession of viable fungal population in their habitat was not addressed [ 9 ]. The skin fungal microbiota of 10 Japanese astronauts showed temporal changes before, during, and after their stay on the ISS. The molecular fungal diversity associated with various body parts was reduced during the spaceflight when compared to pre-flight data. However, the ratio of Malassezia genetic signatures to all fungal gene copies (including dead fungal cells) increased during their stay at the ISS—but the viability of these fungi was not confirmed [ 10 ]. This is the first report that thoroughly characterizes the mycobiome of a simulated habitat meant for the future human habitats on other planets. Utilization of next generation sequencing (NGS) techniques enables more in-depth analysis of indoor microbiomes [ 11 ]. Many studies focus on the bacterial microbiome of intensive care units [ 8 , 12 , 13 , 14 ], pharmaceutical clean rooms [ 15 , 16 , 17 ], or tissue banks [ 18 ] since their microbial composition has an impact on human health and life. Nosocomial infections acquired in hospitals and other health care facilities remain the sixth leading cause of death in the hospitals in USA [ 19 , 20 ]. Nosocomial infections are mostly caused by various fungal species that belong to the Candida genus and filamentous fungus Aspergillus fumigatus [ 21 , 22 , 23 ]. Therefore, it remains important to screen future closed habitats for the presence of opportunistic pathogens that can affect health of immunocompromised astronauts. So far, majority of the indoor microbiome studies have focused on the bacterial microbiome without analyzing the mycobiome. In addition, those few studies that characterized fungi of indoor environments focused on culture-based populations [ 24 , 25 , 26 , 27 ]. In those cases, where new molecular techniques were implemented [ 28 , 29 , 30 , 31 ] viable fungi were not differentiated from the total population (viable and dead) [ 32 ]. 
The internal transcribed spacer region-based iTag Illumina sequencing coupled with the propidium monoazide (PMA) treatment used in this study can determine the viable mycobiome. Fungi are extremophiles that can survive harsh conditions such as low nutrient [ 33 ], desiccation [ 34 ], high/low temperatures [ 35 , 36 ], acidic/alkaline [ 37 , 38 ], radiation [ 39 , 40 ], and other environments [ 41 , 42 ]. Fungal species not only have been isolated from all known environments on Earth, including barren lands like deserts, caves, or nuclear accident sites, but also are known to be difficult to eradicate from other types of environments including indoor and closed spaces [ 8 , 36 , 42 , 43 ]. Characterizing and understanding possible changes to, and succession of, fungal species in the ILMAH is of high importance since some of the fungi are extremophiles that are not only potentially hazardous to inhabitants but also can deteriorate the habitat itself [ 25 , 44 , 45 ]. It was previously reported that people spending a significant amount of time indoors might suffer from so called “sick building syndrome” (SBS). SBS is characterized by health- and comfort-related syndromes (e.g., headache, tiredness) that ease after leaving a building. Fatigue and discomfort might be caused not only by physical characteristics of the closed system (humidity, temperature, lighting) but also by biological contamination from both bacteria and fungi [ 46 , 47 ]. Fungal pathogens’ presence in indoor areas might pose health hazards for people exhibiting immunodeficiency [ 48 ]. Pathogenic fungi produce a range of secondary metabolites (SMs) that influence their virulence (e.g., melanins, siderophores, or species-specific toxins), induce allergies, and cause diseases (e.g., aspergillosis, candidiasis, or cryptococcosis) [ 48 , 49 ]. Prolonged stays in closed habitats (e.g., ILMAH, ISS, etc.) might be stressful for inhabitants and lead to a decrease in immune response; therefore, assessing the presence of any opportunistic pathogens is vital [ 50 ]. Previous reports on the mycobiome in NASA clean rooms and on the ISS documented NGS results from samples collected from various locations, but none of the studies focused on the analysis of the microbial succession of systematically collected samples [ 28 , 32 ]. The bacterial and archaeal microbiome succession of the ILMAH using NGS has been carried out [ 7 ]. This is the first report characterizing the succession of fungi in a simulated closed system meant for human habitation on other planets utilizing both traditional and state-of-the-art molecular techniques. In addition, attempts were made during this study to elucidate the temporal and spatial distribution of the fungal population and diversity in a closed human habitat. Methods The ILMAH habitat The physical characteristics of the ILMAH, along with detailed sampling procedures and periodicity, were previously described [ 7 ]. In brief, the ILMAH is located in Grand Forks, ND (47.9222 N, 97.0734 W) and has dimensions of 12 m by 10 m by 2.5 m. It contains a sleeping compartment, kitchen, toilet, and laboratory area. The ILMAH has its own ventilation system that pressurizes the habitat, provides breathing air for the crew, and removes unwanted material from the air stream. The ILMAH uses a blower that takes ambient air for pressurization and breathing air provision. It provides a positive pressure at 1 PSID (pressure differential) above normal atmosphere inside the habitat. 
The ambient air is pressurized by an industrial fan and sent to a standard HEPA flat panel particulate filter (Bryant GAPBBCAR2025, Honeywell, Morris Plains, NJ). This filter is replaced before each mission lasting up to 30 days. Maintenance personnel changes the filter from the outside. Since the humidity inside the habitat is not removed, water vapor during the months when the analog missions takes place range from 35 to 55%. Three student crews inhabited the ILMAH for 30 days. During that period of time, there was no exchange between the interior and exterior environment except pumping the air filtered via ILMAH’s advanced controlling system [ 3 ]. In addition, students did not leave the ILMAH at any time for 30 days, and nothing came in or out, including food, people, water, or any supplies, except filtered air. Surface samples were collected consecutively during four sampling events (day 0 [T 0 ], day 13 [T 13 ], day 20 [T 20 ], and day 30 [T 30 ]) from eight designated sampling locations. Prior to inhabitation, the ILMAH surfaces were cleaned with 10% bleach and during the experiment period, it was cleaned weekly with antibacterial wipes. Sampling materials and procedure Samples were collected using biological sampling kits (BiSKits, Quicksilver Analytics Inc., Abingdon, MD) previously documented as an efficient sampling device for surfaces [ 51 ]. All the BiSKits were prepared following the procedure described elsewhere [ 7 , 51 ]. Briefly, the sterile phosphate buffer saline (PBS) provided by the distributor was discarded from the bottle and replaced with 15 mL of sterile UltraPure DNA free PBS (MoBio Laboratories Inc, Carlsbad, CA). Each BiSKit was rinsed once with PBS that was later collected into a sterile 15-mL falcon tube and kept at 4 °C as a background measure for biological materials associated with macrofoam (sampling device control). This precautionary step was required to overcome, if encountered, microbial contamination associated with sampling devices and other processing reagents. BiSKits prepared as described above were then packed into sterile zip lock bags and shipped at 4 °C to the University of North Dakota where they were kept at 4 °C till the experiment was carried out (within 2 to 3 days). The ILMAH architecture was previously described [ 7 ] (see Additional file 1 : Figure SF1, Fig. 1 ). Surface samples were collected from eight locations at four consecutive samplings: day 0 (prior to inhabitation), day 13, day 20, and day 30 during the inhabitation and right before ending the experiment. Originally, sampling activities were scheduled at regular intervals days 0, 10, 20, and 30 of human occupation; however, day 10 sampling scheduled on Thursday was delayed to day 13 (Sunday) to avoid risk related to shipping the samples over the weekend. The work schedule of the ILMAH crew members was regulated and is detailed in Additional file 2 . Each location (surface area 1 m 2 ) was sampled with one BiSKit in three directions following the same steps, horizontally from the left to the right, vertically from the bottom to the top, and diagonally from the right bottom corner to the left upper corner. After sampling, the sampled BiSKit device was extracted with sterile PBS and sampling fluids were collected in sterile 50-mL falcon tubes. Each BiSKit was washed and extracted twice with the sterile PBS giving approximately 45 mL of unconcentrated sample. 
Likewise, the BiSKit left open in the sampling area for the time necessary for sampling one location was treated as a field control; the unopened BiSKit was treated as a sampling device (BiSKit) control. Collected samples along with the controls were stored at 4 °C and sent overnight to JPL via cold shipping for further analysis. Sample processing for the microbiological analyses was started within 24 h from the sample collection. Fig. 1 Picture of the closed habitat from outside Full size image Sample processing The extruded liquid samples from the BiSKit sampler (~45 mL per sample) were concentrated using an InnovaPrep Concentrating Pipette (Innova Prep LLC, Drexel, MO) to a final volume of ~4 mL. Appropriate aliquots of concentrated samples were further used for cultivation (200 μL) and molecular analyses (3 mL). a) Cultivable fungal burden and diversity For cultivation assay, samples were serially diluted by 10 and 100 times. One hundred microliters of each dilution was pour plated in duplicates on potato dextrose agar (PDA) and grown at room temperature (~25 °C). Colony-forming units (CFUs) were counted after 7 days of incubation, and cultivable fungal population was calculated per square meter of the sampling area. Simultaneously, up to 5 colonies exhibiting different morphologies were picked and stored as stab cultures in one-tenth semi-solid PDA medium. Cultivable isolates were identified using primers ITS 1F (5′-CTT GGT CAT TTA GAG GAA GTA A-3′) and Tw13 (5′-GGT CCG TGT TTC AAG ACG-3′) that target small and large subunit rRNA gene-coding regions of the small and large ribosomal subunit, including the internal transcribed spacers ITS1 and ITS2 [ 52 , 53 ]. DNA was extracted using freezing (−80 °C) and thawing (+80 °C) cycle of fungal suspension in PBS for 15 min. This process was repeated 3 times. In some cases when DNA was not extracted by the freezing-thawing method, a PowerSoil® DNA Isolation Kit (MoBio) was used according to manufacturer’s instructions. PCR conditions were as follows: 95 °C for 3 min followed by 25 cycles of 95 °C for 30 s, 58 °C for 30 s, 72 °C for 2 min, and final elongation at 72 °C for 10 min. Amplified products were visualized by gel electrophoresis. PCR products were then enzymatically purified by using 40 IU of Exonuclease I ( E. coli 20,000 IU/mL New England BioLabs, Inc. Ipswich, MA) and 8 IU of Antarctic Phosphatase (5,000 IU/mL, New England BioLabs, Inc.) per 20 μL amplification product. Heat reactions were carried out in a thermocycler as follows: 37 °C for 30 min, 80 °C for 15 min. Traditional Sanger sequencing was performed at Macrogen (Rockville, MD). The sequences were merged using DNAStar (Madison, WI), identified using UNITE fungal database [ 54 ], and aligned using ClustalW. A phylogenetic tree was constructed using MEGA6.06-mac applying neighbor-joining method [ 55 ]. Sequences from one representative of each strain and corresponding type strain were used to create the phylogenetic tree. b) DNA extraction The concentrated environmental samples (3 mL) were split into two equal parts. One half of the sample (1.5 mL) was treated with 12.5 μL of 2 mM PMA dye (Biotum, Inc., Hayward, CA), and the other half was left untreated. The final concentration of PMA in each treated sample was 25 μM. After PMA addition, both treated and untreated samples were kept in the dark for 5 min at room temperature and subsequently exposed to light in the PHaST Blue-Photo activation system for tubes (GenIUL, S.L, Terrassa, Spain) for 15 min. 
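As an aside on subsection (a) above, the conversion from plate counts to areal burden is simple arithmetic; a minimal Python sketch is given below (default volumes follow the text where stated; the function name and example numbers are illustrative only):

```python
# Illustrative back-calculation of cultivable fungal burden (CFU/m^2)
# from pour-plate counts, using the volumes described in the text.

def cfu_per_m2(colonies, dilution_factor, plated_volume_ml=0.1,
               concentrate_volume_ml=4.0, sampled_area_m2=1.0):
    """CFU per m^2 of sampled surface.

    colonies              : colonies counted after 7 days on PDA
    dilution_factor       : 1, 10 or 100 for the serial dilutions
    plated_volume_ml      : 100 uL pour-plated per plate
    concentrate_volume_ml : ~4 mL after InnovaPrep concentration
    sampled_area_m2       : 1 m^2 wiped per BiSKit
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * concentrate_volume_ml / sampled_area_m2

# Example: 23 colonies on a 10x-dilution plate -> ~9.2e3 CFU/m^2
print(f"{cfu_per_m2(23, 10):.1e} CFU/m^2")
```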
PMA-treated samples represent viable microorganisms whereas the non-PMA-treated samples represent the total number of viable and dead microorganisms [ 56 ]. After photo activation, each sample was split into two aliquots of 0.75 mL each. One aliquot of each sample was subjected to bead beating for 60 s at 5 m/s on the Fastprep-24 bead-beating instrument (MP Biomedicals, Santa Ana, CA). The solution after bead beating was combined with the non-bead-beaten aliquot (1.5 mL in total) and then used for DNA extraction on the Maxwell-16 MDx automated system following the manufacturer’s instructions (Promega, Madison, WI). Purified DNA was eluted into a final volume of 50 μL of UltraPure molecular-grade water and divided into 4 aliquots that were stored at −80 °C. c) Molecular fungal community analysis using Illumina sequencing To determine fungal populations, a two-step amplification process was applied prior to MiSeq Illumina sequencing at Research and Testing Laboratory (RTL, Lubbock, TX). The forward primer was constructed with the Illumina i5 sequencing primer (5′-TCG TCG GCA GCG TCA GAT GTG TAT AAG AGA CAG-3′) and the ITS1F primer (5′-CTT GGT CAT TTA GAG GAA GTA A-3′) [ 57 ]. The reverse primer was constructed with the Illumina i7 sequencing primer (5′-GTC TCG TGG GCT CGG AGA TGT GTA TAA GAG ACA G-3′) and the ITS2aR primer (5′-GCT GCG TTC TTC ATC GAT GC-3′) [ 58 ]. Amplifications were performed in 25 μL reactions with Qiagen HotStar Taq master mix (Qiagen Inc, Valencia, CA), 1 μL of each 5 μM primer, and 1 μL of template. Reactions were performed on ABI Veriti thermocyclers (Applied Biosystems, Carlsbad, CA) under the following thermal profile: 95 °C for 5 min, then 25 cycles of 94 °C for 30 s, 54 °C for 40 s, 72 °C for 1 min, followed by one cycle of 72 °C for 10 min and a 4 °C hold. Products from the first-stage amplification were added to a second PCR based on qualitatively determined concentrations. Primers for the second PCR were designed based on the Illumina Nextera PCR primers as follows: Forward – AAT GAT ACG GCG ACC ACC GAG ATC TAC AC [i5index] TCG TCG GCA GCG TC and Reverse – CAA GCA GAA GAC GGC ATA CGA GAT [i7index] GTC TCG TGG GCT CGG. The second-stage amplification was run the same as the first stage except for 10 cycles. Amplification products were visualized with eGels (Life Technologies, Grand Island, NY). Products were then pooled in equimolar amounts, and each pool was size-selected in two rounds using Agencourt AMPure XP (Beckman Coulter, Indianapolis, IN) at a 0.7 ratio for both rounds. Size-selected pools were then quantified using the Qubit 2.0 fluorometer (Life Technologies) and loaded on an Illumina MiSeq (Illumina, Inc., San Diego, CA) 2 × 300 flow cell at 10 pM. d) Bioinformatic and statistical analysis of fungal cultivable counts and Illumina sequences To assess the difference between fungal abundances in cultivable sample categories (based on time and location), the following univariate statistical analyses were carried out. The normal distribution of the populations was tested using the Shapiro-Wilk normality test, and as most were not normally distributed ( p value <0.05), we used a Kruskal-Wallis test coupled to a Dunn’s test to investigate differences in the tested populations. Resulting p values were corrected using the Benjamini-Hochberg correction [ 59 ]. A total of 8,426,774 raw paired reads were processed with mothur v.1.36.1 [ 60 ].
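The univariate pipeline just described (Shapiro-Wilk, then Kruskal-Wallis with Dunn's post hoc test and Benjamini-Hochberg correction) can be sketched in Python as follows; the CFU table is a placeholder, and the scikit-posthocs package is assumed to be available for Dunn's test:

```python
# Sketch of the univariate statistics: normality check, rank-based
# omnibus test, then pairwise Dunn's test with FDR correction.

import pandas as pd
from scipy.stats import shapiro, kruskal
import scikit_posthocs as sp  # assumed available for Dunn's test

df = pd.DataFrame({
    "cfu":  [9200, 310, 4800, 150, 8100, 260, 90, 7300],  # placeholder counts
    "time": ["T0", "T13", "T0", "T20", "T0", "T30", "T20", "T0"],
})
groups = [g["cfu"].values for _, g in df.groupby("time")]

# Shapiro-Wilk on groups large enough to test
normal = all(shapiro(g).pvalue >= 0.05 for g in groups if len(g) >= 3)
print("all testable groups normal:", normal)

h_stat, p = kruskal(*groups)
if p < 0.05:
    # Pairwise Dunn's test with Benjamini-Hochberg correction
    pairwise = sp.posthoc_dunn(df, val_col="cfu", group_col="time",
                               p_adjust="fdr_bh")
    print(pairwise)
```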
The 250 bp paired reads were merged by aligning the reads and correcting discordant base calls by requiring one of the base calls to have a Phred quality score at least 6 points higher than the other. Sequences shorter than 200 bp, having more than 8 homopolymers or containing ambiguous base pairs were excluded from the dataset. Subsequently, reads were pre-clustered [ 61 ] by combining low-abundant sequences that differed by 3 or fewer bases from a more abundant sequence. Chimeric sequences in each sample were identified by UCHIME [ 62 ] and also excluded from the dataset. Reads were classified using the ribosomal database project (RDP) classifier II [ 63 ] implementation of mothur and the UNITE fungal rDNA database [ 54 ]. Non-fungal sequences were subsequently removed. ITSx was used to exclude non-ITS sequences from the dataset prior to clustering unaligned sequences into operational taxonomic units (OTUs) with a distance of 0.03 [ 64 ] using mothur v.1.36.1. The nearest neighbor (single-linkage) algorithm was used for this. OTUs were classified by using the consensus taxonomy of all sequences assigned [ 60 ]. An in-house R-script employing the libraries vegan, ape, gplots, mgcv, and GUniFrac was used to compare the fungal Illumina data (Additional file 3 ) [ 65 , 66 ]. Each dataset consisting of the OTU abundances per sample was rarefied 1000 times to the lowest number of reads, and an average Bray-Curtis distance was calculated. This distance was then utilized to calculate nonmetric multidimensional scaling (NMDS) or principal coordinate analysis (PCoA), PERMANOVA (Adonis test), and the multi-response permutation procedure (MRPP). In addition, the OTU abundances per sample of each dataset were sum-normalized and subjected to either an analysis of variance (ANOVA) or a Spearman rank correlation to identify statistically significantly changing parameters and to generate a heat map ( p value < 0.05). The change of diversity was measured via the Shannon-Wiener diversity index. OTUs that were unclassified at the phylum level were removed. Heat maps are presented at the family level. Results Cultivable fungal burden and diversity The culture-based fungal abundance of the ILMAH was estimated for each sampling location, with colony-forming unit (CFU) values ranging from below the detection limit (BDL) to 10^4 CFU/m^2 (Additional file 4 : Table ST1a). The highest abundance of the cultivable fungi was observed during the first sampling (T 0 ; Fig. 2A ), followed by a ~1 log decrease in CFU during consecutive sampling events (T 13 , T 20 , T 30 ) for each living compartment. When statistically treated, the change in fungal abundance between different time points was significant (T 0 –T 20 p = 0.008 and T 0 –T 30 p = 0.0125) (Additional file 5 : Table ST2a). The cultivable fungal population for the lab area was higher than observed in other compartments (Fig. 2B , Additional file 4 : Table ST1b). A statistically significant difference was observed between the lab area and the bedroom ( p = 0.0021) (Additional file 5 : Table ST2b). Fig. 2 Statistical analysis of cultivable fungal diversity detected through the 30-day habitation period at all the locations based on colony-forming unit (CFU) counts. To assess the difference between fungal abundances in cultivable sample categories (based on time— A and location— B ), we applied the following univariate statistics.
The normal distribution of the populations was tested using the Shapiro-Wilk normality test, and as most of them were not normally distributed ( p value <0.05), we used a Kruskal-Wallis test coupled to a Dunn’s test to investigate differences in the tested populations. Resulting p values were corrected using the Benjamini-Hochberg correction. A CFU counts before crew occupation T 0 — a were statistically different from CFU counts at T 20 and T 30 — b , but no statistical difference was observed between T 0 and T 13 counts— ab . Additionally, no statistical differences were observed between any other time points. B CFU counts in the bedroom— a differed significantly from the CFU counts in the lab— b , but no statistical differences were observed between the bedroom and the kitchen or toilet— ab . No statistical differences were observed between any other locations. One hundred seventeen cultivable isolates were collected and identified targeting the ITS region. Screening sequences against the UNITE fungal database enabled identification of 32 species (Fig. 3 ). Among the cultivable isolates, only five strains had similarity lower than 97%, which did not allow identification to the species level. All of the identified species but one— Hydnopolyporus fimbriatus —belong to the Ascomycota division (Fig. 3 ). The most abundant cultivable species were Cladosporium cladosporioides (16), Epicoccum nigrum (15), Aspergillus tubingensis (13), Aspergillus fumigatus (8), Alternaria tenuissima (7), and Penicillium brevicompactum (7). P. brevicompactum was the only one of the most commonly identified species that was not present at multiple time points, appearing only at the T 13 sampling (Fig. 3 ). The abundance of C. cladosporioides colonies increased during the ILMAH occupation whereas the abundance of E. nigrum , A. tubingensis , and A. tenuissima decreased. Another commonly isolated species, A. fumigatus , was not isolated during the last sampling event. Neither field nor sampling device controls showed cultivable isolates. Fig. 3 Cultivable fungal diversity detected through the 30-day habitation period at all the locations based on internal transcribed spacer (ITS) sequences. The phylogenetic tree was constructed using the neighbor-joining method (bootstrap 1000). In total, 117 isolates were collected, 113 of which were successfully sequenced (4 strains either did not show growth or did not respond to the sequencing methods attempted). The numbering of the isolates is explained as follows: F = fungi, the first number (0–4) is the sample collection day (0 = T 0 , 2 = T 13 , 3 = T 20 , 4 = T 30 ), the second number (1–8) is the sampling location, and the third number (1–5) is the replicate number of the isolate. For example, F23-02 denotes the second fungal isolate collected at T 20 from location 3. The frequency of isolates is given as a frequency bar after the name of the fungus. Colors of the bars correspond to the collection time (single or multiple). Viable and total mycobiome (iTag Illumina-based analysis) The fungal richness of PMA-treated (viable) samples decreased when compared to untreated samples (dead and alive). In PMA-untreated samples, 98 families were detected, whereas OTUs belonging to 41 of these families were not viable. Moreover, both PMA-treated and PMA-untreated samples differed significantly in community relationships (NMDS analysis in Fig.
4 , Adonis p value = 0.006 and MRPP, significance of delta = 0.002; A = 0.0419) and their Shannon diversity index indicated a significant reduction (paired T test p = 0.0000012) in viable fungal diversity. Observed differences in the PMA-treated (Fig. 4a, c ) and PMA-untreated samples (Fig. 4b, d ) indicate that untreated samples are overestimating the observed fungi. This observation was further confirmed when alpha diversity of cultivable, viable, and total mycobiome was plotted over time (Fig. 5 ). In this research communication, only viable mycobiome was considered and discussed in detail. Illumina-based reads of sampling device and field controls showed negligible signal from DNA contamination and hence not included in the following analysis. Fig. 4 NMDS ordinations based on Bray-Curtis distances between all samples. a Ordination displaying the distance between non-PMA-treated samples taken at the different time points. b The distance between PMA-treated samples taken at the different time points. c The distance between non-PMA-treated samples taken at the different locations. d NMDS ordination displaying the distance between PMA-treated samples taken at the different locations. A “ P ” after the respective variable indicates that these are the samples treated with PMA. Plots a , c , and b , d represent the same data but differ in colors to underscore the focus on distribution over time and location, respectively Full size image Fig. 5 Linear representation of alpha diversity averages change over time for cultivable, viable and total mycobiome Full size image Viable fungal community structure The most abundant phylum that dominated the viable mycobiome of the ILMAH was Ascomycota (90% of all characterized OTUs) followed by Basidiomycota and unclassified fungi (~4 and ~5%, respectively). Incidence of the fungal OTUs at the family level for various time points and locations are presented in Table 1 and Table 2 , respectively. The dominant Pleosporaceae (75% of all OTUs) along with unclassified fungi (5%) and Davidiellaceae (4%) constituted 84% of all OTUs present in PMA-treated samples (Tables 1 and 2 and Fig. 6 ). A closer look into the genus level of Pleosporaceae family indicated the domination of Epicoccum (92.95% of all OTUs) and Alternaria (6.8%) sequences. The dominant fungal OTUs in the ILMAH biome correspond with the most frequently isolated cultivable fungi (Figs. 3 and 6 ). Table 1 Incidence of the fungal OTUs at the family level for various time points Full size table Table 2 Incidence of the fungal OTUs at the family level for locations Full size table Fig. 6 Dominant fungal population and succession patterns observed in 30-day occupation period of the ILMAH system. The OTUs presented in the bar graph are the most abundant. T 20 surface samples show different fungal profile when compared to other time points. T 30 samples show increase in fungal diversity when compared to other time point Full size image Differences in the fungal community between samples were analyzed by multivariate statistics using ordination analyses and Monte Carlo-based permutation tests. Viable fungal communities showed similar mycobiome profiles throughout the sampling period except from the samples collected at T 20 , which were distinct (Fig. 4 ; Adonis p value = 0.001 and MRPP, significance of delta = 0.001 and A = 0.1168, Additional file 6 : Figure SF2 MRPP: chance corrected within-group agreement A: 0.1626, significance of delta 0.001; Adonis: delta = 0.019). 
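A rough Python analogue of this ordination workflow (the study itself used an in-house R script with vegan and related libraries) might look as follows; the OTU table and grouping are placeholders, and scikit-bio and scikit-learn are assumed to be available:

```python
# Bray-Curtis distances, NMDS ordination, and PERMANOVA ('Adonis')
# on a placeholder OTU table (rows = samples, columns = OTUs).

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from skbio.stats.distance import DistanceMatrix, permanova

rng = np.random.default_rng(0)
otu_counts = rng.poisson(5, size=(16, 40))        # placeholder OTU table
groups = ["PMA"] * 8 + ["untreated"] * 8          # placeholder grouping

# Bray-Curtis distance between samples
bc = squareform(pdist(otu_counts, metric="braycurtis"))

# Non-metric multidimensional scaling on the precomputed distances
coords = MDS(n_components=2, metric=False, dissimilarity="precomputed",
             random_state=0).fit_transform(bc)

# PERMANOVA on the same distance matrix
ids = [f"s{i}" for i in range(16)]
print(permanova(DistanceMatrix(bc, ids), grouping=groups, permutations=999))
```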
Interestingly, community profiles were similar between samples collected before crew occupation (T 0 ) and at T 13 and T 30 (Additional file 7 : Figure SF3; chance corrected within-group agreement A: 0.01802, significance of delta: 0.219, Adonis 0.228, Additional file 8 : Table ST3). Observed differences in multivariate statistics led to the investigation of mycobiome changes on a single-OTU level. First, an ANOVA test carried out on samples that did not cluster with the rest (Fig. 4 ) showed the presence of representatives of the Pleosporaceae , Pleosporales , Saccharomycetales , Tremellales , and Trichocomaceae families. Second, throughout the inhabitation, the level of Pleosporaceae showed significant fluctuation—from 96 to 47% and 70% at T 0 , T 20 , and T 30 , respectively (Fig. 7 ). Additionally, while the Pleosporaceae presence decreased to 47%, a significant increase in the levels of Davidiellaceae (22%), Dothioraceae (11%), Saccharomycetales (8%), and Trichocomaceae (8%) was observed when compared to other time points (Fig. 6 , Additional file 9 : Table ST4). Interestingly, the presence of less abundant families observed before crew occupancy (T 0 ) increased at T 30 , i.e., Davidiellaceae (4.45%), Hypocreaceae (1.26%), Phaeosphaeriaceae (3.54%), Teratosphaeriaceae (5.17%), and Sporidiobolales (2.81%), and members of new fungal families were observed, i.e., Capnodiales (0.5%), Chaetomiaceae (3.81%), and Peniophoraceae (1.26%) (Table 1 and Fig. 6 , Additional file 9 : Table ST4). Fig. 7 Box plots of viable dominant fungal families and their succession patterns observed in the 30-day occupation period of the ILMAH system. The OTU counts presented in the boxplots are the most abundant. Each time point is represented in a different color: T 0 — green , T 13 — orange , T 20 — red , T 30 — purple. Spearman rank correlation was applied to each OTU’s abundance pattern and sampling event to determine significant correlations of viable fungal families with the various time points. The results, presented as a heat map, contain 13 OTUs that showed a significant correlation with a p value of 0.01 and 32 OTUs with a p value of 0.05. All OTUs but Davidiellaceae , Teratosphaeriaceae , Tremellales_3 , Pleosporales , and Pleosporaceae were more abundant during the sampling before the crew inhabited the ILMAH (Fig. 8 ). Fig. 8 Heat map of the taxa that showed a significant correlation ( p value 0.01) with the factor time in the PMA-treated sample set. The color blue indicates a low abundance of the single OTU in the respective sample, and orange indicates a high abundance of the single OTU in the respective sample. Each column represents one sample collected throughout the study. The numbering pattern is explained as follows: 30 means the 30-day study, the first number (0–4) is the sample collection day (0 = T 0 , 2 = T 13 , 3 = T 20 , 4 = T 30 ), the second number (1–8) is the sampling location, and P stands for PMA-treated samples. For example, 30.06P is a sample collected during the 30-day study at T 0 from location 6. Further investigation of OTU abundances on the genus level revealed changes in OTU counts over the course of time. The numbers of OTUs identified as Epicoccum , Alternaria , Pleosporales , and Cryptococcus fluctuated from high abundance at T 0 to significantly lower counts at T 20 , and then counts increased again at T 30 .
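The per-OTU Spearman screen described above can be sketched as follows; the OTU table is a placeholder, and the layout (8 locations × 4 samplings) merely mirrors the study design:

```python
# Correlate each OTU's abundance with the sampling day and control the
# false discovery rate with Benjamini-Hochberg.

import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
days = np.repeat([0, 13, 20, 30], 8)                # 8 locations x 4 samplings
otu_table = rng.poisson(10, size=(days.size, 50))   # rows = samples, cols = OTUs

pvals = np.array([spearmanr(days, otu_table[:, j]).pvalue
                  for j in range(otu_table.shape[1])])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {otu_table.shape[1]} OTUs pass the 0.05 FDR level")
```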
OTUs identified as Davidiella increased throughout the occupation time, whereas OTU counts for Aspergillus , Aureobasidium , and Candida increased over time (peaking at T 20 ) and then decreased drastically at T 30 . Location-wise, before the ILMAH occupation all compartments exhibited a high abundance of Epicoccum OTUs that decreased over time and increased again at T 30 . Alternaria OTUs were less abundant than those identified as the Epicoccum genus. Nevertheless, they showed the same fluctuation pattern in all compartments but the bedroom. While accumulation of Davidiella OTU counts was observed in the bedroom and lab, the OTU counts in the bathroom did not differ between T 0 and T 30 . At the same time, accumulation of Davidiella OTUs was observed in the kitchen, with the highest abundance during the T 20 sampling event. To sum up, statistical analysis revealed differences in community structure between time points, in particular between T 20 and any other previous or following time point. The Davidiellaceae , Teratosphaeriaceae , Tremellales_3 , Pleosporales , and Pleosporaceae families were shown to increase over the occupation time period. On a genus level, Epicoccum , Alternaria , Pleosporales , Davidiella , and Cryptococcus showed increased abundance. Discussion Understanding the microbial characteristics of a controlled habitat, like the ILMAH, may facilitate discerning the microbial population dynamics as well as the development of appropriate countermeasures. Knowledge about the viable mycobiome will not only allow the development of required maintenance and cleaning procedures in the closed habitat but also prevent it from deteriorating and being a potential health hazard for its inhabitants [ 67 ]. Multiple studies have shown that use of propidium monoazide (PMA), a dye that can penetrate compromised cell walls, enables more accurate analysis of the viable microbiome [ 68 , 69 ]. It was also shown that PMA treatment might be successfully applied to determine dead and viable counts for various fungal species [ 70 , 71 ]. As in this study, another report showed that DNA from dead cells, when not removed from samples before molecular analyses, might overshadow the actual diversity, since the presence of less abundant microbial species was masked [ 32 ]. PMA-treated and PMA-untreated samples varied significantly in the ILMAH fungal community structures ( p value = 0.006), and it had been reported before that low-abundant species were detected in PMA-treated samples for bacteria, fungi, and viruses [ 66 ]. This approach validated the importance of PMA treatment to accurately determine the viable mycobiome of environmental samples. As a result, this study discussed only viable fungal communities to determine succession patterns and community structure over the course of time. Various studies showed a positive correlation between changes in the indoor bacteriome and human presence [ 7 , 28 , 30 , 66 , 72 ], whereas such an observation was not confirmed for the mycobiome [ 29 , 30 , 73 ]. Major sources of the indoor microbiome were reported to be associated with human skin commensals transmitted via shoes, clothes, coughing, and talking [ 74 , 75 ]. It was also shown that the indoor mycobiome was mostly influenced by airborne fungi rather than human presence and shedding, despite the fact that fungi are associated with human skin, lungs, the urogenital tract, oral and nasal cavities, and the gut [ 30 , 76 , 77 , 78 , 79 ].
In addition, a few studies demonstrated that key determinants for the indoor mycobiome might be the age of the built environment and the relative humidity, which can enhance fungal growth [ 80 ]. A recent 6-month survey of Japanese astronauts on the ISS revealed that the most abundant (dead or alive) fungal genus was Malassezia [ 10 ], whereas in this study the most frequent viable fungal genera of the closed habitat were Epicoccum , Alternaria , and Pleosporales , which are environmental organisms rather than human commensals. The presence of fungi inside the closed habitat, houses, or any type of man-made building was correlated with the amount of water (relative humidity; RH) present in the environment [ 80 ]. The environmental relative moldiness index (ERMI) increases with elevated amounts of water [ 81 ]. The most abundant genera in common households were Alternaria , Cladosporium , and Epicoccum , whereas Aspergillus and Penicillium were predominant in water-damaged buildings [ 81 ]. Similarly, in this study (measured RH 32 to 55%), the most abundant fungal genera were Epicoccum , Alternaria , Pleosporales , and Cryptococcus , which can be compared with the previous observations for common households. Nevertheless, genera present in houses and closed habitats could have an impact on human health. These molds were associated with allergies and asthma [ 67 , 82 ]. Elevated levels of fungal allergenic molecules, such as enzymes, toxins, cell-wall components, and cross-reactive proteins, could induce type I hypersensitivity [ 67 ]. Additionally, because of their ability to colonize the human body and produce toxins, volatile organic compounds, and proteases, the common molds ( Alternaria , Cladosporium , Epicoccum ) could damage the airways of immunocompromised occupants, which makes them more dangerous than any other allergenic source [ 82 , 83 , 84 , 85 ]. In this study, accumulation of the Alternaria and Epicoccum genera over time was observed, which, in combination with the reported decreased immunity in occupants of confined spaces, e.g., astronauts [ 50 , 86 ], could lead to developing allergy and asthma symptoms. Throughout this study, the abundance of dominant Pleosporaceae family members decreased (94% at T 0 to 71% at T 30 ), while other fungal families were observed to increase, possibly as a result of human presence. However, confirming a positive correlation between increased fungal diversity and human presence would require studying the mycobiome of the occupants. It is possible that the implemented cleaning procedures, including weekly dusting, sweeping, wet mopping the floor, and antibacterial wipes, resulted in suppressed growth of Pleosporaceae members over time. The community structure observed at T 20 was distinct from other time points. Detailed logging data collected during the 30-day mission did not show any abnormal accidents or cleaning activities preceding the T 20 sampling. The recorded data indicated that cleaning was conducted between the sampling events (~4–5 days prior to sampling). However, the inhabiting crew might have inadvertently cleaned just prior to the T 20 sampling, and that might have removed both total and viable fungal populations (Table 1 ). The significant increase of Davidiellaceae (22%), Dothioraceae (11%), Saccharomycetales (8%), and Trichocomaceae (8%) sequences at T 20 might be due to the fact that under-represented members became available for the PCR reaction among the competing dominant fungal DNA. Both Cladosporium sp. members of the Davidiellaceae family and Aureobasidium sp.
of the Dothioraceae family have the capacity to survive extreme environments like the Antarctic ice [ 87 ] or the site of the Chernobyl Power Plant accident [ 88 ]. Additionally, Cladosporium sp. and Aureobasidium pullulans were isolated from hypersaline waters with NaCl concentrations reaching 25%, indicating high osmotolerance [ 89 , 90 ]. A Penicillium sp. of the Trichocomaceae family isolated from high-altitude soil in the Indian Himalaya has been shown to tolerate a wide range of pH from 2 to 14 and a salt concentration between 10 and 20% [ 91 ]. Most Aspergillus sp. of the Trichocomaceae family are soil fungi or saprophytes [ 92 ], but there has been a recent report of the isolation of A. fumigatus from the ISS [ 32 ]. In-depth analysis of isolates ISSFT-021 and IF1SW-F4 revealed increased UV resistance (in preparation) and virulence in a neutrophil-deficient larval zebrafish model of invasive aspergillosis [ 93 ]. All in all, the capacity to survive in such extreme environments might help the Davidiellaceae , Dothioraceae , and Trichocomaceae species observed in this study to adjust to and survive the hostile conditions of the ILMAH. Conclusions Accumulation of the viable mycobiome over the duration of the experiment and differences in fungal community profiles were observed. On a genus level, Epicoccum, Alternaria, Pleosporales, Davidiella , and Cryptococcus showed increased abundance over the occupation time. Unlike the results of the molecular analyses, cultivable fungal counts decreased over time. Epicoccum nigrum was the most dominant cultivable isolate; however, most of the fungal species detected via the molecular approach were not cultured. The overall fungal diversity changed during human presence, which suggests that, in the long run, proper maintenance protocols may be required to preserve a closed habitat from deteriorating and to keep it safe for its inhabitants. | Human presence in closed habitats that may one day be used to explore other planets is associated with changes in the composition of the fungal community - the mycobiome - that grows on surfaces inside the habitat, according to a study published in the open access journal Microbiome. Dr Kasthuri Venkateswaran, Senior Research Scientist at the NASA Jet Propulsion Laboratory, Caltech, and corresponding author of the study said: "Our study is the first report on the mycobiome of a simulated habitat meant for the future human habitation of other planets. We used the Inflatable Lunar/Mars Analog Habitat (ILMAH), a unique, simulated closed environment that mimics the conditions found on the International Space Station and possible human habitats on other planets. We showed that the overall fungal diversity changed when humans were present." The researchers found that certain kinds of fungi - including known pathogens that can colonize the human body and cause allergies, asthma and skin infections - increased in number while humans were living inside the ILMAH. Prolonged stays in closed habitats might be stressful for inhabitants and thus lead to decreased immune response, making people more vulnerable to opportunistic pathogens like fungi. Dr Venkateswaran said: "Fungi are extremophiles that can survive harsh conditions and environments like deserts, caves or nuclear accident sites, and they are known to be difficult to eradicate from other environments including indoor and closed spaces.
Characterizing and understanding possible changes to, and survival of, fungal species in environments like the ILMAH is of high importance since fungi are not only potentially hazardous to the inhabitants but could also deteriorate the habitats themselves." Knowing how fungal communities change in the presence of humans is thus necessary for the development of appropriate countermeasures to maintain habitats like the ILMAH or the ISS and to protect the health of the people who live there. The primary goal of the ILMAH was to understand the physiological, psychological, and behavioral changes in humans in a confined environment. Three student crews were housed inside the ILMAH for 30 days. In order to determine which fungal species were present and how the composition of the mycobiome changed during human habitation, samples collected at various time points over a 30-day period were characterized. The ILMAH was completely isolated from the outside world, except for the exchange of filtered air between the indoor and outdoor environments. Crew members were given a weekly work schedule which included cleaning the habitat and collecting surface samples. Samples were collected from eight sampling locations at four time points: just before habitation and at 13, 20 and 30 days of habitation. The habitat was cleaned weekly with antibacterial wipes. The researchers gene-sequenced the samples to show which species of fungus were present and to determine the total (alive and dead) and viable (alive and able to reproduce) fungal populations. They showed that the diversity of the mycobiome and the levels of different fungal populations changed over the duration of the experiment. For example, populations of Cladosporium cladosporioides - a common outdoor fungus - increased. While C. cladosporioides rarely causes infections in humans, it could cause asthmatic reactions, especially in individuals with weakened immune systems, such as astronauts. Dr Venkateswaran said: "In-depth knowledge of the viable mycobiome will allow the development of required maintenance and cleaning procedures in a closed habitat like ILMAH and also prevent it from deteriorating and becoming a health hazard to its inhabitants. However, to be able to show that increased fungal diversity is a result of human presence, the mycobiome of the occupants will also need to be studied." | 10.1186/s40168-017-0280-8 |
Earth | Historical climate fluctuations in Central Europe overestimated due to tree ring analysis | Josef Ludescher et al, Setting the tree-ring record straight, Climate Dynamics (2020). DOI: 10.1007/s00382-020-05433-w Journal information: Climate Dynamics | http://dx.doi.org/10.1007/s00382-020-05433-w | https://phys.org/news/2020-09-historical-climate-fluctuations-central-europe.html | Abstract Tree-ring chronologies are the main source for annually resolved and absolutely dated temperature reconstructions of the last millennia and thus for studying the intriguing problem of climate impacts. Here we focus on central Europe and compare the tree-ring based temperature reconstruction with reconstructions from harvest dates, long meteorological measurements, and historical model data. We find that all data are long-term persistent, but in the tree-ring based reconstruction the strength of the persistence quantified by the Hurst exponent is remarkably larger ( \(h\cong 1.02\) ) than in the other data ( \(h=\) 0.52–0.69), indicating an unrealistic exaggeration of the historical temperature variations. We show how to correct the tree-ring based reconstruction by a mathematical transformation that adjusts the persistence and leads to reduced amplitudes of the warm and cold periods. The new transformed record agrees well with both the observational data and the harvest date-based reconstructions and allows more realistic studies of climate impacts. It confirms that the present warming is unprecedented. 1 Introduction One of the important questions in global change research is what impact the recent increase in warming will have on societies and human beings (Stocker et al. 2013 ). One possibility for studying this question is to look at historical developments in relation to annually resolved and absolutely dated temperature reconstructions. Presently, the main source for these kinds of data are tree-ring chronologies that in some cases cover more than two millennia (Esper et al. 2016 ). However, the reliability of their high-frequency variability (Matalas 1962 ; Cook et al. 1999 ) as well as their low-frequency variability (von Storch et al. 2004 ; Moberg 2012 ; Franke et al. 2013 ; Bunde et al. 2013 ) is debated, and it is not yet clear to what extent tree-ring chronologies reflect historical reality. The purpose of this article is to assess and improve tree-ring based reconstructions of temperatures. We focus on central Europe, where the temperatures of the past can be obtained not only from tree-rings (Büntgen et al. 2011 ), but also from long meteorological measurements (1753-present) (Berkeley Earth; Czech Hydrometeorological Institute) as well as from grape and rye harvest date records (Chuine et al. 2004 ; Labbé and Gaveau 2011 ; Meier et al. 2007 ; Wetter and Pfister 2011 ) that date back up to 1370, and from historical model data (850-1849) (Jungclaus et al. 2010 ; Earth System Grid Federation). Figure 1 a shows the tree-ring based reconstruction (TRBR) of central European summer temperatures (Büntgen et al. 2011 ), together with its 30-year moving average that reveals the long-term temperature variations in the record. Particularly large temperature increases occurred between 1340 and 1410 and between 1820 and 1870 that are even comparable in amplitude with the recent warming trend since 1970, indicating that the recent (anthropogenic) warming may not be unprecedented.
To assess if the amplitudes of the warm and cold periods in the reconstructed temperature record reflect reality, we first performed a persistence analysis of the TRBR record and compared it with the results from the other three sources. We find that all records are long-term persistent. The strength of the long-term persistence is quantified by the Hurst exponent h. This exponent was originally introduced by Harold E. Hurst to quantify the persistence in streamflow data of the Nile (Hurst 1951 ) and has since then been applied to a large number of records in hydrology, climate, financial markets and biology (see, e.g., Bunde et al. 2012 ). Here we show, that the strength of the long term persistence is remarkably larger in the TRBR record than in the other time series. We demonstrate that it is this overestimation of the long-term persistence that leads to the enormous climate variations in the tree-ring based reconstruction and discuss its potential origin. Finally, we show how the TRBR record can be corrected by a mathematical transformation that reduces the Hurst exponent and leads to reduced amplitudes of the warm and cold periods while leaving their positions unchanged. The transformed record agrees well with both the observational data and the harvest dates based temperature reconstructions and also confirms unprecedented recent warming in central Europe. Fig. 1 Tree ring-based reconstruction of the central European temperatures in the last millennium. a The reconstructed June-August temperatures in units of the records standard deviation. The red line depicts the moving average over 30 years. b , c The DFA2 fluctuation functions F ( s ) and the WT2 fluctuation functions G ( s ), respectively, for the reconstructed data from a , for monthly observational data (Swiss temperatures from Berkeley Earth, station data from Prague) and the MPI-ESM-P-past1000 model output for central European summer temperatures, from top to bottom. For the TRBR and model data, the time scale s is in years, while for the two observational records, it is in months. Note that in the double logarithmic presentation, the asymptotic slopes (Hurst exponents h ) for the reconstruction data ( \(h\cong 1\) ) and the observational and model data ( \(h\cong 0.6\) ) differ strongly Full size image 2 Data and methods 2.1 Data Tree-ring data: We consider the tree ring-based reconstruction (TRBR) of central European summer temperature variability by Büntgen et al. ( 2011 ), which reconstructs the June-August temperature over nearly the past 2500 years. A description of the methodology used can be found in Büntgen et al. ( 2011 ). The TRBR data set can be obtained from the NOAA paleoclimatology datasets (National Centers for Environmental Information). Here we focus on the last millennium, where also temperature reconstructions based on harvest dates are available. The data are displayed in Fig. 1 a. Harvest dates: The 4 harvest date records (Burgundy, Dijon and Swiss grape harvest data as well as rye harvest data) considered here specify, in each year, the calendar date where the harvest begins. From the records one can reconstruct the temperatures during the growth season in central Europe (Chuine et al. 2004 ; Labbé and Gaveau 2011 ; Meier et al. 2007 ; Wetter and Pfister 2011 ). (i) The Burgundy data set (Chuine et al. 2004 ) ranges from 1370 to 2003 and reconstructs the April–August temperatures. There exists one gap (1978) that we interpolated linearly. 
(ii) The French Dijon grape data set (Labbé and Gaveau 2011 ) is between 1385 and 1906 and represents the April-September temperatures. Since the Dijon data set has large gaps before 1448 we concentrated on the time span between 1448 and 1906. There are only short gaps that we filled by linear interpolation. (iii) The Swiss grape data set (Meier et al. 2007 ) is from the Swiss Plateau region and north-western Switzerland and ranges from 1480 to 2006 and reconstructs the April–August temperatures. (iv) The rye harvest data set (Wetter and Pfister 2011 ) is based on harvest dates in northern Switzerland and southwestern Germany. The data are between 1454 and 1970 and reconstruct the March–July temperatures. Observational and model temperature data: We consider the long observational monthly temperature records for France and Switzerland that have been established by the Berkeley Earth analysis (Berkley Earth). The records range from 1753 to 2013. We also consider a long observational record from Prague (1775–2018) (Czech Hydrometeorological Institute). From the monthly data, we also obtain the mean June–August temperatures. Since these summer temperature time series are too short for a reliable DFA2 persistence analysis, which requires more than about 400 data points, we analyzed the complete monthly time series, which provide upper bounds for the seasonal Hurst exponents (see below). We also consider the central European summer temperatures from the output of the past 1000 (850–1849) run of the MPI-ESM-P general circulation earth system model (Jungclaus et al. 2010 ). We obtained the data from (Earth System Grid Federation). 2.2 Persistence analysis It is well known that hydroclimatic records like temperature data and river flows are long-term persistent (Hurst 1951 ; Mandelbrot and Wallis 1969 ; Salas et al. 1980 ; Pelletier and Turcotte 1997 ; Koscielny-Bunde et al. 1998 ; Eichner et al. 2003 ; Koscielny-Bunde et al. 2006 ; Mudelsee 2007 ; Franzke 2010 ; Mudelsee 2013 ; Yuan and Fu 2014 ; Blesic et al. 2019 ; Yuan et al. 2019 ). The long-term persistence of temperature and river flow data represents an efficient testbed for model outputs, as well as reconstructions (Meko and Graybill 1995 ; Govindan et al. 2002 ; Livina et al. 2007 ; Ault et al. 2013 ). In long-term persistent records \(\lbrace {y_i}\rbrace , i=1,\dots ,N\) with zero mean the power spectral density S ( f ) decays with increasing frequency f as \(S(f)\sim f^{-\beta }\) (see, e.g. Turcotte 1997 ). For white noise, \(\beta =0\) . Records with \(0<\beta <1\) are stationary and can be characterized by an autocorrelation function C ( s ) with time lag s that decays by a power law, \(C(s)\sim (1-\gamma )s^{-\gamma }\) , with \(\gamma =1-\beta\) . As a consequence of the long-term persistence, long-lasting deviations from the mean appear that lead to a characteristic mountain-valley structure in the record. With an increasing persistence, this mountain-valley structure becomes more pronounced, and the likelihood of encountering large mountains and deep valleys increases. Accordingly, it depends on the strength of the persistence, if a trend may be considered as natural or not (Lennartz and Bunde 2009b ; Tamazian et al. 2015 ), and the prerequisite for comparing trends in different records is that both records have the same persistence characterized by the same \(\gamma\) -value. 
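For readers who want to experiment with such records, a minimal Fourier-filtering generator for long-term persistent data with \(S(f)\sim f^{-\beta }\) and \(\beta =2h-1\) is sketched below; this is one standard construction (cf. Turcotte 1997) and not necessarily the authors' own code:

```python
# Generate a Gaussian record whose power spectrum decays as f^(-beta),
# i.e. with prescribed Hurst exponent h = (1 + beta) / 2.

import numpy as np

def synthetic_lt_persistent(n, h, rng=None):
    """Standardized Gaussian record of length n with Hurst exponent h."""
    if rng is None:
        rng = np.random.default_rng()
    beta = 2.0 * h - 1.0                      # from h = (1 + beta) / 2
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0)
    spectrum[1:] *= f[1:] ** (-beta / 2.0)    # amplitude ~ f^(-beta/2)
    spectrum[0] = 0.0                         # remove the mean
    y = np.fft.irfft(spectrum, n)
    return (y - y.mean()) / y.std()

# Example: a record at the persistence level found for the TRBR data
record = synthetic_lt_persistent(2**16, h=1.02, rng=np.random.default_rng(42))
```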
Since both S ( f ) and C ( s ) exhibit large finite-size effects and are affected by external deterministic trends, one can use neither C ( s ) nor S ( f ) to reliably estimate the persistence of short records where N is of the order of \(10^3\) . Here, to reveal the memory in the data, we followed Bunde et al. ( 2013 ) and used wavelet (WT2) and detrended fluctuation analysis (DFA2), where fluctuation functions F ( s ) and G ( s ) are calculated. In long-term persistent data both fluctuation functions show power-law behavior \(\sim s^h\) , \(h=(2-\gamma )/2\) , but on complementary time scales s : WT2 on short time scales and DFA2 on longer time scales. For \(\gamma =1\) (white noise), \(h=1/2\) . Accordingly, the deviation of the (Hurst) exponent h from 1/2 quantifies the strength of the long-term persistence of a time series: h well above 1/2 signifies stronger long-term persistence with a pronounced mountain-valley structure, while \(h=1/2\) corresponds to white noise or short-memory processes, e.g., an autoregressive process of first order, where the autocorrelation function decays exponentially. Thus, the question of whether an observed trend in a record is natural depends essentially on the Hurst exponent h of the record. In both DFA2 and WT2 one measures the variability of a record by studying the fluctuations in segments of the record as a function of the segment length s . Accordingly, one first divides the record into non-overlapping windows \(\nu\) of length s . In WT2 (see, e.g., Koscielny-Bunde et al. 1998 ; Bunde et al. 2013 ) one determines, in each segment \(\nu\) , the mean value \(\bar{y}_{\nu }\) of the data and considers the linear combination \(\varDelta _\nu ^{(2)} = \bar{y}_{\nu }(s) - 2\bar{y}_{\nu +1}(s)+\bar{y}_{\nu +2}(s)\) . Then one averages \((\varDelta _\nu ^{(2)})^2\) over all segments \(\nu\) , takes the square root and multiplies by s to arrive at the desired fluctuation function G ( s ). For white noise data, \(G(s)\sim s^{1/2}\) . For long-term persistent records, we have $$\begin{aligned} G(s)\sim s^h, 1<s<N/20. \end{aligned}$$ (1) The exponent h can be associated with the Hurst exponent and is related to the correlation exponent \(\gamma\) and the spectral exponent \(\beta\) by \(h = 1-\gamma /2 = (1+\beta )/2\) . For spectral exponents \(\beta\) above 1, corresponding to Hurst exponents above 1, the data are non-stationary. A well-known example is a simple random walk, where \(\beta =2\) and \(h=1.5\) . In DFA2 (Kantelhardt et al. 2001 ) one focuses, in each segment \(\nu\) , on the cumulated sum \(Y_i\) of the data and determines the variance \(F_{\nu }^2(s)\) of the \(Y_i\) around the best polynomial fit of order 2. After averaging \(F_{\nu }^2(s)\) over all segments \(\nu\) and taking the square root, we arrive at the desired fluctuation function F ( s ). One can show that for long-term persistent records, $$\begin{aligned} F(s)\sim s^h, 8<s<N/4. \end{aligned}$$ (2) Accordingly, F ( s ) and G ( s ) show the same power-law dependence on complementary time scales. By construction, linear (deterministic) trends in the data are eliminated, which is particularly important when determining long-term persistence in the presence of climate change. By evaluating both functions, we can measure long-term persistence from very short ( \(s=1\) ) towards very large ( \(s=N/4\) ) time scales.
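A compact Python sketch of DFA2 as just described is given below (the WT2 analogue follows the same pattern, using the second difference of the segment means instead of polynomial detrending):

```python
# DFA2: cumulate the mean-subtracted record, cut it into non-overlapping
# windows of length s, detrend each window with a second-order
# polynomial fit, and average the residual variances. For long-term
# persistent data, F(s) ~ s^h on 8 < s < N/4.

import numpy as np

def dfa2(y, scales):
    profile = np.cumsum(np.asarray(y, float) - np.mean(y))  # cumulated sum Y_i
    F = []
    for s in scales:
        n_seg = profile.size // s
        segments = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        res_var = 0.0
        for seg in segments:
            fit = np.polyval(np.polyfit(t, seg, 2), t)      # order-2 fit
            res_var += np.mean((seg - fit) ** 2)
        F.append(np.sqrt(res_var / n_seg))
    return np.asarray(F)

# White-noise check: the fitted slope (Hurst exponent) should be ~0.5
y = np.random.default_rng(7).standard_normal(4096)
scales = np.unique(np.logspace(np.log10(8), np.log10(y.size // 4),
                               20).astype(int))
h_est = np.polyfit(np.log(scales), np.log(dfa2(y, scales)), 1)[0]
print(f"estimated h = {h_est:.2f}")
```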
We estimated the error bars for DFA2 and WT2 by creating long ( \(N=2^{21}\) ) synthetic records (Turcotte 1997 ) with prescribed Hurst exponents between \(h=0.2\) and \(h=1.5\) and analyzing by DFA2 and WT2 short subrecords with the same lengths as the original records. We identified the cases in which the subrecords’ Hurst exponents were within \(\pm 0.01\) of the empirical (observational) data’s Hurst exponent and determined the distribution of the prescribed Hurst exponents that correspond to these cases. The standard deviation of this distribution is taken as the error bar. For the shortest record ( \(N=459\) ) we obtain \(\varDelta h=0.06\) , while for the longest record ( \(N=3132\) ) \(\varDelta h=0.03\) . For seasonal records comprising only a subset of a year, e.g., summer temperatures June–August as in the tree-ring data or April–August as in the Burgundy data, the obtained Hurst exponent constitutes only a lower bound for the full year’s Hurst exponent. To estimate this effect, we have created synthetic full-year records and analyzed the corresponding seasonal subrecords. For example, for the tree-ring data, where \(N=1000\) and \(h=1.02\) , the annual Hurst exponent is around \(h=1.1\) , while for the Burgundy data, with \(N=634\) and \(h=0.55\) , the annual Hurst exponent is around \(h=0.64\) . We would like to note that the Hurst exponent was first determined via a rescaled range (R/S) analysis (Hurst 1951 ) and is also referred to as the Hurst coefficient. In an (R/S) analysis (for a review see, e.g., Salas et al. 1980 ), one considers how the range R of the cumulated sum Y , divided by the standard deviation of the data, scales with the considered segment length (period) s . For long-term persistent records, (R/S) scales as $$\begin{aligned} (R/S)(s)\sim s^h, \quad s\gg 1. \end{aligned}$$ (3) Compared with DFA2, the drawbacks of the (R/S) analysis are (1) strong finite-size effects and (2) the dependence on external linear trends. DFA2 can also be generalized easily to study nonlinear correlations in a record (Kantelhardt et al. 2001 ). 3 Results 3.1 Long-term persistence The result of the DFA2 and WT2 analysis of the tree ring-based summer temperatures of central Europe in the last millennium is shown in Fig. 1 b, c. In the double logarithmic presentation, the asymptotic parts of the DFA2 fluctuation function F ( s ) and the WT2 fluctuation function G ( s ) are straight lines, confirming the power-law behavior of F ( s ) and G ( s ), with slope \(h\cong 1.02\) . Accordingly, the reconstructed temperature reveals a very high long-term persistence, at the border of non-stationarity. We compare this result with long monthly observational temperature records from Switzerland (1753–2013) and Prague (1775–2018) as well as with a long climate model simulation output (850–1849). For these records, the DFA2 analysis yields \(h=0.58, 0.61, 0.57\) , respectively. The WT2 analysis yields \(h=0.58,0.58,0.56\) , with the differences from the DFA2 results lying within the error bars ( \(\varDelta h\approx \pm 0.03\) ). These values are consistent with numerous observational temperature records from the last century, where the Hurst exponent typically ranges between 0.55 and 0.75 (see, e.g., Koscielny-Bunde et al. 1998 ; Eichner et al. 2003 ). Thus, the long-term persistence of the tree-ring data is considerably higher than that of the observational records.
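The Monte Carlo procedure behind the quoted error bars can be sketched by reusing the fourier_surrogate() and dfa2() helpers from the sketches above; the relation β = 2h − 1 converts a prescribed Hurst exponent into the spectral filter. The trial count and scale grid are illustrative, and the loop is deliberately left unoptimized.

```python
import numpy as np

def hurst_error_bar(h_emp, n_record, n_trials=1000, tol=0.01, seed=1):
    """Distribution of prescribed Hurst exponents whose short-subrecord
    estimates land within +/- tol of the empirical value; its standard
    deviation is the error bar Delta h. Slow: one long synthetic record
    per trial."""
    rng = np.random.default_rng(seed)
    scales = np.unique(np.logspace(np.log10(8), np.log10(n_record // 4),
                                   12).astype(int))
    accepted = []
    for _ in range(n_trials):
        h_true = rng.uniform(0.2, 1.5)                 # prescribed Hurst exponent
        rec = fourier_surrogate(2**21, 2 * h_true - 1,
                                seed=int(rng.integers(2**31)))
        i0 = int(rng.integers(0, 2**21 - n_record))
        F = dfa2(rec[i0:i0 + n_record], scales)        # subrecord, data length
        h_est = np.polyfit(np.log(scales), np.log(F), 1)[0]
        if abs(h_est - h_emp) <= tol:
            accepted.append(h_true)
    return np.std(accepted)

# e.g. hurst_error_bar(0.58, 3132) should be of the order of the quoted
# Delta h = 0.03; shorter records give larger error bars.
```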
However, since the observational data cover only a relatively short period, between 1753 and the present, we cannot exclude a priori that the persistence of the central European temperatures might have changed over time or might depend on the time scale considered, as suggested for precipitation records in Markonis and Koutsoyiannis ( 2016 ). To study this possibility, we expanded the time window by studying several harvest date based temperature reconstructions from central Europe, which reach back to 1370 (see Data and Methods section). Figure 2 a–c show the graphs of the three harvest date proxy temperatures we considered. Figure 2 d shows the negative harvest date anomalies for the Dijon grapes, for which a temperature reconstruction has not been performed. We include the Dijon record in our study since, for the other three harvest data sets, the negative harvest date anomalies and the reconstructed temperature anomalies based on them show the same structure when the anomalies are plotted in units of their standard deviation from the mean. As in Fig. 1 , the red curves denote the 30-year moving average revealing the long-term summer temperature variations. Figure 2 e, f show the result of our DFA2 and WT2 persistence analysis. The figure shows that the fluctuation functions are characterized by Hurst exponents between \(h=0.52\) and \(h=0.69\) , in agreement with the observational data and considerably below the tree-ring data. 3.2 Climate variations since 1753 For a more direct evaluation of the long-term summer temperature variations suggested by the tree-ring and harvest-date based proxies, we next compare the 30-year moving averages of the proxy data and the observed temperature records. To obtain a coherent picture, before determining the moving average we normalized each data set with respect to the period 1753–2000 by subtracting its mean and dividing by its standard deviation. Fig. 2 Harvest date based temperature reconstructions. Temperature anomalies based on a rye harvest dates from northern Switzerland and southwestern Germany (1454–1970), b grape harvest dates in Switzerland (1480–2006) and c grape harvest dates in Burgundy (1370–2003). d The inverted harvest date anomalies in Dijon (1385–1906). All anomalies are shown in units of their standard deviation from the mean. The red lines are moving averages over 30 years. e , f The DFA2 fluctuation functions F ( s ) (with Hurst exponents) and WT2 fluctuation functions G ( s ) of the records from ( a – d ). The WT2 Hurst exponents are \(h=0.59,0.63,0.54\) and 0.52 from top to bottom. Fig. 3 Long-term summer temperature variations in Switzerland and France. 30-year moving averages of the observational temperatures (Berkeley Earth) of Switzerland (red), France (orange) and the temperature reconstructions based on tree rings (black), Swiss grape harvest dates (brown) and Burgundy harvest dates (blue). Note that the moving averages of both observational records are nearly identical; indeed, their correlation coefficient is close to 1. Figure 3 shows that between 1870 and 1988, the TRBR data reflect quite well (up to some offset) the observed long-term summer temperature variations, only slightly overestimating the depth of the valley around 1910. In this time window, the temperature reconstruction based on the tree rings appears to be superior to the harvest date based reconstructions. Between 1790 and 1870, however, the agreement is not so good.
The TRBR curve runs through a valley as the observational curve does, but considerably exaggerates the depth of the valley. In contrast, the harvest date proxies only run through a shallow minimum, remarkably close to the observational data. The figure suggests that the TRBR data describe the occurrence of warm and cold periods in the past properly but, owing to the considerably enhanced long-term persistence, strongly overestimate the magnitudes of the cold and warm periods. 3.3 Correction of the long-term persistence To correct the enhanced long-term persistence in the TRBR, we are interested in a mathematical transformation of the data that lowers the natural long-term persistence while leaving the gross features of the record, the positions of the warm and cold periods, unchanged. We performed the following mathematical transformation to change the original TRBR Hurst exponent \(h_0=1.03\) to \(h_1=0.60\) and thus to be in line with the observational, harvest and model data. Since this transformation is only suitable for altering a record’s natural long-term persistence, i.e., in the absence of external trends, we transformed the TRBR data between 1000 and 1990, before the current anthropogenic trend became relevant. The transformation starts with calculating the Fourier transform \(y_0(f)\) of the TRBR data \(y_0(i)\) . By definition, y ( f ) is related to the power spectrum S ( f ) by \(S(f)= \vert y(f)\vert ^2\sim f^{-(2h_0-1)}\) . Next, we define \(y_1(f)=y_0(f) f^{(h_0-h_1)}\) . Finally, we transform \(y_1(f)\) back to the time domain. The resulting record \(y_1(i)\) has the desired properties: it is long-term correlated with Hurst exponent \(h_1\) , since its power spectrum scales as \(S_1(f)= \vert y_1(f)\vert ^2\sim \vert y_0(f)\vert ^2 f^{2(h_0-h_1)}\sim f^{-(2h_0-1)}f^{2(h_0-h_1)} \sim f^{-(2h_1-1)}\) . This transformation reduces the heights of warm periods and the depths of cold periods but leaves their positions unchanged. Figure 4 a compares the transformed TRBR data (blue), with \(h_1=0.6\) , with the original TRBR data (black). The bold lines are the 30-year moving averages. The figure shows that the transformation conserves the structure of the original TRBR data, while the climate variations, characterized by the depths of the minima and the heights of the maxima, are reduced. Fig. 4 Original and transformed tree-ring proxy temperature record. a Compares the original TRBR record for the period 1000–1990, where the Hurst exponent h is 1.03 (black), with the transformed TRBR record, where \(h\equiv h_1=0.6\) (blue). For better visibility, the transformed TRBR record has been shifted downward by 5 units of its standard deviation. b How the magnitudes of the cold periods in the transformed TRBR record decrease with decreasing Hurst exponent \(h_1\) . The magnitudes are quantified by the differences of the 30-year moving averages between the beginning and the end of the respective periods. c Compares the 30-year moving averages of the original and the transformed TRBR record ( \(h=0.6\) ) with the 30-year moving average of the observational temperatures from Switzerland.
The comparison shows that the transformed TRBR record fits quite nicely with the observational data. To see how the strength of the long-term variations in the transformed TRBR data depends on their Hurst exponent \(h_1\) , we have determined, in the 30-year moving average, the temperature differences in four periods (1415–1465, 1515–1536, 1562–1595, 1793–1824) where the greatest changes between 1350 and 1950 occur. The result is shown in Fig. 4 b. The figure shows that the temperature difference between the beginning and the end of each period decreases continuously with decreasing h . For h around 0.6, the temperature differences are roughly halved. Finally, we have compared the natural long-term variations in the transformed TRBR data with those in the observational data of Switzerland in the time interval between 1753 and 1990 (red curve), where anthropogenic effects had not yet become relevant. Figure 4 c shows that the overall agreement between the two data sets is very good, in particular in the period between 1790 and 1870, where the original TRBR data failed to reproduce the quite shallow minimum in the observational data. It is evident from Figs. 3 and 4 c that the transformed TRBR curve agrees best with both the reconstructed and the observed data, even though the period between 1870 and 1930 is not as well represented by the transformed TRBR data as the other periods. Interestingly, the Burgundy and Swiss harvest data also show a deviation in the same direction in this period. 4 Discussion We suggest three possible reasons (or a combination thereof) for too much persistence in tree ring-based temperature reconstructions (i–iii): (i) If the applied standardization method, i.e., tree-ring detrending, is insufficient in removing all of the age-dependent biological growth trends, the resulting chronologies will be characterized by too much persistence. This methodological constraint will be strongest when using RCS-type detrending techniques on composite datasets for which samples are not randomly distributed over time. (ii) Since ring formation partly depends on the previous year’s resources, carbon allocation from the former year(s) results in too high persistence. (iii) Since tree growth always depends on soil moisture availability, inertia effects of soil properties and water storage increase persistence. All of these factors (i–iii) are particularly strong when using ring width compared to wood density. Our study is geographically limited to central Europe, since long observational temperature records and long harvest date based temperature reconstructions are currently only available for this region. However, the analysis and, if necessary, the adjustment of a record’s long-term persistence are not limited to temperatures and are generally applicable to other tree-ring based reconstructions of long-term persistent quantities, e.g., river flows. In the case of streamflow reconstructions, the problem of persistence biases has gained a lot of attention, since the risk of multi-decadal droughts that can empty reservoirs depends strongly on the assumed persistence properties (Ault et al. 2013 , see also Woodhouse et al. 2006 ). So far, approaches to address persistence bias in this domain have been limited to short-term persistence. For instance, in Meko et al. ( 2001 ) the chronologies were adjusted to have the same AR(1) coefficient as the observed streamflow.
In contrast, the mathematical transformation suggested here adjusts a record’s long-term persistence to provide a more realistic reconstruction of the record’s low-frequency part, i.e., the climate variations characterized by, e.g., the 30-year moving average, but not of its high-frequency part. A simultaneous adjustment of a record’s long-term as well as short-term persistence might be achieved by combining fractional autoregressive integrated moving average (FARIMA) and autoregressive integrated moving average (ARIMA) filtering. This remains for future work. Since tree ring-based reconstructions play an important role in the understanding of past temperature variability, we suggest using the Hurst exponent as standard practice to assess the reconstructions’ low-frequency properties and comparing the determined values with the Hurst exponents of the corresponding time series (observational, harvest dates, models). If deviations from the expected values are detected, the data should be transformed to adjust the Hurst exponent. This will lead to a more realistic reconstruction of the record’s low-frequency signal and thus to a better understanding of the climate variations of the past.
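For concreteness, the Fourier-domain adjustment of Section 3.3 amounts to a few lines of NumPy. The following is a sketch under the paper's definitions; the function name is ours.

```python
import numpy as np

def adjust_hurst(y, h0, h1):
    """Transform a record's natural long-term persistence from Hurst
    exponent h0 to h1 by rescaling Fourier amplitudes with f**(h0 - h1):
    S(f) ~ f**-(2*h0 - 1) becomes ~ f**-(2*h1 - 1). Positions of warm and
    cold periods are preserved; only their magnitudes change. Apply only
    to the trend-free part of the record (here, before 1990)."""
    n = len(y)
    yf = np.fft.rfft(y - y.mean())
    f = np.fft.rfftfreq(n)
    yf[1:] *= f[1:] ** (h0 - h1)   # leave the f = 0 (mean) component alone
    return np.fft.irfft(yf, n) + y.mean()

# e.g. trbr_adjusted = adjust_hurst(trbr_1000_1990, h0=1.03, h1=0.60)
```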
"The adjustment does not change the chronological position of the respective cold and warm periods within the tree rings, but their intensity is reduced," explains co-author Armin Bunde from the University of Gießen. "The corrected temperature series corresponds much better with the existing observations and harvest chronicles. In its entirety the data suggests that the medieval climate fluctuations and especially the warm periods were much less pronounced than previously assumed. So the present human-made warming stands out even more." | 10.1007/s00382-020-05433-w |
Biology | Premiere: Watch the development of a larva into an adult worm | Nicola Gritti, Simone Kienle, Olga Filina en Jeroen Sebastiaan van Zon, Long-term time-lapse microscopy of C. elegans post-embryonic development, Nature Communications, DOI: 10.1038/NCOMMS12500 Journal information: Nature Communications | http://dx.doi.org/10.1038/NCOMMS12500 | https://phys.org/news/2016-08-premiere-larva-adult-worm.html | Abstract We present a microscopy technique that enables long-term time-lapse microscopy at single-cell resolution in moving and feeding Caenorhabditis elegans larvae. Time-lapse microscopy of C. elegans post-embryonic development is challenging, as larvae are highly motile. Moreover, immobilization generally leads to rapid developmental arrest. Instead, we confine larval movement to microchambers that contain bacteria as food, and use fast image acquisition and image analysis to follow the dynamics of cells inside individual larvae, as they move within each microchamber. This allows us to perform fluorescence microscopy of 10–20 animals in parallel with 20 min time resolution. We demonstrate the power of our approach by analysing the dynamics of cell division, cell migration and gene expression over the full ∼ 48 h of development from larva to adult. Our approach now makes it possible to study the behaviour of individual cells inside the body of a feeding and growing animal. Introduction Recent advances in microscopy have made it possible to follow the dynamics of many, if not all cells in the development of entire zebrafish and fruit fly embryos 1 . However, in these model organisms time-lapse microscopy is typically restricted to early stages of embryonic development. Owing to their small and transparent anatomy, nematodes such as Caenorhabditis elegans are currently the only animals in which the entire development from embryo to adult can in principle be studied with single-cell resolution 2 , 3 , 4 , 5 . This also makes C. elegans uniquely suited to study the interplay between development and environmental cues such as diet, food availability and pheromones 6 , 7 , 8 . However, long-term time-lapse microscopy is currently rarely used to study C. elegans post-embryonic development. This is because C. elegans larvae are highly motile and thus are difficult to image at high magnification. Immobilizing larvae either mechanically or by paralysis-inducing drugs allows time-lapse microscopy only for limited time periods, as it prevents the animal from feeding, resulting in developmental arrest within hours 9 , 10 . Microfluidics has been used to immobilize nematodes for microscopy by mechanical clamping 11 , 12 , flow 13 , 14 or changes in the physicochemical environment 15 , 16 , 17 ; however, most of these devices are geared towards immobilizing adult nematodes and are not designed to support sustained development. Experiments that did support normal larval growth so far lacked the resolution to study development at the single-cell level 18 , 19 , 20 . To perform time-lapse microscopy of C. elegans post-embryonic development we instead use a different approach ( Fig. 1 ): first, we constrain larval movement to the field of view of the microscope using microfabricated hydrogel chambers containing bacteria as food. Next, we use fast image acquisition to capture sharp images of larvae as they move inside each microchamber, precluding the need for immobilization altogether. Finally, we use image analysis to track the dynamics of cells inside the animal’s body. 
Microchambers have two main advantages over active microfluidics: first, they are simple to use, requiring no moving parts or flow. Second, in contrast to microfluidics, microchambers do not require the use of liquid culture. Instead, animals move and feed under conditions similar to standard C. elegans culture on agar plates and to the established microscopy protocols for studying nematode development 2 . Hydrogel microchambers have been used to constrain nematode movement for studying behaviour 21 , but so far not development. Figure 1: Imaging development of nematodes in polyacrylamide microchambers. ( a ) Schematic cross-section of a single microchamber. The hydrogel layer is clamped between a coverslip (blue, bottom) and a cover glass (blue, top). ( b ) Imaging set-up. To image animals moving within microchambers, we used LED and laser illumination to achieve short (1–10 ms) exposure times and a fast piezo Z -stage to move rapidly between imaging planes. To image multiple animals in parallel, an X–Y motorized stage cycled between individual microchambers in a microchamber array (green). ( c ) Image analysis. For each image, the body axis and positions of fluorescently labelled cells (red) are manually annotated. Cell positions are then converted to body axis coordinates to allow systematic comparison between time points. Finally, cell divisions, cell movements or changes in gene expression are recorded. ( d ) Images of a single growing animal in a 250 μm × 250 μm × 20 μm microchamber. Time is indicated in hours after hatching. Old cuticles, which are shed at the end of each larval stage (L1–L4), are also visible. Scale bar, 50 μm. ( e ) Body length (grey lines for individual animals and black line for population average) and fraction of animals in ecdysis (blue bars) as a function of time for N =16 animals. Red markers correspond to the body length of the animal shown in d . Time of ecdysis is defined by the appearance of a newly shed cuticle in the microchamber. Here we show that, using arrays of microchambers, we can perform fluorescence microscopy of developmental dynamics in 10–20 animals simultaneously, with 20 min time resolution for the full ∼ 48 h of post-embryonic development. To demonstrate the power of our approach we measured, in single animals, the dynamics of (i) seam cell divisions, (ii) distal tip cell (DTC) migration and (iii) molting cycle gene expression oscillations—three processes that, because of their ∼ 30–40 h duration, were so far inaccessible to immobilization-based time-lapse microscopy. The control of cell division, cell migration and gene expression is the hallmark of development, and our analysis shows that the dynamical information captured by our approach can provide new insight into the mechanisms that control these processes. In general, we expect that the ability to follow individual cells in freely moving and growing animals will provide an unprecedented view of development. Results Larval development in microchambers To constrain C. elegans larvae to the field of view of the microscope, we microfabricated 250 μm × 250 μm × 20 μm chambers in a 10% polyacrylamide hydrogel ( Fig. 1a ). We produced 10 × 10 microchamber arrays from a master mold made with standard soft-lithography techniques (Methods). To fabricate the chambers we used polyacrylamide rather than agarose hydrogels, as used previously 21 , because in our hands thin polyacrylamide layers were less brittle and easier to handle 22 . We filled the chambers with a single C.
elegans egg and Escherichia coli OP50 bacteria as food (Methods). Subsequently, we clamped the polyacrylamide microchamber array between a standard microscope slide and a glass coverslip to prevent the sample from drying during the experiment. Upon hatching, larvae moved and fed inside the microchambers ( Fig. 1d and Supplementary Video 1 ). Larval movement can be fast (peaking at 50 μm s −1 for reversals). To minimize larval movement during image acquisition, we optimized our microscopy set-up for short acquisition times ( Fig. 1b ). First, we used light-emitting diode (LED) trans-illumination and laser epi-illumination to reduce exposure times to 1–10 ms. Second, we used a fast piezo Z -stage to scan the microchamber in the axial direction. Together, this enables us to acquire Z -stacks of 20–30 Z-slices with two imaging channels in <500 ms. By combining an sCMOS camera with a large camera chip (2,048 × 2,048 pixels, 6.5 μm × 6.5 μm per pixel) with a high numerical aperture (NA) 40 × objective, we could image the entire microchamber ( ∼ 250 μm) while still resolving subcellular features ( ∼ 0.3 μm for 1.3 NA). By moving between individual chambers in the microchamber array using an X–Y motorized stage, we routinely imaged 10–20 larvae in a single imaging session. We observed that, after hatching, individual larvae developed into adult animals over the course of ∼ 40 h, without leaving the chamber ( Fig. 1d ). To confirm that the observed growth corresponded to normal development, we measured two markers of developmental progression. C. elegans development is divided into four larval stages, labelled L1–L4. Animals molt at the end of each larval stage, an event called ecdysis. We first measured, for each animal, the time of all ecdyses, marked by the appearance of an old cuticle inside the chamber ( Fig. 1d ). The observed duration of each larval stage (average and standard deviation (s.d.) of 11.1±0.2, 7.3±0.2, 7.1±0.3 and 10.2±0.4 h for L1–L4) agreed with established values under standard culturing conditions 23 . Second, we measured body-length extension as a function of time ( Fig. 1e ). We found that body length varied between individual time points, likely reflecting the deformability of the animal’s body. However, on average, we observed that body length increased at a fixed, larval stage-dependent rate, with pauses in growth observed before molts, as observed previously 20 , 23 . Body-length extension in microchambers agreed well with growth as observed on standard agar plates and occurred with limited compression of the animal in the vertical direction ( Supplementary Fig. 1 ). Moreover, the body length at the start of each larval stage agreed well with previous measurements 23 . Together, this showed that C. elegans larvae developed normally inside our microchambers. We found significant animal-to-animal variability, both in timing of ecdysis and in body-length extension ( Fig. 1e ), even in animals imaged simultaneously. Similar variability was observed recently in C. elegans larvae developing in liquid culture 20 . However, to exclude the possibility that this variability was caused by insufficient food in the microchambers, we also quantified the timing of ecdysis and body-length extension in animals contained in larger microchambers (290 μm × 290 μm × 25 μm; Supplementary Fig. 1a ). We observed neither changes in the dynamics of development nor a decrease in animal-to-animal variability. This suggested that the observed variability is intrinsic to C. elegans development.
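The stage-resolved growth rates discussed above are straightforward to extract once the ecdysis times are annotated. Below is a minimal sketch, assuming NumPy arrays for the measured time series; the piecewise-linear model and all names are our illustration, not the authors' analysis code.

```python
import numpy as np

def stage_growth_rates(t, length, ecdysis_times):
    """Fit one linear growth rate per larval stage to a body-length
    time series, using annotated ecdysis times as stage boundaries.
    t: hours after hatching; length: body length (um) per frame;
    ecdysis_times: the four ecdysis times [L1, L2, L3, L4] in hours."""
    bounds = [0.0] + list(ecdysis_times)
    rates = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        sel = (t >= lo) & (t < hi)
        rates.append(np.polyfit(t[sel], length[sel], 1)[0])  # um per hour
    return rates  # one slope per larval stage, L1-L4
```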
Seam cell lineage Because of its invariant cell lineage, C. elegans is uniquely suited to study the genetic control of cell lineages 2 , 24 . However, obtaining lineages remains laborious as it relies on manual observation over extended time periods. To test whether our set-up could simplify lineage analysis, we used it to study the seam cell lineage. Seam cells form a row of cells along the left and right sides of C. elegans animals ( Fig. 2a ) that divide with a complex pattern of asymmetric and symmetric cell divisions over an ∼ 40 h period 2 . Asymmetric divisions result in a new undifferentiated seam cell and a differentiated hypodermal (H1, H2, V1–V6, T) or neuronal/glial cell (H2, V5, T). In the L2 stage, this is preceded by a symmetric division that doubles the seam cell number (H1, V1–V4, V6). At the L4 molt, the remaining seam cells terminally differentiate. Because of the long duration, full seam cell lineages have never been imaged in a single animal. As a consequence, it remains poorly understood how the timing of seam cell divisions is controlled. Figure 2: Seam cell lineage. ( a ) Position of seam cells (in red) along the body axis. ( b ) Example of seam cell lineages measured in a single animal. Black lines represent seam cells and grey lines differentiated cells. Dashed lines indicate time of ecdysis, separating the different larval stages, L1–L4. Divisions are indicated at the exact time of occurrence, with 20 min resolution. ( c ) Image sequence of the V1 lineage in a single animal carrying a wIs51[SCMp::GFP] nuclear seam cell marker. Seam cell nuclei are indicated by red arrows. Other nuclei belong to hypodermal cells. Images were computationally straightened and aligned to the posterior-most seam cell. Scale bar, 10 μm. ( d ) Analysis of cell division timing. For each seam cell division i , we plot the relative division time T i − ⟨T⟩ (black markers), where T i is the cell division time and ⟨T⟩ is the division time averaged over all nine lineages, H1–T, on both sides of the animal. Also shown is the relative division time averaged over all animals (red markers). ( e ) Animal-to-animal variability in cell division time in the first three divisions of the V3 lineage. ( f ) Examples of V1 lineages in different MBA48-mutant animals. Red lines represent lineage errors. The final number of seam cells is indicated for each lineage. ( g ) Occurrence of lineage errors. For each lineage and larval stage, colour represents the probability of errors. Also shown are the mean probability of errors for each larval stage (right-most column) and each lineage (bottom row). To visualize the seam cells, we used a strain, wIs51[SCMp::GFP] , that carries a nuclear seam cell marker 25 . This marker is sufficiently bright that we could visualize seam cells on both sides of the body over all four larval stages ( Fig. 2c and Supplementary Video 2 ). We detected cell divisions by the first appearance of two daughter nuclei at the position previously occupied by the mother cell. We could unambiguously assign the fate of each daughter cell, as only the seam cell daughter retained nuclear fluorescence. In this way, we could reconstruct the full lineage for all seam cells ( Fig. 2b ). Our analysis extends the standard lineaging approach 2 by providing the exact time of each division relative to the time of hatching, allowing us to study variation in timing both between seam cell lineages and within each lineage between animals.
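The relative division time defined in the Fig. 2d legend reduces to a per-animal mean subtraction; a minimal sketch follows, with the dictionary layout being our own illustrative choice.

```python
import numpy as np

def relative_division_times(T):
    """T: dict mapping lineage name (e.g. 'V2L') to its division time
    T_i (h after hatching) for one animal at one round of divisions.
    Returns T_i - <T>, with <T> the mean over all lineages on both
    sides of that animal (cf. Fig. 2d)."""
    mean_T = np.mean(list(T.values()))
    return {lineage: Ti - mean_T for lineage, Ti in T.items()}
```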
We first compared the average cell division timing between the different lineages ( Fig. 2d ). We measured, at each larval stage, the division time of each individual seam cell with respect to the average division time of all seam cells in the same animal. We found that seam cell divisions followed a particular sequence, with seam cells at the centre of the body (V2–V4) on average dividing before those closer to the head and tail (H1, H2, V6, T). This difference in timing is most pronounced in the earlier larval stages. The main deviation from this sequence was V5, which typically divided first in the L1 and L2 larval stages. However, we observed significant variability around the mean division times ( Fig. 2d ), leading to deviations from the average sequence. For instance, in 3/16 animals other seam cells divided before V5 at the L1 stage ( Supplementary Fig. 2 ). We also observed significant animal-to-animal variability in division time within seam cell lineages ( Fig. 2e ), with typical s.d. of ∼ 0.3 h. Such quantitative measurements of average timing and variability can contribute towards understanding the cues that trigger seam cell divisions. Next, we tested whether our set-up could aid the analysis of lineage mutants. We focused on an uncharacterized mutant strain, MBA48 (a gift from M. Barkoulas, Imperial College London), that exhibited variable seam cell numbers in adult animals, as observed using the wIs51 seam cell marker. Variable lineage mutants are difficult to study as they require obtaining lineages in multiple animals. To find the origin of the variability in seam cell number in MBA48, we determined full seam cell lineages for multiple animals. We observed two deviations from the wild-type lineage: (i) conversion of asymmetric divisions into symmetric divisions that yielded two seam cells (62/456 divisions, Fig. 2f and Supplementary Video 3 ) and (ii) seam cells failing to divide (9/456 divisions). Note that in the latter case the number of hypodermal cells is reduced, but the final seam cell number is unaffected. Hence, such deviations cannot be identified when seam cells are counted only at a single developmental stage. Both types of deviations occurred stochastically, predominantly in the L3 and L4 stages and were distributed unequally over the different lineages, most strongly impacting the H1 and V6 lineages ( Fig. 2g ). Hence, our results show that stage- and lineage-specific differences exist in the regulation of seam cell divisions. Cell migration Cell migration is an essential aspect of development and, given its highly dynamical nature, time-lapse microscopy is a powerful tool to study it. Q neuroblast migration has been imaged using time-lapse microscopy for short (<3–4 h) periods in immobilized C. elegans larvae 9 . However, so far, no post-embryonic cell migration in C. elegans has been visualized in full. To test our set-up, we therefore attempted to image DTC migration, which is the longest cell migration process in C. elegans development 26 . The two DTCs are born at the L1 molt and guide the shape of the gonad by migrating along a stereotypical path over an ∼ 30 h period ( Fig. 3a ): during the L2 and L3 stages the DTCs migrate outwards, one moving anteriorly and the other posteriorly. In the late-L3 stage, both DTCs turn and move from the ventral to the dorsal sides of the body. Finally, in the L4 stage the DTCs move back to the centre of the body. 
DTC migration is an important model system for understanding the genetic regulation of complex migratory trajectories 27 . However, this is mostly studied by observing DTC positions at a single developmental stage, and information about the dynamics is very limited. In particular, it is unclear how kinetic parameters such as speed and direction are controlled to produce the correct migration path. Figure 3: DTC migration. ( a ) Overview showing the gonad (grey) and the DTC (red) migration path. ( b ) Computationally straightened microscopy images of a single animal carrying the qIs56[lag-2p::GFP] DTC body marker at different times after the L1 ecdysis. Dotted line indicates the central body axis. Orientation of A–P and D–V axes is as in a . Scale bar, 20 μm. ( c ) A–P position as a function of time in qIs56 animals ( N =10). Dashed line corresponds to the midbody. DTC trajectories for the animal in b are shown in black. Red bars show the fraction of animals in ecdysis as a function of time. ( d ) Same as c but for unc-6(ev400);qIs56 mutants ( N =9). Coloured lines indicate animals in which DTCs moved inwards ventrally (red), DTCs failed to turn inwards (green) or a single DTC migrated dorsally (blue). ( e ) D–V position as a function of time. Dashed line corresponds to the central body axis. Trajectories for the animal in b are shown in black. ( f ) Same as e , but for unc-6;qIs56 mutants. Coloured lines correspond to the animals highlighted in d . ( g ) Average migration velocity in wild-type animals as a function of time after L3 ecdysis. v A–P >0 indicates outward movement and v A–P <0 inward movement. Error bars are s.e.m. ( h ) Average A–P velocity for DTCs in wild-type animals (black), DTCs in unc-6;qIs56 mutants that moved inwards after the L3 ecdysis (cyan) and anterior DTCs in unc-6;qIs56 mutants that only moved outwards (magenta). We visualized DTC migration in qIs56[lag-2p::GFP] animals, in which the DTC bodies are fluorescently labelled 28 . We could follow the full DTC migration in individual animals ( Fig. 3b and Supplementary Video 4 ). To compare the DTC trajectories between animals, we quantified both the positions along the anteroposterior (A–P) axis ( Fig. 3c ) and the dorsal–ventral (D–V) axis ( Fig. 3e ). We found that the dynamics of outward migration in the L2 and L3 stages was highly reproducible, but we observed increased variability during the inward migration in the L4 stage, which was more pronounced in the anterior DTC (s.d. in position σ A–P =27 μm at the L4 ecdysis) compared with the posterior DTC ( σ A–P =12 μm; Fig. 3c ). The movement from the ventral to the dorsal side was rapid, occurring in ∼ 3 h ( Fig. 3e ). On average, we found that the anterior DTC crossed the midline of the body first, although we observed significant variability in the difference in crossing times between the two DTCs ( Supplementary Fig. 3 ). D–V migration occurred at the end of the L3 stage and was accompanied by a slowdown of A–P migration ( Fig. 3c ). To study the coordination between A–P and D–V migration in more detail, we measured the average migration velocities ⟨ v A–P ⟩ and ⟨ v D–V ⟩ as a function of time, using the L3 ecdysis as a reference point ( Fig. 3g ). We found that changes in migration dynamics were tightly coordinated: A–P migration ceased and D–V migration started simultaneously ∼ 3 h before the L3 ecdysis, and resumption of A–P migration occurred exactly after the L3 ecdysis.
The latter observation is particularly striking as the exact time of L3 ecdysis varied significantly between animals ( Fig. 3c , red bars). Previous studies already indicated that DTC migration is controlled by developmental timing cues 29 , 30 ; however, our analysis of the kinetics of DTC migration provides strong evidence for a tight link specifically between the turning events and the molting cycle. To test whether our measurements of DTC migration dynamics could also give insight into the mechanisms that control the migration direction, we followed DTC migration in an unc-6(ev400); qIs56 mutant 31 . D–V migration in DTCs is controlled by Netrin signalling, with the ligand UNC-6/Netrin proposed to form a D–V gradient 32 . In unc-6 mutants, DTCs often fail to migrate dorsally 31 . We observed that DTC migration in unc-6;qIs56 mutants was highly variable, with 11/18 DTCs migrating exclusively on the ventral side ( Fig. 3d,f and Supplementary Video 5 ). Independent of errors in D–V migration, DTCs also failed to turn inwards and continued towards the head or tail (10/18 DTCs). D–V migration of DTCs is thought to be controlled by modulating their sensitivity to UNC-6 at the appropriate time 30 . However, it is unknown how the cessation of A–P movement during D–V migration is controlled. To examine this we measured the average migration velocity in unc-6;qIs56 mutants ( Fig. 3h ). We found that those DTCs that turned inwards at the L3 ecdysis stopped A–P movement, similar to wild-type animals. However, anterior DTCs that failed to turn inwards showed no reduction in A–P velocity. This result suggested that for anterior DTCs there is no temporal cue that specifically inhibited A–P movement during D–V migration. Rather, the decrease in A–P movement seemed linked to the reversal in direction of A–P migration. In contrast, in the small number of posterior DTCs that only migrate outwards, A–P movement did appear to cease at the normal time of D–V migration ( Supplementary Fig. 3 ), suggesting differences in the control of migration between the two DTCs. Oscillatory gene expression So far, we used fluorescence only to determine cell position. We next tested whether we could also quantify fluorescence intensity as a measure of gene expression dynamics, focusing on the oscillatory expression of molting cycle genes. Dynamic regulation of gene expression is essential for development. A striking example is provided by the molting cycle genes in C. elegans , whose expression peaks once every larval stage 33 . Moreover, recent RNA-sequencing experiments found that many genes exhibited oscillatory expression in phase with the molting cycle 34 , 35 . However, so far, such gene expression oscillations were characterized at the population level, but not in single animals or single cells. Hence, it is not known with what precision the period of the oscillations is controlled and how strongly they are synchronized within the body. First, we characterized the expression dynamics of the molting cycle gene mlt-10 , which is essential for molting and expressed in the hypodermis only during the molt. We studied a transcriptional reporter strain, mgIs49[mlt-10p::GFP-PEST] , used previously to characterize mlt-10 expression dynamics at the population level 33 . We could follow expression dynamics in the mlt-10 reporter for all four larval stages, with a clear pulse in fluorescence intensity observed close to each ecdysis ( Supplementary Video 6 ). 
To study the spatiotemporal mlt-10 expression dynamics, we quantified both the average fluorescence intensity along the A–P axis ( Fig. 4a,b ) and the total fluorescence intensity ( Fig. 4c ) as a function of time. We found that the oscillatory dynamics was uniform, that is, with a phase independent of the A–P position ( Fig. 4a,b ). We observed significant animal-to-animal variability in the exact timing of the mlt-10 expression peak ( Fig. 4d , 28±1 h for the L3 peak). However, mlt-10 expression dynamics was tightly correlated with the subsequent ecdysis ( Fig. 4e , peaking 1.1±0.1 h before L3 ecdysis). Figure 4: Oscillatory gene expression. ( a ) Kymograph of mlt-10 expression along the A–P axis as a function of time in a single mgIs49[mlt-10p::GFP-PEST] animal. Dotted lines represent the position of head and tail, and horizontal dashed lines represent ecdysis. Coloured lines indicate the regions evaluated in b . Scale bar, 100 μm. ( b ) mlt-10 expression oscillations at different A–P positions for the animal in a . ( c ) mlt-10 expression integrated over the entire animal as a function of time for N =15 animals. The mlt-10 expression dynamics (black line) and time of ecdysis (dashed lines) are indicated for the animal in a . ( d , e ) Time distribution of the L3 peak in mlt-10 ( N =15) and wrt-2 ( N =23) expression relative to ( d ) time of hatching and ( e ) time of L3 ecdysis. ( f ) wrt-2 expression oscillations in the posterior-most V2 seam cell in a heIs63[wrt-2p::H2B::GFP, wrt-2p::PH::GFP] animal. Time is hours after hatching. The label indicates whether the cell is the left (L) or right (R) V2 cell. Scale bar, 5 μm. ( g ) Single animal wrt-2 expression oscillations. White markers correspond to the images in f . The black line represents a sliding average with 2 h window size over V1–V5. ( h ) Correlation in wrt-2 expression between the V1 and V2 (red) and V5 (grey) cells. Markers and correlation coefficient R are for N =23 animals over all larval stages. ( i ) wrt-2 expression oscillations in the V2 seam cell lineage in different animals. Lines correspond to sliding average with 2 h window size. The mlt-10 reporter was expressed in many cells. To test whether we could follow expression dynamics with single-cell resolution, we also measured wrt-2 expression dynamics. The gene wrt-2 is expressed exclusively in seam cells 36 and was recently found to exhibit oscillatory gene expression at the population level in the L3–L4 stage 35 . We followed wrt-2 expression in heIs63[wrt-2p::H2B::GFP, wrt-2p::PH::GFP] animals, in which green fluorescent protein (GFP) is targeted both to the seam cell nucleus and membrane 37 . We found that the fluorescence signal was bright enough to visualize expression in seam cells on the side of the animal closest to the objective, but not on the opposite side, likely because of light scattering in the intervening tissue ( Supplementary Fig. 4 ). As animals sometimes flip from one side to the other around the molt 38 , we could not follow single seam cells over the entire course of development. However, focusing only on the seam cells closest to the objective, we could clearly observe oscillations in wrt-2 expression in single seam cells over all four larval stages ( Fig. 4f and Supplementary Video 7 ). To quantify wrt-2 expression, we measured the total nuclear fluorescence intensity in the seam cells V1–V5 as a function of time ( Fig. 4g ).
We found that both the period and phase of wrt-2 oscillations agreed with previous measurements of wrt-2 mRNA dynamics 35 . Moreover, wrt-2 oscillations were strongly correlated even between seam cells such as V1 and V5 that reside in different parts of the body ( Fig. 4h ). Similar to mlt-10 expression oscillations, we observed that, while there existed significant animal-to-animal variability in the exact time of the wrt-2 expression peaks ( Fig. 4d,i , 26±1 h for the L3 peak), the expression peaks were nevertheless precisely timed with respect to the ecdysis ( Fig. 4e , peaking 1.5±0.4 h before L3 ecdysis). In general, for both mlt-10 and wrt-2 we find that, despite clear animal-to-animal variability, the timing of expression peaks was tightly coupled to ecdysis and was strongly correlated between cells at different positions. This suggests that molting cycle gene expression and ecdysis are under strong global control. Discussion Here we describe a technique to perform time-lapse microscopy in moving C. elegans larvae, with single-cell resolution and for the full duration of post-embryonic development. We achieved this by using hydrogel microchambers that confine animal movement to the field of view of the microscope while containing sufficient food for development. Owing to our use of high NA objectives, our approach is comparable in spatial resolution to existing immobilization-based time-lapse microscopy techniques, although the requirement for short exposure times to image moving animals makes it more challenging to combine with confocal microscopy to increase the axial resolution. We can envision several ways in which to expand on the design of our set-up. First, it could be combined naturally with RNA interference by filling microchambers with bacteria that express the desired double stranded RNA 39 . Similarly, the effect of diet on development could be studied at the single-cell level by selecting different bacterial strains as food source 40 . Finally, a separate microfluidic layer on top of the hydrogel chambers could be used to change the local chemical environment 22 in <10 min to deliver environmental cues, for example, dauer pheromone, at precisely timed developmental stages. We used our technique to study three processes that, because of their long duration (30–40 h), had been inaccessible for time-lapse imaging: seam cell divisions, DTC migration and molting cycle gene expression oscillations. We were able to perform detailed analysis of the timing, kinetics and variability in these processes, both in wild-type animals and mutants, an approach that can be extended to many other developmental processes in C. elegans . We typically imaged 10–20 animals per imaging session, even though, in principle, the combination of short (<1 s per animal) acquisition time and 20 min interval between acquisitions allows for hundreds of animals to be imaged simultaneously. This is because the bottleneck was formed by the manual annotation of the animal’s body axis and cells. It should be possible to optimize this by improved automation and image analysis. We believe that our set-up could make long-term time-lapse microscopy a routine tool to study C. elegans post-embryonic development, with the potential to significantly increase our understanding of lineage control, morphogenesis and regulation of gene expression. Methods C. elegans culture and strains All C. elegans strains were cultured at 20 °C on Nematode growth medium (NGM) agar plates with OP50 bacteria. 
Wild-type nematodes were strain N2. The following mutations and integrated transgenic arrays were used: LGIV: mgIs49 [mlt-10p::GFP-PEST] 33 , LGV: wIs51 [SCMp::GFP] 25 , qIs56 [lag-2p::GFP] 28 , heIs63 [wrt-2p::GFP::PH; wrt-2p::GFP::H2B] 37 and LGX: unc-6 (ev400) 31 . The mutant strain MBA48 was a gift from Michalis Barkoulas, Imperial College London, and will be described in detail elsewhere. Microchamber fabrication Microfabricated arrays of chambers were made from a master mold as described in ref. 22 . Master molds were created using standard soft-lithography techniques. Briefly, SU-8 2025 epoxy resin (MicroChem) was first spin-coated on a silicon wafer to form a 20 μm layer. The SU-8 layer was exposed with ultraviolet-light through a foil mask (SELBA S.A.) containing the micropattern ( Supplementary Fig. 5 ). Microchamber dimensions are 250 × 250 × 20 μm for all experiments, unless specified otherwise. To create polyacrylamide microchamber arrays from the master mold, a 10% dilution of 29:1 acrylamide/bis-acrylamide was mixed with 0.1% ammonium persulfate (Sigma) and 0.01% TEMED (Sigma) as polymerization initiators. The resulting aqueous solution was then poured in a cavity placed on top of the micropatterned silicon wafer. The cavity was closed with a silanized coverslip and sealed by mechanical clamping, allowing the solution to polymerize for 2 h. To remove the toxic unpolymerized acrylamide monomers, the resulting polyacrylamide microchamber arrays were washed at least three times for at least 3 h each in distilled water. Fewer or shorter washing steps often resulted in developmental arrest. Microchamber arrays could be stored in distilled water for ∼ 15 days. Single microchamber arrays were placed in M9 buffer for 4 h directly before time-lapse imaging. Sample preparation To prepare the sample, a glass spacer with the same height as the polyacrylamide membrane was attached to a 76 × 26 × 1 mm microscope slide using high vacuum grease (Dow Corning). A single microchamber array was positioned with tweezers on the microscope slide, with the openings of the microchambers facing up. Excess liquid was removed with a tissue. With a pipette, drops of M9 buffer ( ∼ 40 μl in total) were placed on the side and on the surface of the microchamber array, while preventing the liquid from filling the chambers. To load C. elegans embryos into the microchambers we followed the approach in ref. 21 . Under a dissection microscope, a drop of bacterial suspension containing a single late-stage embryo was transferred from a NGM agar plate seeded with OP50 bacteria into a microchamber, using an eyelash attached to a Pasteur pipette. To facilitate the release of the bacteria and embryo into the chamber, the eyelash was dipped briefly into the M9 drop before touching the microchamber. Once the egg was transferred, more bacterial suspension was added to the microchamber using the eyelash, until completely filled. For each experiment, ∼ 10–20 chambers were loaded. Subsequently, tissue paper was used to remove all excess M9 buffer. Finally, a 25 × 75 mm #1 coverslip was lowered on the chambers to seal the sample, slow enough to avoid forming large air bubbles. The sample was placed on a custom fabricated holder and clamped to seal the chambers and avoid liquid evaporation during the duration of the experiment ( Supplementary Fig. 5 ). Microscopy imaging We performed time-lapse imaging on a Nikon Ti-E inverted microscope. 
Using a large chip camera (Hamamatsu sCMOS Orca v2), it was possible to fit single microchambers in the field of view of the camera while using a 40 × magnification objective (Nikon CFI Plan Fluor 40 × , NA=1.3, oil immersion). Transmission imaging was performed using a red LED (CoolLED pE-100 615 nm), while fluorescence images were acquired using a 488 nm laser (Coherent OBIS LS 488–100). The laser beam was expanded from 0.7 to 36 mm through a telescope composed of two achromatic lenses of 10 and 500 mm focal lengths (Thorlabs AC080-010-A-ML and AC508-500-A). The expanded beam was then aligned through additional dielectric mirrors (Thorlabs BB2-E02) to enter the back aperture of the microscope. A tube lens (300 mm focal length, Thorlabs AC508-300-A) was used to focus the beam in the back focal plane of the objective. For fluorescence microscopy, we used a dual-band filter set (Chroma, 59904-ET). An XY -motorized stage (Micro Drive, Mad City Labs) was used to move between chambers, while a piezo Z-stage (Nano Drive 85, Mad City Labs) was used to move the sample in the Z direction. To optimize acquisition speed, we synchronized the camera, laser illumination and stage movement as follows: to operate the rolling-shutter camera in the global exposure mode, the laser beam was switched on (rise time <3 μs) once all the lines on the camera chip were active and switched off once the camera started reading out the chip. In order to rapidly acquire Z-stacks, we synchronized the piezo Z-stage and the camera, so that the stage moved to the new Z position during the 10 ms that the camera read out the chip to its internal memory. The microscope and all its components were controlled with custom software implemented using a National Instruments card (PCIe-6323) installed on a computer with a solid-state drive (Kingston V300-120GB), an Intel Core i7 processor (3.50 GHz) and 16 GB of RAM. By using sufficiently high laser power (80–100 mW), we could use exposure times that were short enough (1–10 ms) that animal movement during acquisition was negligible. Acquiring a single imaging volume, typically consisting of 20 Z-slices in two channels, took <0.4 s. Some animal movement along the A–P axis was observed between Z-slices, particularly in L3–L4 larvae directly after the molt. Each chamber was imaged every 20 min for ∼ 48 h. We found that shorter time intervals sometimes led to larvae arresting in the L1 larval stage. During imaging intervals, we used the Perfect Focus system of the microscope to prevent sample drift. Images were acquired in a temperature-controlled room at 22 °C. Image analysis Custom Python software was used to analyse the acquired images. For all experiments, we used the transmitted light images to record the time of hatching and ecdysis as well as the body length and position of the gonad (L2–L3) or vulva (L3–L4). We obtained the body axis by manually selecting 10–20 points on the animal’s centre line and subsequently fitting a spline curve r(s) to these points, with s being the arc length along the spline curve, that is, along the A–P axis. The body length was then given by the total arc length of r(s). We measured the position of the gonad or vulva, as markers of the ventral side of the body axis, to establish the left–right orientation of the animal’s body. For all cells of interest, we manually obtained their position in the coordinate system of the camera chip.
To obtain the cell position in the body’s coordinate system, we calculated s , the position along the A–P axis, and t , the position along the D–V axis, as follows ( Supplementary Fig. 6 ): the A–P position was given by the arc length s that minimized the distance between the cell position and the spline curve r(s). Next, the magnitude of the D–V position t was given by this minimal distance, and the sign of t was defined so that t <0 for the ventral side. This coordinate transformation was also used to create the computationally straightened images of animals in Figs 2 , 3 , 4 . All image and data analysis software are available at . Seam cells The seam cells were identified by their position along the A–P axis. Initiation of cell division could be observed by the loss of nuclear fluorescence due to nuclear envelope breakdown. The division time was given by the first appearance of two smaller daughter nuclei at the old position of the mother cell. In the seam cell mutant MBA48, we defined a lineage error as the first point at which the (sub-)lineage deviated from the wild-type lineage. For instance, if an additional seam cell was erroneously generated, we did not score for errors in the sublineage produced by that seam cell. To achieve that, we calculated the probability P ( l , s ) of a lineage error occurring in seam cell lineage l at larval stage s as follows. For each seam cell i , for example, V2L.pp, in animal w , we assign a division class d i =0,1,2 for no division, symmetric division and asymmetric division, respectively. We did so for all seam cells in wild-type (WT) and mutant (M) animals. The error probability is then given by the fraction of mismatched division classes, P ( l , s ) = 1 − ⟨ δ d i WT , d i M ⟩ , with the average running over all mutant animals w and all seam cells i in S w , where δ n , m is the Kronecker delta and S w is the list of seam cells that are present in both the wild-type lineage and the mutant animal w . Distal tip cells For both DTCs we calculated their position ( s , t ) along the A–P and D–V axes. To correct for small movements between Z-slices, we measured the A–P displacement Δ s of anatomical markers such as the pharyngeal bulbs, vulva and anus between the two Z-slices that contained a DTC. We then corrected the A–P positions of the DTCs by subtracting the measured offset Δ s . However, this correction for the animal’s A–P movement resulted only in minor quantitative changes to the DTC position data. For the DTC analysis, s =0 corresponded either to the midbody (L1–L3), defined as the A–P position exactly between the posterior pharyngeal bulb and the anus, or the A–P position of the invagination of the vulva (L3–L4), with these two measures coinciding in most animals. To calculate the DTC velocities v A–P and v D–V , we first applied a sliding average with 1 h window size to the measured ( s , t ) trajectories and then calculated the velocities as the time derivative of the averaged ( s , t ) trajectories ( Supplementary Fig. 3 ). Molting cycle genes For mlt-10 expression, the mean fluorescence intensity as a function of A–P position s was obtained by averaging the fluorescence over a D–V window of | t |<60 μm. The expression dynamics at different A–P positions was determined by integrating the fluorescence intensity over a region of 5% of body length, centred at 25, 50 and 75% of body length. To measure wrt-2 expression in single seam cells, we manually labelled the nuclei of the V1–V5 seam cells on the side closest to the objective. As the size and shape of the nuclei changed over time, we used an image segmentation algorithm (Otsu’s method) on a 5 μm × 5 μm region around each nucleus to obtain a mask of the nucleus.
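Two steps of this pipeline lend themselves to compact sketches: the projection of camera coordinates onto body-axis coordinates (s, t), and the Otsu-based nucleus masking just described. The following assumes SciPy and scikit-image; the function names, the sampling density and the ventral_is_left orientation flag (which the caller would set from the gonad/vulva marker) are our illustrative choices, not the published analysis code.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from skimage.filters import threshold_otsu

def body_axis(points, n_samples=1000):
    """Fit a spline r(s) through manually clicked centre-line points
    (head to tail, shape (n, 2)) and return sampled positions plus arc lengths s."""
    tck, _ = splprep([points[:, 0], points[:, 1]], s=0)
    xy = np.array(splev(np.linspace(0, 1, n_samples), tck)).T
    ds = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(ds)])      # body length = s[-1]
    return xy, s

def to_body_coords(cell, xy, s, ventral_is_left=True):
    """Project a cell position onto (s, t): s is the arc length of the
    closest point on r(s); |t| is the distance to it; t < 0 ventrally."""
    d = np.linalg.norm(xy - cell, axis=1)
    k = int(np.argmin(d))                           # closest sampled axis point
    tangent = xy[min(k + 1, len(xy) - 1)] - xy[max(k - 1, 0)]
    offset = cell - xy[k]
    side = np.sign(tangent[0] * offset[1] - tangent[1] * offset[0])
    t = d[k] * (-side if ventral_is_left else side)
    return s[k], t

def nucleus_mask(img, center_px, half_px=15):
    """Otsu mask in a ~5 um x 5 um box around a labelled nucleus: at
    6.5 um camera pixels and 40x magnification (0.1625 um per px),
    5 um is about 31 px."""
    r, c = center_px
    box = img[r - half_px:r + half_px + 1, c - half_px:c + half_px + 1]
    return box, box > threshold_otsu(box)

# e.g. box, mask = nucleus_mask(frame, (r0, c0)); box[mask].mean() then
# gives the mean nuclear fluorescence used for the wrt-2 time series.
```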
We then computed the mean fluorescence intensity of the pixels within this mask. To detect the time of peaks in mlt-10 and wrt-2 gene expression, we applied a Gaussian filter with a width of 1 h to each fluorescence intensity time series and obtained for each larval stage the time at which the averaged time series exhibited its maximum. Data availability The data that support the findings of this study are available from the corresponding author upon request. All analysis software is available at . Additional information How to cite this article: Gritti, N. et al. Long-term time-lapse microscopy of C. elegans post-embryonic development. Nat. Commun. 7:12500 doi: 10.1038/ncomms12500 (2016). | Researchers from FOM institute AMOLF have developed a microscopy technique for the live tracking of development in the individual cells of a growing, eating and moving organism, the worm Caenorhabditis elegans. The next step is to find out how environmental factors affect development. The researchers published their findings in Nature Communications on August 25, 2016. In humans and animals, the period immediately following birth is all about growth and development. Until now, however, little was known about the details of this process at the cellular level. The new technique makes this process accessible by making it possible to track the entire process of development from the larva to the adult animal, at the level of individual cells. From larva to adult worm An important element of the new microscopy technique is the use of time-lapse photography to speed up effects that take place very slowly. Group leader Jeroen van Zon explains: "When using time-lapse photography to study post-embryonic development, the mobility of organisms poses a real challenge. This makes it difficult for us to keep them in the right spot under the microscope at this stage of development." The worm C. elegans proved to be the right organism for this purpose. This animal is very small and completes its development from larva to adult worm in just two days. Van Zon and his team placed C. elegans larvae in microchambers, no wider than the diameter of a human hair, which kept them within the microscope's field of view while they completed their development into adult worms. Using a microscope camera with a very fast shutter speed (1 to 10 milliseconds), the researchers were able to capture sharp images, even when the larvae were moving quickly. The result is a time-lapse film of a newborn larva – just one hundred micrometers long – developing into a one-millimeter-long adult worm. Biological clock This new microscopy technique can be used to expand our knowledge of post-embryonic development in complex organisms, including humans. There is, for instance, the still unanswered question of how the body knows when to initiate major changes, such as the shedding of milk teeth. We still know very little about the biological clock that determines the timing of these processes. Development in C. elegans, as in humans, unfolds in accordance with a certain rhythm. The researchers observed how the larvae produced certain proteins - made fluorescent for the purposes of the experiment - every ten hours. The production of these proteins is controlled by a biological clock. It is the first time that this process has been captured on film. The role of the environment in development Having developed the technology to track development at the level of individual cells, in a growing and moving organism, it is time to take the next step.
Jeroen van Zon wants to use the time-lapse microscopy technique to find out how environmental factors, such as diet, influence C. elegans' developmental clock. "As in humans, this interaction is governed by signaling proteins, such as insulin," says Van Zon. "We want to explore this further, and this new technique has enormous potential in this regard!" 'Time-lapse' film of the C. elegans worms' biological clock. | 10.1038/NCOMMS12500 |
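Returning briefly to the Methods above, the molting-cycle expression-peak detection can be sketched in a few lines of Python. The sketch assumes traces sampled every 20 min and treats the stated 1 h filter width as the Gaussian sigma (the text does not say whether width means sigma or full-width); all names are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def expression_peak_times(t_hours, intensity, stage_bounds, dt_hours=1/3):
    """Return, for each larval stage (t0, t1), the time at which the
    1 h Gaussian-smoothed fluorescence trace is maximal."""
    smooth = gaussian_filter1d(intensity, sigma=1.0 / dt_hours)  # sigma in samples
    peaks = []
    for t0, t1 in stage_bounds:              # e.g. hatching/ecdysis times
        in_stage = (t_hours >= t0) & (t_hours < t1)
        peaks.append(t_hours[in_stage][np.argmax(smooth[in_stage])])
    return peaks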
Biology | Student gives possible explanation for female mating preferences that decrease male survival chances | Pavitra Muralidhar. Mating preferences of selfish sex chromosomes, Nature (2019). DOI: 10.1038/s41586-019-1271-7 Journal information: Nature | http://dx.doi.org/10.1038/s41586-019-1271-7 | https://phys.org/news/2019-06-student-explanation-female-decrease-male.html | Abstract The evolution of female mating preferences for harmful male traits is a central paradox of sexual selection 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 . Two dominant explanations for this paradox 8 , 10 are Fisher’s runaway process, which is based on genetic correlations between preference and trait 1 , 3 , 4 , and Zahavi’s handicap principle, in which the trait is an honest costly signal of male quality 2 , 6 , 8 , 11 . However, both of these explanations require the exogenous initial spread of female preferences before harmful male traits can evolve 1 , 2 , 3 , 4 , 6 , 8 , 11 . Here I present a mechanism for the evolution of female mating preferences for harmful male traits that is based on the selfish evolutionary interests of sex chromosomes. I demonstrate that female-biased genetic elements—such as the W and X sex chromosomes—will evolve mating preferences for males who display traits that reduce their fitness and/or that of their male offspring, but increase fitness in female offspring. In particular, W-linked preferences can cause nearly lethal male traits to sweep to fixation. Sex-linked preferences can drive the evolution of traits such as ornamental handicaps and male parental care, and can explain variation in ornamentation and behaviour across taxa with divergent sex-determining mechanisms. Main Female mating preferences should evolve to maximize total offspring fitness 7 . Intra-genomic conflict complicates this picture, because females can carry multiple genetic elements that have sex-biased transmission 12 , 13 . This is clearest for the W chromosome in female-heterogametic (ZW) species, such as birds: autosomes spend as many generations in males as in females, but the W chromosome is only ever carried by females 12 , 13 , 14 . A preference encoded on the W chromosome should therefore evolve to maximize the total fitness of daughters, with no regard for the fitness of sons (to whom it is not transmitted). Traits that increase the fitness of daughters at the expense of the fitness of fathers or sons can take many forms. One major category is sexually antagonistic traits, which increase fitness in one sex but reduce it in the other 13 , 15 . Such traits are common in natural populations 16 , 17 , 18 , 19 . Usually, to avoid elimination by natural selection, a sexually antagonistic trait must either confer a fitness advantage when averaged across the sexes or be sex-linked 13 , 15 . In previously studied scenarios, these conditions limit the fitness cost that can be imposed on the sex for which the trait is deleterious; here, I show that this is not true when mating preferences for sexually antagonistic traits are encoded on a sex chromosome. Previous theoretical work has separately considered the roles of sexual antagonism 9 , 20 , sex linkage 21 , sex determination 22 , 23 and reinforcing female preferences 5 , 20 in sexual selection. However, to my knowledge, no previous model has examined the co-evolution of sex-linked female preferences for autosomal, sexually antagonistic traits. 
To examine this process, I considered a two-locus population genetic model of a ZW species, with an autosomal 'trait' locus and a W-linked 'preference' locus (for full details, see Methods). Z-linked and X-linked preferences are discussed below. In this model, two alleles segregate at the trait locus: the wild-type allele (t) and the mutant allele (T), which increases female viability (by a factor 1 + s_f for TT homozygotes and 1 + h_T s_f for Tt heterozygotes) but reduces male viability (by 1 − s_m for TT and 1 − h_T s_m for Tt). s_f is the strength of the viability advantage of the T allele in females; s_m is the strength of its viability disadvantage in males. h_T is the dominance of the T allele with respect to the t allele. The alleles mutate from one to the other at a symmetrical rate u per replication. I assume that s_m > s_f, so that T is selected against in the absence of other forces. Two alleles segregate at the W-linked preference locus: the wild-type allele p and the mutant allele P, the bearers of which (always female) have a greater propensity to mate with trait-expressing males (by a factor α > 1 for TT males and α^{h_T} for Tt males, where α is the strength of the preference). Here I assume h_T = 1/2 (co-dominance), although the qualitative features of the results do not depend on this assumption (see Extended Data Fig. 1). It can be proven (Supplementary Information) that the P allele increases in frequency as long as the trait locus is polymorphic. Therefore, the P allele will fix if there is a source of persistent trait polymorphism, such as recurrent mutation or migration from a population with reduced selection against the trait (Fig. 1). This positive selection arises indirectly. The P allele generates a positive genetic correlation between itself and the T allele by inducing its bearers to preferentially mate with males that bear the T allele. Because the T allele increases fitness in females (and the P allele is present only in females), this positive association causes the frequency of the P allele to rise. Fig. 1: Evolution of W-linked preferences for sexually antagonistic traits. Long-run frequencies of the W-linked P allele and the autosomal sexually antagonistic T allele after 5 × 10^6 generations, each having started at 1% frequency. The T and t alleles mutate from one to the other at a rate of 10^−3 per replication. a, When the P allele induces no preference (α = 1), the sexually antagonistic T allele reaches high frequency only when it increases viability on average across the sexes (that is, when (1 + s_f)(1 − s_m) > 1). b, Even when the preference encoded by the P allele is weak (α = 1.05), the P allele is positively selected for, and fixes in a large region of the parameter space in which the sex-averaged viability effect of the T allele is negative (that is, where (1 + s_f)(1 − s_m) < 1). Fixation of the P allele pushes the T allele to high frequency over a small region of parameter space, in which the cost of the trait to males (s_m) is not too large compared to the benefit of the trait to females (s_f). c, For slightly higher strengths of the preference encoded by the P allele (α = 1.5), the allele always fixes and the T allele attains high frequency in regions of parameter space where male costs are very high. d, When the preference is strong (α = 7.5), the T allele attains high frequency even when it is nearly lethal in homozygous male bearers, imposing an 80% survival cost on them.
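The recursion behind this model and Fig. 1 can be sketched numerically. The following deterministic Python sketch follows the stated life cycle (viability selection, then mate choice by fixed relative preferences, then reproduction) for an infinite population; the function name, and applying mutation at gamete transmission, are our simplifications, and the paper's Methods remain authoritative. With Fig. 1d-like parameters (s_m = 0.8, s_f = 0.1, α = 7.5) and enough generations (the paper used 5 × 10^6), both alleles should approach high frequency.

import numpy as np

# Trait-locus genotypes are indexed 0, 1, 2 = tt, Tt, TT throughout.
def w_linked_model(s_f=0.1, s_m=0.8, alpha=7.5, h_T=0.5, u=1e-3,
                   p0=0.01, t0=0.01, generations=200_000):
    """Deterministic recursion for the W-linked preference model; returns
    the final frequencies of the P allele (among W chromosomes) and the
    T allele (among autosomal copies)."""
    w_male = np.array([1.0, 1.0 - h_T * s_m, 1.0 - s_m])   # male viabilities
    w_female = np.array([1.0, 1.0 + h_T * s_f, 1.0 + s_f]) # female viabilities
    pref = np.array([1.0, alpha ** h_T, alpha])  # P females' relative preferences
    dose = np.array([0.0, 0.5, 1.0])             # P(transmit T | genotype), no mutation
    hw = lambda q: np.array([(1 - q) ** 2, 2 * q * (1 - q), q ** 2])
    f = np.outer([1.0 - p0, p0], hw(t0))  # females: rows (p, P) x trait genotype
    m = hw(t0)                            # males: trait genotype frequencies
    q_mat = np.array([u, 0.5, 1.0 - u])   # maternal T transmission, with mutation
    for _ in range(generations):
        f = f * w_female; f /= f.sum()    # viability selection
        m = m * w_male;   m /= m.sum()
        new_f, new_m = np.zeros_like(f), np.zeros(3)
        for a, mate in enumerate([m, pref * m]):   # a=0: p females, a=1: P females
            mate = mate / mate.sum()               # fixed relative preferences
            q_pat = (mate @ dose) * (1 - u) + (1 - mate @ dose) * u
            for g in range(3):
                off = np.array([(1 - q_mat[g]) * (1 - q_pat),
                                q_mat[g] + q_pat - 2 * q_mat[g] * q_pat,
                                q_mat[g] * q_pat])   # offspring trait genotypes
                new_f[a] += f[a, g] * off   # daughters inherit the mother's W allele
                new_m += f[a, g] * off      # sons have the same autosomal distribution
        f, m = new_f / new_f.sum(), new_m / new_m.sum()
    return f[1].sum(), ((f.sum(0) + m) @ dose) / 2.0

# p_freq, t_freq = w_linked_model()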
The strength of positive selection acting on the P allele depends on several factors. For example, it increases with the strength of the preference induced by the P allele, and with the fitness advantage conferred by the T allele in females. To investigate the strength of selection in favour of the P allele, I compared the strengths observed in several configurations of the model to those observed in the standard two-locus autosomal model of Fisherian sexual selection 4. In this model, selection for low-frequency W-linked preferences is consistently stronger—often by orders of magnitude—than selection for analogous autosomal preferences, even when the latter start at the high frequencies required for the trait to spread (Supplementary Information). Selection on the T allele depends on its cost to males, its benefit to females and the proportion of females that carry the P allele. If the strength of the preference is sufficiently large (α ≳ 1/[(1 − s_m)(1 + s_f)], Supplementary Information), selection favours the T allele for frequencies of the P allele above a certain threshold. Because the P allele inevitably rises to fixation, this threshold is eventually exceeded and the T allele spreads. The resultant equilibrium is one in which many males exhibit a trait that severely impairs their survival, and all females exhibit a strong mating preference for these low-viability males (Fig. 1, Extended Data Fig. 2b). This can occur even for traits that are nearly lethal to males but that confer only a small advantage to females (Fig. 1d). If α ≲ 1/[(1 − s_m)(1 + s_f)] instead, the T allele remains at low frequency, even after the P allele has fixed. In this equilibrium, all females prefer low-viability males despite these males being nearly absent from the population (Fig. 1, Extended Data Fig. 2a). In this model, the spread of the harmful male trait does not require initial neutral drift of—or exogenous selection for—the mutant preference, unlike in analogous two-locus models of Fisher's runaway process 4, 8 and Zahavi's handicap model 6, 8, 11. By extension, preferences that impose fitness costs on females (for example, by reducing their probability of finding a mate) can invade from low frequency in this model, unlike in comparable major-effect runaway and handicap models (which are very sensitive to costs of female preferences 8). One way to resolve sexual antagonism is to restrict the expression of a trait to the sex it benefits 13, 15, 19. Counterintuitively, this is not necessarily the expected outcome for sexually antagonistic traits when they are subject to sex-linked mating preferences. For instance, the presence at high frequency of the W-linked P allele can select against modifiers that restrict expression of the T allele to females, because female-specific expression, although it increases the viability of males that bear the T allele, also decreases their mating success. Sex-linked preferences can thus impede the evolution of sex-specific expression and, by extension, sexual dimorphism 15. I have thus far limited the discussion to classical sexually antagonistic traits. However, the model applies more generally to three categories of costly male-specific traits: those that (i) increase the fitness of daughters; (ii) have no effect on the fitness of daughters; or (iii) act as an indicator of 'good genes' (for example, classical handicap traits).
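The elaboration of categories (i) to (iii) continues below. First, as a brief aside, the threshold condition quoted above is easy to check numerically; the function name is ours and the condition itself is the approximate one stated in the text.

def preference_threshold(s_m, s_f):
    """Approximate preference strength above which the T allele spreads
    once P is common: alpha >~ 1 / [(1 - s_m)(1 + s_f)]."""
    return 1.0 / ((1.0 - s_m) * (1.0 + s_f))

# For the nearly lethal trait of Fig. 1d (s_m = 0.8) with a modest female
# benefit (s_f = 0.1), the threshold is ~4.55, which the strong preference
# alpha = 7.5 comfortably exceeds.
print(preference_threshold(0.8, 0.1))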
For category (i), costly male traits that increase offspring fitness are functionally identical to sexually antagonistic traits in my model. Such traits include male parental care 24, which is more common in ZW than XY species 25. For category (ii), W-linked preferences for traits with no effect on females (s_f = 0) but large costs in males (s_m ≫ 0) evolve neutrally. Such preferences can therefore drift to high frequency, which could possibly drive the evolution of exaggerated male-specific phenotypes that have previously been assumed to be the result of Fisherian runaway processes 2, 6, 7. For category (iii), if a male-specific handicap signals intrinsic sex-independent quality 2, 6, then a W-linked mating preference for handicapped males is favoured irrespective of the costs of the handicap, because daughters enjoy higher quality without suffering the handicap 22. An analogous autosomal preference is transmitted to sons half of the time, so the higher quality of the offspring of its bearers must be offset by fitness costs in their handicapped sons. If the handicap is too costly, an autosomal mating preference for it will not spread—although a W-linked preference will. The handicap then signals a 'sexually antagonistic genome': good in females (because of the high quality it imparts) but bad in sons (because of the severe cost of the handicap). Formal modelling of this process (Supplementary Information) reveals: (i) that the W-linked preference is always favoured under the standard 'Spence condition' 6, 26 that the viability cost of the handicap is proportionally lower in higher-quality males; (ii) that more stringent conditions are required for the analogous autosomal preference to be favoured; and (iii) that the handicap must be heritable for these differences to hold. In the above model, the selfish W-linked P allele can drive to high frequency a trait that severely impairs male survival. This might create selection for autosomal suppression of the preference encoded by the P allele. To study this possibility, I considered an augmented model with a third locus that is autosomal but is not linked to the trait locus. At this locus, there segregates a mutant allele S that suppresses the effect of the P allele, such that its female bearers are indiscriminate in mate choice (see Methods). Simulations reveal that the S allele invades only when the strength of the preference that it suppresses is weak, and when the trait carries a high net cost (Extended Data Figs. 3, 4). Thus, strong W-linked preferences appear to be robust to suppression. Sex-specific chromosomes (the W or Y chromosomes) are often stereotyped as degraded and gene-poor, which would seem to diminish the possibility of their carrying preference genes. However, although the sex-specific chromosomes of therian mammals and neognath birds are indeed gene-poor, in other clades the sex-specific chromosome can vary widely in size and gene content 14, 27. In addition, sex-specific chromosomes usually contain a non-degraded 'pseudo-autosomal region' that recombines in the heterogametic sex 28. Simulations reveal that preferences similar to those modelled above can fix in the pseudo-autosomal region, although only if they arise close to the border between this region and the sex-determining region (Extended Data Fig. 5). The logic articulated above for the W chromosome applies to other genetic elements with exclusive or predominantly maternal transmission.
These include mitochondria and other cytoplasmic factors 12 , 13 , intracellular parasites such as Wolbachia 29 as well as microbiota, which often show vertical maternal transmission 30 and are known to influence behaviour—including mate choice—in a number of taxa 31 . Although the W chromosome is sex-specific, the Z and X chromosomes are only partially sex-biased, as they are borne twice as often by one sex (males for the Z chromosome and females for the X chromosome). These transmission biases—together with recent discoveries of X- and Z-linked genes that influence mate choice ( Supplementary Information )—raise the possibility that the Z and X chromosomes can shape the evolution of preferences for sexually antagonistic traits; the Z chromosome for male-beneficial, female-costly traits and the X chromosome for male-costly, female-beneficial traits. The evolution of X- and Z-linked preferences for costly male-limited traits has previously been considered 21 . Modifying the model for X- and Z-linked preferences (Methods), I find that—in both cases—preference and trait alleles can co-evolve to high frequency (Fig. 2 ). This effect is stronger for the Z chromosome, despite the ‘biases’ of the X and Z chromosomes being symmetric. To understand this, consider sex-chromosome transmission from ZW and XX females to offspring. A Z-linked allele that encodes a mating preference for a male-beneficial trait is passed on by a mother only to her sons, and thus gains an immediate advantage. By contrast, an X-linked preference allele is transmitted equally to sons and daughters, and thus immediately experiences both the cost and benefit of the trait. In fact, the pedigree transmission profiles of X- and Z-linked preference alleles, starting in females, are symmetric, except for the initial sons-only generation of the Z-linked allele ( Supplementary Information ), which explains why Z-linked mating preferences for sexually antagonistic traits evolve more readily. As expected, the effect is weaker for both the Z and X chromosomes than for the W chromosome (Fig. 2 ). Fig. 2: Relative propensities of the W, Z and X chromosomes to evolve female mating preferences for males that exhibit sexually antagonistic traits. a , b , The preference strength is α = 5 in all cases. Each line is a frontier between a parameter region in which the preference ( a ) or trait ( b ) allele attains high frequency (region to the left of the line), and a parameter region in which it does not (region to the right of the line). The frontier for autosomes (labelled A) is displayed for reference. Note that Z-linked preferences are for male-beneficial, female-costly traits (contrary to the W- and X-linked preferences), so the axes are reversed for the Z chromosome. W-linked preferences for males displaying female-beneficial, male-costly traits fix for any degree of sexual antagonism. Z-linked preferences for males displaying male-beneficial, female-costly traits fix even with substantially female-costly traits, although the parameter range over which they fix is smaller than for W-linked preferences. X-linked preferences for female-beneficial, male-costly traits fix only when the degree of sexual antagonism is relatively small, although they nonetheless fix in regions in which autosomal preferences cannot. Note that Z- and X-linked preferences (unlike W-linked preferences) fix only when they also drive their preferred traits to high frequency. 
I have considered a population in which mate choice is practised exclusively by females, but the model also applies to male mate choice, which recent work has suggested is more common than has previously been recognized 32. To investigate the empirical possibility of sex-linked preferences, I collected a list of known genomic locations of mate-preference genes (Supplementary Information). Sex chromosomes are substantially over-enriched for preference genes across a variety of heterogametic species. Sex-specific chromosomes do not feature prominently, probably because they are highly degenerate in the majority of species in the list. Indeed, one of the major goals of the theoretical work presented here is to point genomic research on mate preferences towards species with gene-rich sex-specific chromosomes. The model described here predicts different outcomes for XY and ZW systems when mate choice is practised predominantly by females. In ZW species, the female-specific W chromosome is a very strong attractor of preferences for male-costly, female-beneficial traits, whereas the male-biased Z chromosome attracts preferences for male-beneficial, female-costly traits. By contrast, XY species have no female-specific chromosome and the X chromosome attracts preferences more weakly than does the Z chromosome (Fig. 2). Therefore, ZW species are particularly prone to the evolution of sex-linked preferences for sexually antagonistic traits. This is consistent with the phylogenetic association between ZW heterogamety and greater male ornamentation in vertebrates 23, although this relationship is ambiguous within some clades 33. Further comparative research—especially in clades with rapid heterogametic transitions—would be useful in clarifying this relationship 14. Methods In all versions of the model considered here, the population is assumed to be infinite, with non-overlapping generations in which the order of events is: viability selection, mating, reproduction and death, followed by viability selection among the offspring, and so on. The organism is diploid with heterogametic sex determination. Mendelian segregation operates among all loci. The mate choice model is one of fixed relative preferences 4, 20. In general, if there are n types of male (each expressing a different degree of some trait) in proportions p_1, p_2, …, p_n at the time of mating (after viability selection), and a given female has relative preference strengths α_1, α_2, …, α_n over the male types, then the probability that her next mate is of type i is α_i p_i / Σ_{k=1}^{n} α_k p_k. If this female is of type j among m female types (each expressing a different set of preferences over the male types), with female types in proportions q_1, q_2, …, q_m after viability selection, then the fraction of all mating events in the population that are between type j females and type i males is q_j α_i p_i / Σ_{k=1}^{n} α_k p_k. In the case of W-linked preferences for autosomal traits, at the W-linked preference locus there segregate the wild-type p allele and the mutant P allele, while at the autosomal trait locus there segregate the wild-type t allele and the mutant T allele.
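The fixed-relative-preference rule just defined translates directly into code. A minimal sketch, with alpha generalized to an (m × n) matrix holding each female type's preference vector; the function name is ours.

import numpy as np

def mating_fractions(q, p, alpha):
    """Return the (m x n) matrix of mating fractions
    q_j * alpha_ji * p_i / sum_k alpha_jk * p_k."""
    w = alpha * p                                   # alpha_jk * p_k, per female type
    next_mate = w / w.sum(axis=1, keepdims=True)    # per-type next-mate probabilities
    return q[:, None] * next_mate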
The T allele encodes a trait that is costly in males but beneficial in females: tt males and females have a baseline relative viability of 1; Tt males and females have viabilities of 1 − h_T s_m and 1 + h_T s_f, respectively; and TT males and females have viabilities of 1 − s_m and 1 + s_f, respectively. A female bearing the p allele has equal preferences over the three male genotypes, whereas a female bearing the P allele has relative preferences 1, α^{h_T} and α over the male genotypes tt, Tt and TT, respectively. The results discussed in the main text (Figs. 1, 2) assume h_T = 1/2; results for h_T = 0 and h_T = 1 are given in Extended Data Fig. 1. The justification for the specific form of the relative preference of the females bearing the P allele for Tt males (α^{h_T}) is as follows: when h_T = 0, such that the T allele is recessive and the trait is not expressed by Tt males, a female that bears the P allele cannot distinguish tt and Tt males—her relative preference for Tt males should therefore be 1 (α^0). When h_T = 1, such that the T allele is dominant, the female cannot distinguish between Tt and TT males; her relative preference for Tt males should therefore be α (α^1). Finally, in the case of exactly intermediate dominance of the T allele (h_T = 1/2), the preference of a female that bears the P allele for TT males over Tt males should equal the strength of her preference for Tt males over tt males; this requires that her relative preference for Tt males be √α (that is, α^{1/2}). A similar logic governs the choice of intermediate relative preferences in the case of Z-linked and X-linked preferences. In the case of X-linked preferences, the viability effects of the T and t alleles in males and females are as for the case of W-linked preferences described above. The dominance of the P allele in females is denoted by h_P: pp females have equal preferences for the three male genotypes tt, Tt and TT; Pp females have relative preferences 1, α^{h_T h_P} and α^{h_P}; and PP females have relative preferences 1, α^{h_T} and α. The results discussed in the main text (Fig. 2) assume h_T = h_P = 1/2; the results for other possibilities are displayed in Extended Data Fig. 1. For Z-linked preferences, the mutant T allele encodes a trait that is beneficial in males but costly in females: tt males and females have baseline relative viability 1; Tt males and females have viabilities 1 + h_T s_m and 1 − h_T s_f, respectively; and TT males and females have viabilities 1 + s_m and 1 − s_f, respectively. The Z-linked mutant P allele encodes a mating preference for males that bear the T allele in the same way as the W-linked preference described above. Finally, for the case in which the preference locus is pseudo-autosomal in a ZW system, the viability effects of the T allele and the preference effects of the P allele (now at a diploid locus) are as in the case of X-linked preferences, and the preference locus recombines with the sex-determining locus in a fraction r of gametes. In the simulations, the results of which are displayed in Figs. 1, 2 and Extended Data Fig. 1, the population starts off with initial low frequencies of the mutant P and T alleles (1% each), with the loci in Hardy–Weinberg equilibrium when diploid, and in linkage equilibrium with each other.
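The genotype-dependent preference vectors defined in these Methods can be collected into one small helper (name ours); setting h_P = 1 recovers the haploid W-linked, Z-linked or homozygous PP case.

def relative_preferences(alpha, h_T, h_P=1.0):
    """Relative preferences of a P-bearing female over male trait
    genotypes (tt, Tt, TT)."""
    return [1.0, alpha ** (h_T * h_P), alpha ** h_P]

# h_T = 0: Tt males are indistinguishable from tt -> [1, 1, alpha]
# h_T = 1: Tt males are indistinguishable from TT -> [1, alpha, alpha]
# h_T = 0.5: the preference for Tt is the geometric mean, sqrt(alpha)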
I assume that the two alleles at the trait locus mutate from one to the other at a symmetrical rate of u = 10^−3 per replication; there is no mutation at the preference locus (see Supplementary Information for a discussion of the effects of different mutation rates). From this starting configuration in each case, the population model was simulated for 5 × 10^6 generations (Figs. 1, 2) or 10^6 generations (Extended Data Fig. 1), and the final frequencies of the mutant P and T alleles recorded. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability No datasets were generated or analysed in this study. Code availability Simulation code is available upon request. | Pavitra Muralidhar, a Ph.D. student in the Department of Organismic and Evolutionary Biology at Harvard University, has developed a theory to explain why females of some species are more attracted to some males who have a lesser chance of survival. In her paper published in the journal Nature, she outlines her theory of selfish sex chromosomes and how it might work in nature. Mark Kirkpatrick, with the University of Texas, has published a News and Views piece in the same journal issue, outlining the work done by Muralidhar. Prior research has shown that females of some species are more attracted to males who possess certain impressive features, such as bright plumage. But such research has also shown that males who sport brighter plumage are more likely to be seen and eaten by a predator—so why do the females prefer them? Scientists have been unable to answer this question. But Kirkpatrick notes that there are two main theories. The first is called direct selection, and it is what it sounds like—genes that affect mating preferences are direct targets of selection. An example might be a male that is good at helping tend a nest, and is therefore highly prized by females looking for a mate. The other theory, quite naturally, is called indirect selection, and it involves females choosing mates who have one trait, but get another trait as part of the deal. A female who chooses a male based on plumage, for example, might wind up with a male that also has a good immune system, which is responsible for his great plumage. Unfortunately, neither of these theories explains why a female would go for a male that is clearly less likely to survive; doing so will decrease her male offspring's chances of survival. Muralidhar suggests this phenomenon is due to what she describes as "selfish sex chromosomes." Muralidhar's theory is based on the indirect selection theory, and also sexually antagonistic selection, in which a gene benefits one gender, but could mean harm to the other. She also considers the possibility of an exception in situations where one gender has a pair of the same chromosomes and the other has two that are different. She suggests that in cases in which a gene involved in mating is located on the chromosome that is present in only one gender, it can result in only offspring of the same gender carrying that gene, which could benefit them, but it could also bring harm to the offspring of the other gender. She has also carried out a mathematical analysis of her ideas to prove her theory and carried out a study of 36 species, half of which aligned with her "selfish sex chromosome" hypothesis expectations. | 10.1038/s41586-019-1271-7
Medicine | Rapid agent restores pleasure-seeking ahead of other antidepressant action | Anti-anhedonic effect of ketamine and its neural correlates in treatment-resistant bipolar depression. Lally N, Nugent AC, Luckenbaugh DA, Ameli R, Roiser JP, Zarate CA. Transl Psychiatry. 2014 Oct 14;4:e469. DOI: 10.1038/tp.2014.105 . PMID: 25313512 Journal information: Translational Psychiatry | http://dx.doi.org/10.1038/tp.2014.105 | https://medicalxpress.com/news/2014-10-rapid-agent-pleasure-seeking-antidepressant-action.html | Abstract Anhedonia—which is defined as diminished pleasure from, or interest in, previously rewarding activities—is one of two cardinal symptoms of a major depressive episode. However, evidence suggests that standard treatments for depression do little to alleviate the symptoms of anhedonia and may cause reward blunting. Indeed, no therapeutics are currently approved for the treatment of anhedonia. Notably, over half of patients diagnosed with bipolar disorder experience significant levels of anhedonia during a depressive episode. Recent research into novel and rapid-acting therapeutics for depression, particularly the noncompetitive N -Methyl-D-aspartate receptor antagonist ketamine, has highlighted the role of the glutamatergic system in the treatment of depression; however, it is unknown whether ketamine specifically improves anhedonic symptoms. The present study used a randomized, placebo-controlled, double-blind crossover design to examine whether a single ketamine infusion could reduce anhedonia levels in 36 patients with treatment-resistant bipolar depression. The study also used positron emission tomography imaging in a subset of patients to explore the neurobiological mechanisms underpinning ketamine’s anti-anhedonic effects. We found that ketamine rapidly reduced the levels of anhedonia. Furthermore, this reduction occurred independently from reductions in general depressive symptoms. Anti-anhedonic effects were specifically related to increased glucose metabolism in the dorsal anterior cingulate cortex and putamen. Our study emphasizes the importance of the glutamatergic system in treatment-refractory bipolar depression, particularly in the treatment of symptoms such as anhedonia. Introduction Over half of patients diagnosed with bipolar disorder (BD) suffer from significant levels of anhedonia, 1 defined as loss of enjoyment in, or desire to engage in, previously pleasurable activities. 2 Notably, anhedonic patients with affective disorders have a poorer treatment prognosis than their non-anhedonic counterparts. 3 , 4 , 5 Indeed, accumulating evidence suggests that standard treatments for depression do little to alleviate anhedonia 6 and may even cause reward and emotional blunting, 7 , 8 , 9 sexual anhedonia 10 and anorgasmia. 11 , 12 Furthermore, the presence of anhedonia in a major depressive episode (MDE) is a predictor of proximal suicide completion. 13 Critically, no US Food and Drug Administration-approved treatment currently exists specifically for anhedonia. The Diagnostic and Statistical Manual-5 (ref. 14 ) identifies anhedonia as one of two cardinal symptoms in the diagnosis of an MDE in both major depressive disorder (MDD) and BD. Anhedonia can be subdivided into consummatory (subjective pleasure, for example, enjoying food) and motivational components (anticipation of and drive towards rewarding stimuli, for example, planning and looking forward to a vacation) that have distinct biological bases. 
15 Indeed, research suggests that currently depressed MDD and BD patients may possess a substantial deficit in motivational, but not consummatory, reward behaviors. 16 , 17 Studies using the sweet taste test—a task that mirrors preclinical assessments of consummatory anhedonia in rodents—found that patients with MDD demonstrated the same preference for sucrose water concentrations as healthy volunteers. 18 , 19 Furthermore, Sherdell et al. 20 found that MDD patients experienced the same levels of pleasure as healthy volunteers while viewing humorous cartoons in a computer task, but were not willing to exert as much effort to gain access to these stimuli; the results suggest intact consummatory processes, but attenuated motivational ones. In another study, Etain et al. 21 found no evidence for consummatory anhedonia in BD patients. Although research pertaining to BD patients in particular is lacking, overall, the extant evidence suggests that anhedonia in depression is primarily associated with a deficit in non-consummatory reward behaviors. Understanding the neural pathways that mediate anticipatory pleasure is thus a critical step towards successful treatment of anhedonia. Dopaminergic signaling has been consistently correlated with the anticipation, motivation and learning related to pleasurable stimuli, but not to their consumption. 22 , 23 , 24 Phasic bursts in dopaminergic neurons in the ventral tegmental area (VTA) have reliably been shown to co-occur with violations in reward expectancy, 25 underscoring the evidence for dopaminergic signalling in reward learning. Furthermore, dopamine signalling in the nucleus accumbens, an area of dense dopaminergic projections from the VTA, has been strongly associated with reward motivation in rodents. 26 Functional neuroimaging in humans indicates that structures such as the VTA, 27 , 28 substantia nigra, 28 amygdala, 28 putamen, 28 caudate, 28 ventral striatum, 29 , 30 and orbitofrontal cortex 28 , 31 —all of which receive innervation from or project to dopaminergic nuclei—are recruited during reward anticipation. Despite the abnormalities in motivational behaviors seen in affective disorders, there is a dearth of robust evidence pertaining to any direct dopaminergic signaling deficit in patients with depression. 32 The strongest indirect evidence for dopaminergic dysfunction in depression comes from pharmacological treatment studies. For instance, the dopamine D2 receptor agonist pramipexole improved levels of depression in both MDD 33 and BD 34 , 35 patients after several weeks of daily administration. A study of Parkinson’s disease patients with co-occurring depression used the Snaith–Hamilton Pleasure Scale (SHAPS 36 ), and found that pramipexole treatment decreased anticipatory anhedonia levels by 25% across the entire sample; 37 however, it is unknown whether this reduction occurred as a function of improvement in mood, Parkinson’s symptomatology or hedonic capacity. It is presently unclear whether anhedonic symptoms in depression improve faster with dopaminergic-enhancing medications than with standard treatments. Furthermore, because improvements in self-reported levels of anhedonia are reportedly the last symptom to improve with selective serotonin reuptake inhibitors, 38 there is a critical unmet need for rapid-acting treatments for anhedonia. 
The fact that standard antidepressants lack any proven anti-anhedonic efficacy, particularly in conjunction with the deleterious side effects associated with these agents, requires novel pharmacotherapeutic approaches. The noncompetitive N -Methyl-D-aspartate (NMDA) receptor antagonist ketamine has shown remarkable consistency in rapidly ameliorating depressive symptoms in both MDD 39 , 40 , 41 , 42 , 43 and BD. 44 , 45 However, it is unknown whether ketamine also possesses any specific anti-anhedonic efficacy. Given the likely mechanistic heterogeneity of depression, it is critical to understand the specific targets of treatment response at both the clinical and neural levels, as outlined in the research domain criteria. 46 Ketamine acts directly on the glutamatergic system, which appears to be critical in depression; 47 , 48 however, little is known about the specificity of the relationship between commonly occurring symptoms during an MDE (for instance, anhedonia, anxiety) and particular biological phenotypes. In a small sample investigation, Walter et al. 49 found that MDD patients with high levels of anhedonia had lower levels of glutamine, but not glutamate, than healthy controls, but only a trend towards lower glutamine than MDD patients with low levels of anhedonia, who did not differ from controls. Preclinical evidence suggests that blockade of astrocytic glutamate reuptake in rodents can induce anhedonia-like behaviors, 50 particularly in the prefrontal cortex. 51 In addition, ketamine 52 , 53 and other NMDA antagonists (for example, MK-801 54 and memantine 55 ) have been shown to reverse anhedonic phenotypes in rodents. However, preclinical evidence typically classifies anhedonia as a decrease in consummatory behavior; although anticipatory and consummatory behaviors likely interact, extrapolating from rodent behaviors to clinical patient symptoms is not straightforward. Sub-anesthetic doses of ketamine acutely increase glutamatergic and dopaminergic signaling in the prefrontal cortex of rats. 56 Given the apparent role of dopaminergic and glutamatergic signaling in mediating anhedonia and the reported pharmacological effects in both rodents and humans, ketamine may be ideally suited to specifically ameliorate anticipatory anhedonia in currently depressed patients. This randomized, double-blind, placebo-controlled, crossover study assessed the anti-anhedonic efficacy of ketamine in treatment-resistant BD patients currently experiencing an MDE. Regional neural metabolism following both placebo and ketamine infusions were also measured in a subsample of these patients using [ 18 F] fluorodeoxyglucose (FDG) positron emission tomography (PET). 18 FDG-PET measures glucose metabolism, which is primarily determined by glial uptake of glucose in response to glutamate release from neurons, and principally reflects glutamatergic neurotransmitter release and cycling. 57 It is important to note that approximately all glucose entering the central nervous system is transformed into glutamate. 58 Thus, 18 FDG-PET provides an indirect quantitative measure of cerebral glutamate metabolism throughout the entire brain. The effects of ketamine on general depressive symptoms in the majority of BD subjects presented here (33/36, 92%) were previously reported, 44 , 45 together with neurobiological correlates of mood improvement derived from 18 FDG-PET. 
59 The present study specifically explores the previously unaddressed issue of ketamine’s anti-anhedonic effects and its neural correlates in this sample. Materials and methods Participants Subjects aged 18–65 were recruited through local and national media. The sample comprised 36 (21F) treatment-refractory individuals with BD I or II without psychotic features who were currently experiencing an MDE (see Table 1 ). All subjects were inpatients at the National Institutes of Health (NIH), Bethesda, MD, USA. A Montgomery–Åsberg Depression Rating Scale (MADRS) 60 score ⩾ 20 at the time of screening was an inclusion criterion. All subjects were required to be currently experiencing an MDE lasting at least 4 weeks and to have failed to respond to at least one adequate antidepressant trial before hospital admission, as assessed by the Antidepressant Treatment History Form. 61 Diagnosis was confirmed using the Structured Clinical Interview for Axis I Diagnostic and Statistical Manual-IV Disorders-Patient Version. 62 Primary psychiatric categorization, including comorbid Axis I disorders, was corroborated via evaluation by three clinicians using all available clinical information. During their time as inpatients at the NIH, and before study entry, participants had also not responded to a prospective open-label trial of a mood stabilizer (either lithium or valproic acid) administered at treatment levels (serum lithium, 0.6–1.2 mEq l −1 ; or valproic acid, 50–125 μg ml −1 ). Table 1 Demographic information for BD patients and the subgroup that underwent 18 FDG-PET imaging Full size table All patients were in good physical health as determined by medical examination, medical history, chest x-ray, blood laboratory tests, urinary analysis and toxicology. Exclusion criteria, as outlined previously, 44 , 45 included pregnancy, nursing, serious suicidal ideation, comorbid substance abuse or dependence within the past 3 months, and previous use or treatment with ketamine. The co-occurrence of Axis I anxiety disorders was permitted if this was not the primary cause of illness within the previous 12-month period. All participants continued treatment with a mood stabilizer (either lithium or valproic acid) administered at therapeutic levels during the course of the study, although this was not adequate to alleviate depressive symptoms. Aside from monotherapy with a mood stabilizer, no psychotropic medication or psychotherapy was permitted in the 2 weeks before study randomization (5 weeks for fluoxetine) or during the 4-week study period. Written informed consent was obtained from all participants, and the NIH Combined Neuroscience Institute Review Board approved the study. Design The study was designed as a randomized, double-blind, placebo-controlled, crossover study to assess the antidepressant efficacy of ketamine in patients with treatment-refractory BD. All participants received one intravenous infusion of ketamine hydrochloride, administered at a subanesthetic dose of 0.5 mg kg −1 , and one infusion of placebo (0.9% saline solution) in a randomized order over a 4-week study period, and with 2 weeks between each infusion. A MADRS score ⩾ 20 on the morning of each infusion was required for study continuation. An anesthesiologist administered each infusion over 40 min using a Baxter infusion pump (Deerfield, IL, USA). The two solutions and the appearance of the drug syringe were identical and all study team members were blind to the drug condition. 
Clinical ratings were acquired 60 min before the infusion and thereafter at 40, 80, 120, 230 min, and days 1, 2, 3, 7, 10 and 14 following each infusion. The primary outcome variable for antidepressant efficacy was the MADRS 44 , 45 score. However, the main symptom of interest for this report is anhedonia, and levels of anticipatory anhedonia were evaluated using the SHAPS. The SHAPS is a 14-item, self-administered, user friendly and state-sensitive psychometric scale that overcomes limitations associated with other scales that specifically assess anhedonia, such as length and cultural idiosyncrasies. Importantly, the SHAPS has been validated in a number of independent samples since its publication. 63 , 64 , 65 The SHAPS was scored on a scale of one to four, with higher scores indicating greater anhedonia (range 14–56) and was administered with reference to either the past 24 h or the time between the present and the previous rating. The presence or absence of anhedonia was judged on the basis of the original scoring guidelines (range 0–14), 36 where a score >3 indicated clinically significant anhedonia (see Table 1 ). Other secondary outcome variables were also assessed and have been reported previously. 44 , 45 Positron emission tomography acquisition and analysis In addition, 21 of the 36 patients (see Table 1 ) underwent two resting-state 18 FDG-PET imaging scans, which began 2 h post infusion and ended (brain emission scan) ~1.5 h later. Immediately before both scans, patients completed psychometric rating scales; identical procedures were followed for both scans. PET imaging was performed on a GE Advance PET scanner (GE Medical Systems, Waukesha, WI, USA) in three-dimensional mode (35 contiguous slices, 4.25 mm plane separation) following an intravenous infusion of 4.5 mCi 18 FDG over 2 min. According to the method used by Brooks, 66 quantitative images of regional cerebral metabolic rate for glucose metabolism (rCMRGlu) were calculated using a cardiac input function derived from a dynamic left ventricular scan collected before both the brain emission scan and venous sampling (reconstructed resolution=6 mm full-width at half-maximum in all planes). Magnetic resonance imaging (MRI) images were acquired on a 3-Tesla scanner (Signa, GE Medical Systems) using a three-dimensional MPRAGE sequence (echo time=2.982 ms, repetition time=7.5 ms, inversion time=725 ms, voxel size=0.9 × 0.9 × 1.2 mm) to allow anatomic localization of PET activity. PET analyses comprised both a region of interest approach (ROI) and a whole-brain investigation. The ROI analysis pipeline used here has been previously described. 59 Briefly, the Analysis of Functional NeuroImages (AFNI; Bethesda, MD, USA) function 3dSkullStrip was used to remove non-brain tissue from the anatomical MRI image. These images were segmented into gray and white matter, and cerebrospinal fluid (separate binary mask images were computed for each component) using the FSL (Oxford, UK) automated segmentation tool. Anatomical images were spatially normalized to the Montreal Neurological Institute 152 template. ROIs (ventral striatum and orbitofrontal cortex) were selected on the basis of extant literature implicating these neuroanatomical structures in depression 67 , 68 , 69 , 70 and reward anticipation. 16 , 31 The ventral striatum ROI comprised only the nucleus accumbens and olfactory tubercle. 
Template-defined ROIs were transferred to the individual anatomical MRIs; to accommodate interindividual anatomical variation, ROI placement was adjusted per subject. ROIs were transferred back to the native MRI space, multiplied by a binary gray matter mask and applied to the rCMRGlu PET images. Mean glucose metabolism rate values, normalized by total gray matter, were then calculated. For the whole-brain investigation, 18 FDG-PET images were preprocessed and analyzed using Statistical Parametric Mapping software (SPM; Wellcome Trust Centre for Neuroimaging, London, UK) version 5 within the MATLAB (MathWorks, Natick, MA, USA) environment. Post-placebo and post-ketamine rCMRGlu images were separately co-registered to the anatomical image; the anatomical image was then normalized to Montreal Neurological Institute space and this transformation was applied to the co-registered PET images. A Gaussian smoothing kernel (8 mm full-width at half-maximum) was applied to the PET images. To create difference images for each individual, PET images were first normalized by the global mean (as calculated in SPM) and the post-placebo image was subtracted from the post-ketamine image. A binary mask was applied to all whole-brain investigations to limit the number of multiple comparisons to only intracerebral voxels. Statistical analyses Symptom rating scale analysis included all available data and was conducted using IBM SPSS (Armonk, NY, USA; version 21). Linear mixed models, using a heterogeneous first-order autoregressive covariance structure and restricted maximum-likelihood estimation, were performed to assess the effects of ketamine versus placebo on SHAPS scores over the 4-week period in this crossover design. Fixed main effects of time and drug and their interaction were included along with a random effect for subject. To correct for baseline levels of anhedonia, the SHAPS score at baseline on each infusion day (time point −60) was entered as a covariate into the model. In addition, a linear mixed model with total MADRS score (at each individual time point) entered as a covariate was conducted to evaluate whether ketamine infusion was associated with a change in SHAPS score independent of the effect on other depressive symptoms. MADRS item 8 (inability to feel) was removed from the total MADRS score for this analysis due to the strong conceptual overlap between this item and the SHAPS. Post hoc Bonferroni-corrected comparisons were conducted for each model for all post-baseline time points to determine the specific timing of the anti-anhedonic effects. All significance values were two-tailed, with a significance threshold of P < 0.05. The time point of 230 min was selected a priori for both the ROI and whole-brain analyses on the basis of three factors: previous studies indicating that 230 min is a sensitive time point for detecting antidepressant effects of ketamine; 44, 45 lack of psychotomimetic effects at this time point; and the proximity to the time of the PET scan. Relationships between glucose metabolism in our ROIs (post-placebo and post-ketamine and their difference) and SHAPS score were investigated using Pearson product-moment correlation coefficients. First, percentage improvement in SHAPS score at 230 min post infusion ((post-ketamine 230 − post-placebo 230)/post-placebo 230) was correlated with difference in mean ROI rCMRGlu metabolism (post-ketamine − post-placebo). The second and third correlation analyses are described after the following sketch of the mixed-model specification.
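A random-intercept approximation of the linear mixed model specified above, in Python/statsmodels. This is a sketch rather than a reproduction: statsmodels' MixedLM does not expose the heterogeneous first-order autoregressive residual covariance used in SPSS, and the data-frame column names are assumptions.

import pandas as pd
import statsmodels.formula.api as smf

def fit_shaps_model(df: pd.DataFrame):
    """df columns (assumed): subject, drug ('ketamine'/'placebo'),
    time (rating point, treated as categorical), shaps, and shaps_baseline
    (the -60 min score on each infusion day, entered as a covariate)."""
    model = smf.mixedlm("shaps ~ C(drug) * C(time) + shaps_baseline",
                        data=df, groups=df["subject"])
    return model.fit(reml=True)   # restricted maximum-likelihood estimation

# result = fit_shaps_model(ratings); print(result.summary())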
Second, because we previously demonstrated an association between ventral striatum metabolism changes and overall depression score change following ketamine, 59 we conducted a multiple linear regression analysis to parse the variance associated with anhedonia and total depression score and explore which variable predicted change in ventral striatum metabolism. Finally, as a control analysis, we also assessed whether state-dependent anhedonia levels, as measured by raw SHAPS score, were associated with ROI rCMRGlu, both variables post-ketamine and post-placebo. Complementing the ROI analysis, the whole-brain investigation comprised the following multiple regression analyses. First, percent improvement on the SHAPS at 230 min (as above) was regressed onto the difference images (post-placebo–post-ketamine). Second, to assess the specificity of the results to anhedonia, and not depressive symptoms per se, we recomputed the whole-brain analyses with percentage change anhedonia scores orthogonalized to the corresponding percentage change in MADRS score (minus item 8) using the SPM spm_orth function within the MATLAB environment, entering a single regressor (SHAPS score orthogonalized to total MADRS score minus item 8). Orthogonalization of one variable against another, in this instance the SHAPS against the MADRS score, results in the removal of shared variance; the output variable represents the residual SHAPS score when the variance associated with the MADRS has been accounted for. Here, we report only those analyses that survived stringent Gaussian random field theory cluster correction for multiple comparisons at P <0.05, or at trend level at P <0.1, with an initial voxel-level threshold of P <0.05 uncorrected. In the case of extremely large clusters, the uncorrected threshold was increased so that all surviving clusters were <5000 contiguous voxels. For clusters surviving correction or trending, the three strongest peaks ( t statistics) are reported along with the corresponding Montreal Neurological Institute coordinates. Results Subjects Patient demographic details are presented in Table 1 . One patient was excluded from the PET analyses due to a failure to measure the cardiac input function, and another subject was excluded because there was no SHAPS scale score measurement at 230 min post-ketamine and post-placebo infusions. Behavioral response Main effects of drug ( F (1,110) =24.71, P <0.001) and time ( F (9,194) =2.69, P =0.006) were observed, as was a trend towards an interaction between these two variables ( F (9,218) =1.90, P =0.053); this indicates that ketamine caused a greater reduction in levels of anhedonia across time than placebo, ( Figure 1a ). Bonferonni-corrected post hoc comparisons demonstrated that ketamine, compared with placebo, significantly decreased levels of anhedonia at multiple times throughout the 14-day period following a single ketamine infusion ( Figure 1a ). Figure 1 Anti-anhedonic effect of ketamine and corresponding regression analyses with cerebral glucose metabolism. ( a ) Snaith–Hamilton Pleasure Scale (SHAPS) estimated scores from linear mixed model 1 (M1) indicating a significant reduction in anhedonia levels following ketamine (red) compared with placebo (blue). 
(b) Model 2 (M2) is the same as model 1 (M1) but has total depression score (as assessed by the Montgomery–Åsberg Depression Rating Scale (MADRS) minus item 8) entered as a covariate and still reveals a main effect of drug, thus underscoring the unique anti-anhedonic effect of ketamine administration. Asterisks indicate Bonferroni-corrected comparisons at P < 0.05 for both a and b. (c) Region of interest analysis with ventral striatum (VS) demonstrating a significant association between anti-anhedonic response to ketamine and increased glucose metabolism in the VS. Changes in anhedonia levels no longer significantly predicted VS change when change in overall depressive symptoms was controlled for. (d, e) Whole-brain corrected relationship between the anti-anhedonic effects of ketamine and (d) dorsal anterior cingulate cortex (dACC) and cerebellum, and (e) right putamen, VS and medial posterior orbitofrontal cortex increases in glucose metabolism. (f) Whole-brain corrected relationship between SHAPS score orthogonalized against MADRS score, indicating that a significant increase in dACC metabolism was associated specifically with the anti-anhedonic response to ketamine, independent of overall change in depressive symptoms. Error bars represent standard errors. PET images are presented (d and e, P < 0.025 uncorrected; f, P < 0.05 uncorrected) such that only clusters surviving family-wise error correction are shown. Color bars indicate positive t-values associated with increasing glucose metabolism. ket, ketamine; pla, placebo; rCMRGlu, regional cerebral metabolic rate for glucose metabolism. Consistent with previous reports, 1 levels of anhedonia (as measured by the SHAPS) and depressive symptoms (as measured by total MADRS score) were positively correlated (r(36) = 0.53, P < 0.001) at the first pre-infusion baseline. Critically, the main effect of drug on anhedonia levels was found when levels of depression (MADRS total score minus item 8 at each time point) were entered as a covariate in the model (F(1,123) = 7.713, P = 0.006), indicating that anhedonia levels in BD respond to ketamine treatment over and above its effects on other depressive symptoms. Neither the main effect of time (F(9,176) = 1.42, P = 0.18) nor the drug-by-time interaction (F(9,219) = 1.49, P = 0.15) was significant for this model. However, post hoc exploratory simple-effects tests (Bonferroni-corrected) revealed that the anti-anhedonic effects of ketamine were significant at days 1, 3, 7 and 14 following ketamine infusion (Figure 1b). This suggests that for some BD patients, ketamine may have specific benefits in reducing anhedonia levels, and that these benefits can last up to 2 weeks following a single infusion. A subsequent model included mood stabilizer as an additional covariate; a trend towards significance was observed, with a greater anti-anhedonic response associated with lithium than with valproate (F(1,57) = 3.57, P = 0.06). No interaction effect was noted between drug (ketamine or placebo) and mood stabilizer (F(1,114) = 0.08, P = 0.78), nor was there a three-way interaction between drug, mood stabilizer and time (F(9,225) = 0.41, P = 0.93).
Given our previous findings that a positive family history of alcohol use disorder (first-degree relative) was associated with an enhanced response to ketamine, 71, 72 we examined whether this factor was also associated with the specific anti-anhedonic response demonstrated above; baseline SHAPS score and total depression score were entered as covariates into a subsequent model. There was a significant interaction between drug and family history of alcohol use disorder (F(1,109)=4.12, P=0.045), but no main effect of family history of alcohol use disorder (F(1,51)=0.27, P=0.60), no interaction between this variable and time (F(9,163)=0.94, P=0.49), and no three-way interaction between family history, drug and time (F(9,205)=0.60, P=0.79). The significant interaction between drug and family history of an alcohol use disorder indicated, in comparison with placebo, a specific enhanced anti-anhedonic response to ketamine in those with (F(1,116)=11.13, P=0.001), but not in those without (F(1,111)=1.22, P=0.27), a family history of an alcohol use disorder. In addition, we explored whether a personal history of alcohol abuse, dependence or illicit substance abuse contributed significantly to the specific anti-anhedonic effect of ketamine. No significant main effects or interactions with drug, time, or drug and time were found. PET: ROI analyses Ketamine-induced change in ventral striatum rCMRGlu was significantly related to percent change in SHAPS score at 230 min post infusion (r(19)=−0.52, P=0.02; Figure 1c). Relative to placebo, individuals with the largest increase in glucose metabolism in the ventral striatum tended to have the greatest anti-anhedonic response to ketamine. However, orbitofrontal cortex rCMRGlu was not significantly related to the anti-anhedonic response to ketamine (r(19)=−0.37, P=0.12). Because we had previously demonstrated a relationship between ventral striatum change in glucose metabolism and improvement in MADRS score following ketamine, 59 we tested whether this relationship was specific to anhedonia. Multiple regression, including both SHAPS and MADRS (minus item 8), indicated that change in MADRS (t=−2.18, P=0.045), but not SHAPS (t=−0.05, P=0.96), significantly predicted change in ventral striatum rCMRGlu (a minimal sketch of this regression is given below). Examining each session separately, we found no significant correlations between absolute SHAPS scores and ventral striatum or orbitofrontal cortex rCMRGlu, respectively, neither post placebo (r(19)=0.03, P=0.91; r(19)=0.10, P=0.68) nor post ketamine (r(20)=0.06, P=0.81; r(20)=0.06, P=0.81). PET: Whole-brain analyses Due to the presence of a large cluster, the initial uncorrected statistical threshold was made more stringent (P<0.025 rather than P<0.05) for the initial difference image contrast only. Whole-brain analyses revealed a significant relationship between percent improvement (decrease) in SHAPS scores and rCMRGlu increases in the dorsal anterior cingulate cortex (dACC; [x=−6, y=40, z=43]; t(17)=4.39, Pcorr=0.016; Figure 1d). Furthermore, we found a significant increase in cerebellar rCMRGlu ([x=−40, y=−48, z=−58]; t(17)=3.94, Pcorr=0.019; Table 2), and a trend towards an increase in striatal rCMRGlu ([x=14, y=4, z=10]; t(17)=4.28, Pcorr=0.051; Figure 1e), in relation to the percent amelioration in SHAPS scores. Importantly, the striatal cluster extended from the caudate to the putamen and into the ventral striatum (Table 2), corroborating our ROI analysis.
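As a minimal sketch of the variance-parsing regression referenced above, the following Python snippet regresses change in ventral striatum rCMRGlu on change in both SHAPS and MADRS (minus item 8), so that each coefficient reflects that predictor's unique contribution; the numbers are simulated and purely illustrative.

    # Illustrative only: multiple regression on simulated data, not the
    # actual patient measurements (n was approximately 20 in the PET arm).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 20
    d_madrs = rng.normal(-30.0, 15.0, n)                # % change in MADRS (minus item 8)
    d_shaps = 0.6 * d_madrs + rng.normal(0.0, 10.0, n)  # correlated % change in SHAPS
    d_vs = -0.02 * d_madrs + rng.normal(0.0, 0.5, n)    # change in VS rCMRGlu

    X = sm.add_constant(np.column_stack([d_shaps, d_madrs]))
    fit = sm.OLS(d_vs, X).fit()
    # t statistics and P values for the intercept, SHAPS, and MADRS terms.
    print(fit.tvalues, fit.pvalues)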
The inverse contrast (the relationship between decreases in rCMRGlu and change in SHAPS score) did not yield any whole-brain corrected results. Table 2 18 FDG-PET imaging results Because our ROI results indicated that metabolic increases in the ventral striatum appear to be driven by change in total depression score but not by levels of anhedonia, we conducted a subsequent whole-brain analysis. In this model, change in SHAPS score was orthogonalized to change in MADRS score, both at 230 min, and this orthogonalized change in SHAPS score was regressed onto the rCMRGlu difference image. This regression revealed that the specific changes in anhedonia levels following ketamine—which were not related to general changes in depressive symptoms—were in fact associated with increased dACC metabolism ([x=−8, y=40, z=28]; t(17)=4.89, Pcorr=0.027; Figure 1f) that extended into the pregenual cingulate/callosal region and the right dorsolateral prefrontal cortex. In addition, a trend was noted towards significantly increased metabolism in the fusiform gyrus ([x=34, y=−28, z=−20]; t(17)=3.74, Pcorr=0.065) that extended into the claustrum and the putamen, but not the ventral striatum. Finally, we assessed whether change in rCMRGlu was associated with longer-term SHAPS response by correlating the difference in dACC (peak voxel; ketamine minus placebo) metabolism with change in SHAPS scores at day 14. There was no significant relationship between change in dACC glucose metabolism and the magnitude of the change in SHAPS score at day 14 (r(18)=0.11, P=0.66). Discussion Several notable findings emerged from this study investigating the effects of the rapid-acting antidepressant ketamine on anhedonia in currently depressed treatment-resistant BD patients. Foremost among these findings is that ketamine, compared with placebo, rapidly reduced levels of anhedonia in these patients; this reduction occurred within 40 min of a single ketamine infusion and lasted up to 14 days. Furthermore, we found that the anti-anhedonic effects of ketamine remained significant even when controlling for the level of depressive symptoms, suggesting that ketamine has a unique role in ameliorating anhedonia independent of other depressive symptoms. This study also used 18 FDG-PET to examine a subgroup of these patients and quantify the rCMRGlu correlates of change in anhedonia levels associated with ketamine treatment. Our PET results demonstrated that the neurobiology of this specific anti-anhedonic effect was mediated in part by increases in glucose metabolism in the dACC, and tentatively the putamen, but not the ventral striatum as originally hypothesized. These results are particularly important from a public health perspective. No approved treatments for anhedonia currently exist despite its prevalence across multiple psychiatric disorders. Thus, ketamine's rapid effects (within 1 h) on anhedonia levels are a crucial clinical finding. Currently available standard treatments for depression, such as selective serotonin reuptake inhibitors, have few positive effects on levels of anhedonia in MDD patients and, indeed, occasionally have deleterious effects; 6, 7, 8, 9, 10, 11, 73 no research to date has assessed the effects of mood stabilizers on levels of anhedonia in individuals with BD. Our results lend credence to the notion that similar compounds (for example, other NMDA receptor antagonists) should be explored for their clinical relevance in treating this debilitating symptom of depression.
The results further suggest that more typically anhedonic subtypes of depression (both MDD and BD)—for instance, melancholic depression—may be particularly suited to treatment with ketamine and its analogs. Given our previous finding 59 that improvement in total depression score on the MADRS was associated with increased ventral striatum rCMRGlu following ketamine, we hypothesized that this region would also be strongly related to anhedonia and its amelioration. Unexpectedly, we found that ventral striatal glucose metabolism was not associated with relative changes in anhedonia levels after controlling for levels of depression. Possible reasons for this result include the severity of illness in these treatment-resistant BD subjects, the underlying biology of the ventral striatum or the distinct symptom assessed by the SHAPS in comparison with the MADRS. Alleviating depressive symptoms is intensely relieving, and thus also rewarding, for treatment-refractory patients. Intriguingly, opioid receptors in the medial accumbens shell are believed to be solely responsible for ‘liking’ or consummatory pleasure behaviors, 74 whereas motivational hedonic behaviors are thought to arise from a wide array of receptors within the accumbens—the primary structure in the ventral striatum—as well as other neural structures; 18 FDG-PET cannot currently differentiate these structures. Tentatively, we hypothesize that patients showing the greatest antidepressant response to ketamine, as measured by the MADRS, may be experiencing high levels of pleasure, a construct not assessed by the SHAPS. Further investigations are needed to disentangle specific symptoms from the systems level antidepressant mechanisms of action of ketamine in humans. However, our study did find that individuals with the largest increase in glucose metabolism (post-ketamine–post-placebo) in the dACC and the putamen had the greatest clinical reduction in anhedonia levels; notably, this was true with and without controlling for total depression score. Findings from both human and animal studies indicate that both the dACC and putamen are highly involved in reward processing, learning and decision-making. 75 , 76 , 77 , 78 Shidara and Richmond 75 demonstrated that reward expectancy, or motivation, was highly correlated with single neuron signals in the monkey ACC (area 24c), a proximal region to the dACC rCMRGlu changes found here. Intriguingly, the dACC has also been strongly linked with the anticipation of rewarding events in humans. 79 The dACC rCMRGlu increases seen in the present study may reflect changes in the motivation and anticipation of, or ability to anticipate, pleasurable events; this is reflected in items of the SHAPS (‘I would enjoy a…’). Deficits in the ability to imagine future events, particularly positive ones, have been reliably identified in MDD patients experiencing an MDE; 80 , 81 thus, attenuation of depressive symptoms may be accompanied by an improved ability to anticipate pleasure. In a functional MRI study in humans, O’Doherty et al. 28 found that anticipating a primary reward (glucose) elicited heightened activity in the right putamen (as found here) compared with anticipating a punishment (salt water). Keedwell et al. 
82 found that anhedonia levels, as measured by the Fawcett–Clark Pleasure Scale, 83 were positively correlated with activity in the right putamen during an emotional face-processing task that required MDD patients to compare sad and neutral faces; this result was not observed with depression or anxiety scales, though it should be noted that the Fawcett–Clark Pleasure Scale indiscriminately measures both consummatory and anticipatory anhedonia. Taken together, the results of the imaging portion of this study indicate that decreased anticipatory anhedonia is associated with increased glucose metabolism in the dACC and putamen, and that this in turn may reflect increased motivation towards, or ability to anticipate, pleasurable experiences. Another interesting finding was that, after the ketamine infusion, individuals taking lithium experienced greater anti-anhedonic effects than those receiving valproate when the antidepressant effect was controlled for. This result could be interpreted in two ways: either valproate caused a deficit in the anti-anhedonic effect of ketamine, or lithium enhanced the anti-anhedonic effect of ketamine. A recent preclinical report by Liu et al. 84 found that a single subanesthetic dose of ketamine in conjunction with lithium resulted in enhanced plasticity and antidepressant-like effects (increased struggling time in the forced swim test) in rodents compared with a single dose of ketamine plus vehicle. In light of this result, we suggest that lithium may enhance the anti-anhedonic effects of ketamine in individuals with BD. Future studies in patients both taking and not taking mood stabilizers will need to be conducted to properly evaluate this possibility. Ketamine acts primarily by blocking the glutamatergic NMDA receptor. It may also upregulate α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor throughput. 85 Although glutamate is the major excitatory neurotransmitter, its role in anhedonia has yet to be fully explored. Preliminary evidence suggests that glutamate may have an integral role in both anhedonia and depression. 47, 50 Ketamine is also a partial agonist of the dopamine D2 receptor 86, 87 and has been found to increase dopamine levels in the striatum, including the caudate and the putamen. 88 Intriguingly, Meyer et al. 89 found that MDD patients with co-occurring motor retardation symptoms exhibited lower extracellular dopamine in the putamen compared with healthy volunteers. Taken together, these findings suggest that the glutamatergic system, and its downstream modulation of dopaminergic activity, may be one potential route of the anti-anhedonic efficacy of ketamine. The present work has several limitations that should be noted. First, the lack of a baseline PET image meant that we could not exclude carryover effects in this crossover design. Second, the experiential difference between receiving ketamine and saline can be dramatic, and subjects likely realized which infusion they were receiving in each session, potentially invalidating the placebo arm of this study (however, see the recent article by Murrough et al. 42). Third, all participants continued to receive either lithium or valproate, which may have masked or enhanced the effect of ketamine. It is unknown whether the rapid-acting anti-anhedonic effects of ketamine would also occur in unmedicated BD patients. Finally, we do not know how applicable these findings may be to MDD patients.
Thus, we recommend that future studies attempting to replicate and extend the promising findings presented here incorporate additional PET or functional imaging scans to evaluate changes from baseline; study medication-free patients; and examine both MDD and BD patients. Furthermore, exploring the precise underlying reward processing improvements found here, for example, via cognitive testing, is essential to our ability to appropriately characterize the anti-anhedonic effects of ketamine. Due to its prevalence in psychiatric and neurological illnesses, particularly in schizophrenia, Parkinson’s disease, drug addiction and both mood and anxiety disorders, anhedonia has been suggested as a tractable endophenotype. As outlined in the Research Domain Criteria, identifying symptom-based etiologies and their treatment may help to clarify specific biological mechanisms mediating mental illnesses, help rectify classification and diagnostic issues currently inherent in psychiatry and provide sustainable stepping stones toward treatment improvements in mental health. In sum, this study demonstrated that ketamine exerts rapid-acting anti-anhedonic effects in treatment-refractory BD patients. These anti-anhedonic effects remained even when the variance associated with depression score changes was removed, suggesting that ketamine ameliorates anhedonia independent of its already considerable antidepressant effects. Furthermore, these anti-anhedonic effects appeared to be mediated by increased glucose metabolism in the dACC and putamen. Our results underscore the putative utility of NMDA receptor antagonists to treat all facets of depression. | A drug being studied as a fast-acting mood-lifter restored pleasure-seeking behavior independent of—and ahead of—its other antidepressant effects, in a National Institutes of Health trial. Within 40 minutes after a single infusion of ketamine, treatment-resistant depressed bipolar disorder patients experienced a reversal of a key symptom—loss of interest in pleasurable activities—which lasted up to 14 days. Brain scans traced the agent's action to boosted activity in areas at the front and deep in the right hemisphere of the brain. "Our findings help to deconstruct what has traditionally been lumped together as depression," explained Carlos Zarate, M.D., of the NIH's National Institute of Mental Health. "We break out a component that responds uniquely to a treatment that works through different brain systems than conventional antidepressants—and link that response to different circuitry than other depression symptoms." This approach is consistent with the NIMH's Research Domain Criteria project, which calls for the study of functions – such as the ability to seek out and experience rewards – and their related brain systems that may identify subgroups of patients in one or multiple disorder categories. Zarate and colleagues reported on their findings Oct. 14, 2014 in the journal Translational Psychiatry. Although it's considered one of two cardinal symptoms of both depression and bipolar disorder, effective treatments have been lacking for loss of the ability to look forward to pleasurable activities, or anhedonia. Long used as an anesthetic and sometimes club drug, ketamine and its mechanism of action have lately been the focus of research into a potential new class of rapid-acting antidepressants that can lift mood within hours instead of weeks.
Based on their previous studies, NIMH researchers expected ketamine's therapeutic action against anhedonia would be traceable—like that for other depression symptoms—to effects on a mid-brain area linked to reward-seeking and that it would follow a similar pattern and time course. To find out, the researchers infused the drug or a placebo into 36 patients in the depressive phase of bipolar disorder. They then detected any resultant mood changes using rating scales for anhedonia and depression. By isolating scores on anhedonia items from scores on other depression symptom items, the researchers discovered that ketamine was triggering a strong anti-anhedonia effect sooner than—and independent of—the other effects. Levels of anhedonia plummeted within 40 minutes in patients who received ketamine, compared with those who received placebo—and the effect was still detectable in some patients two weeks later. Other depressive symptoms improved within 2 hours. The anti-anhedonic effect remained significant even in the absence of other antidepressant effects, suggesting a unique role for the drug. Next, the researchers scanned a subset of the ketamine-infused patients, using positron emission tomography (PET), which shows what parts of the brain are active by tracing the destinations of radioactively tagged glucose—the brain's fuel. The scans showed that ketamine jump-started activity not in the middle brain area they had expected, but rather in the dorsal (upper) anterior cingulate cortex, near the front middle of the brain, and in the putamen, deep in the right hemisphere. Boosted activity in these areas may reflect increased motivation towards or ability to anticipate pleasurable experiences, according to the researchers. Depressed patients typically experience problems imagining positive, rewarding experiences—which would be consistent with impaired functioning of this dorsal anterior cingulate cortex circuitry, they said. However, confirmation of these imaging findings must await results of a similar NIMH ketamine trial nearing completion in patients with unipolar major depression. Other evidence suggests that ketamine's action in this circuitry is mediated by its effects on the brain's major excitatory neurotransmitter, glutamate, and downstream effects on a key reward-related chemical messenger, dopamine. The findings add to mounting evidence in support of the antidepressant efficacy of targeting this neurochemical pathway. Ongoing research is exploring, for example, potentially more practical delivery methods for ketamine and related experimental antidepressants, such as a nasal spray. However, ketamine is not approved by the U.S. Food and Drug Administration as a treatment for depression. It is mostly used in veterinary practice, and abuse can lead to hallucinations, delirium and amnesia. | 10.1038/tp.2014.105
Medicine | Staying in education linked to lower risk of heart disease | Education and coronary heart disease: mendelian randomisation study www.bmj.com/content/358/bmj.j3542 Editorial: Does staying in school protect against heart disease? www.bmj.com/content/358/bmj.j3849 | http://www.bmj.com/content/358/bmj.j3542 | https://medicalxpress.com/news/2017-08-linked-heart-disease.html | Abstract Objective To determine whether educational attainment is a causal risk factor in the development of coronary heart disease. Design Mendelian randomisation study, using genetic data as proxies for education to minimise confounding. Setting The main analysis used genetic data from two large consortia (CARDIoGRAMplusC4D and SSGAC), comprising 112 studies from predominantly high income countries. Findings from mendelian randomisation analyses were then compared against results from traditional observational studies (164 170 participants). Finally, genetic data from six additional consortia were analysed to investigate whether longer education can causally alter the common cardiovascular risk factors. Participants The main analysis was of 543 733 men and women (from CARDIoGRAMplusC4D and SSGAC), predominantly of European origin. Exposure A one standard deviation increase in the genetic predisposition towards higher education (3.6 years of additional schooling), measured by 162 genetic variants that have been previously associated with education. Main outcome measure Combined fatal and non-fatal coronary heart disease (63 746 events in CARDIoGRAMplusC4D). Results Genetic predisposition towards 3.6 years of additional education was associated with a one third lower risk of coronary heart disease (odds ratio 0.67, 95% confidence interval 0.59 to 0.77; P=3×10 −8 ). This was comparable to findings from traditional observational studies (prevalence odds ratio 0.73, 0.68 to 0.78; incidence odds ratio 0.80, 0.76 to 0.83). Sensitivity analyses were consistent with a causal interpretation in which major bias from genetic pleiotropy was unlikely, although this remains an untestable possibility. Genetic predisposition towards longer education was additionally associated with less smoking, lower body mass index, and a favourable blood lipid profile. Conclusions This mendelian randomisation study found support for the hypothesis that low education is a causal risk factor in the development of coronary heart disease. Potential mechanisms could include smoking, body mass index, and blood lipids. In conjunction with the results from studies with other designs, these findings suggest that increasing education may result in substantial health benefits. Introduction Coronary heart disease (CHD) is the leading cause of death globally. Whereas the causal effects of risk factors such as smoking, high blood pressure, and raised low density lipoprotein cholesterol are generally accepted and reflected in disease prevention strategies, substantial uncertainty still surrounds other potential factors. Decades of observational studies have consistently associated socioeconomic factors such as higher education with decreased risk of CHD. 1 2 3 4 However, this association may not stem from an underlying causal effect but may arise owing to the methodological limitations of traditional observational research. 
5 6 Clarifying whether the association between education and CHD is causal has widespread implications for our understanding of the causes of CHD, as well as for the potential development of novel population based approaches to its prevention. Unfortunately, randomised controlled trials are practically infeasible in this area, given the long (approximately 50 year) interval between exposure and outcome. Improving causal inference through other study designs is therefore necessary. Mendelian randomisation analysis uses genetic variants associated with a risk factor (for example, education) to make causal inferences about how environmental changes to the same risk factor would alter the risk of disease (for example, CHD). 7 Comparing the risk of disease across participants who have been grouped by their genotype enables the causal effect of a risk factor to be approximated with substantially less bias than in a traditional observational analysis. Genetic markers of a risk factor are largely independent of confounders that may otherwise cause bias, as genetic variants are randomly allocated before birth. 8 This, as well as the non-modifiable nature of genetic variants, provides an analogy to trials, in which exposure is allocated randomly and is non-modifiable by subsequent disease. 8 Until relatively recently, mendelian randomisation analyses have been conducted on single datasets in which data on genotype, risk factor, and outcome were measured for all participants (known as “one sample mendelian randomisation”). However, advanced analyses on pleiotropy require larger sample sizes to maintain statistical power. This would require data pooling across dozens of studies, which is administratively difficult to organise. As an alternative, summary level data from large genome-wide association study (GWAS) consortia have become increasingly available in the public domain. Such data can be used to conduct mendelian randomisation analyses, whereby gene exposure measures are taken from one GWAS and gene outcome measures are taken from another GWAS (altogether known as “two sample mendelian randomisation”). 9 Further methodological developments, including mendelian randomisation-Egger (commonly abbreviated to MR-Egger), weighted median mendelian randomisation, and mode based methods, can all be used as sensitivity analyses to additionally investigate any pleiotropic effects of the genetic variants (that is, when genetic variants for education exert their influence on heart disease through an “off-target” pathway that bypasses the education phenotype; see supplementary figure 1 for details). 9 10 11 The mendelian randomisation method has successfully been applied to a range of biological and behavioural exposures. 12 13 We are aware of just two studies that have applied it to investigate a socioeconomic exposure: a polygenic score for education has previously been associated with the development of myopia and dementia. 14 15 However, these studies did not investigate the possibility of genetic pleiotropy. Our primary research question was “Is there genetic support for the hypothesis that education is a causal risk factor in the development of CHD, and, if so, does education cause changes to conventional cardiovascular risk factors that could be mediators of this?” We firstly updated traditional observational estimates of the association between education and risk of CHD from several large studies and consortia.
Secondly, we applied two sample mendelian randomisation analyses to investigate whether people with a genetic predisposition towards higher education have a lower risk of CHD. A recent GWAS from the Social Science Genetic Association Consortium (SSGAC) identified a large number of independent genetic variants (single nucleotide polymorphisms—SNPs) associated with educational attainment. 16 We used 162 such SNPs to mimic the process of randomly allocating some participants to more education and other participants to less education. To compare the CHD risk of participants randomised in such a manner, we then used data from the Coronary Artery Disease Genome wide Replication and Meta-analysis plus the Coronary Artery Disease Genetics Consortium (CARDIoGRAMplusC4D) to see whether participants with genetic variants for longer education had an altered risk of CHD compared with participants with genetic variants for shorter education. 17 Careful consideration of the results from such analyses, as well as the wider literature, can support inferences about the likely cardiac consequences from environmentally acquired alterations to education. We checked the robustness of our findings across a range of sensitivity analyses and additionally tested for reverse causation by checking whether those SNPs that best predict CHD also associate with educational outcomes. Supplementary figure 2 illustrates the main steps taken in this study. Methods Throughout all analyses, we defined education in the same way as in the original GWAS analysis, in which data from 65 studies were harmonised against the International Standard Classification of Education 1997 classification system (see supplementary table 1.3 of the original GWAS study 16). After harmonisation, self reported educational attainment was modelled linearly, expressed as one standard deviation (that is, 3.6 years) of additional schooling. In this form, one year of vocational education was equivalent to one year of academic education, and we did not assume any qualitative differences in the type of education. We defined CHD as a composite of myocardial infarction, acute coronary syndrome, chronic stable angina or coronary stenosis of more than 50%, or coronary death. Observational association between education and CHD In traditional observational analysis, we used a combination of cross sectional and prospective data, collected between 1983 and 2014 (table 1). For prevalent CHD cases in cross sectional data, we analysed 43 611 participants (1933 cases) from the National Health and Nutrition Examination Surveys (NHANES) (see supplementary figure 3). 26 For incident CHD cases in prospective data, we analysed 23 511 participants (632 cases) from the Health, Alcohol and Psychosocial factors In Eastern Europe (HAPIEE) study 18 and combined this with published estimates from 97 048 participants (6522 cases) of the Monica Risk, Genetics, Archiving and Monograph (MORGAM) study in Europe (see supplementary table 1 for case definitions and statistical details). 3 19 Table 1 Details of studies and datasets included in analyses Genetic variants associated with education We retrieved a shortlist of SNPs associated with educational attainment from a recent GWAS involving 405 072 people of European ancestry (table 1).
16 For our main analysis, we used 162 independent SNPs associated (P<5×10⁻⁸; linkage disequilibrium r²<0.1) with education in a meta-analysis of the discovery (SSGAC) and replication (UK Biobank) datasets. Altogether, these 162 SNPs explained 1.8% of the variance in education. This is sufficient to generate a strong genetic instrument with which to derive unbiased causal estimates (see supplementary table 2 for power calculations). For our secondary analysis, we used another set of 72 independent SNPs (at r²<0.1) that were associated with education in the discovery dataset (SSGAC) alone (293 723 participants; P<5×10⁻⁸) and that were subsequently found to be directionally consistent in an independent replication dataset (UK Biobank; see supplementary figure 4 for a summary of how SNPs were selected). We decided to use the larger set of instruments (with 162 SNPs) in our main analysis instead of the smaller set of instruments (with 72 SNPs) to maintain sufficient statistical power for our sensitivity analyses. To avoid potential biases that may arise when datasets contributing towards the SNP-to-exposure and SNP-to-outcome estimates overlap, we excluded studies in SSGAC that overlapped with CARDIoGRAMplusC4D (full details of these excluded studies are provided in supplementary methods 3.1). We then checked that the removal of these overlapping datasets from SSGAC had no material effect on the SNP-to-education estimates (see supplementary figures 5 and 6 for further details). Genetic variants associated with CHD Data on CHD have been contributed by CARDIoGRAMplusC4D investigators and have been downloaded from . For each of the 162 SNPs associated with education, we retrieved summary level data for either the same SNP (115 of 162 SNPs) or for a proxy SNP in high linkage disequilibrium (47 of 162 SNPs at r²>0.8) from datasets totalling 63 746 CHD cases and 130 681 controls (see supplementary figure 7 for how the education SNPs were matched against the CHD GWAS dataset). 17 We repeated a similar process for our secondary analysis using a set of 72 SNPs (supplementary figure 8). Statistical analyses Traditional observational analyses We used Cox proportional hazards and logistic regressions to calculate traditional observational estimates for incident and prevalent cases, respectively. Results were adjusted for age and sex. Further methodological details are given in supplementary methods 1. Mendelian randomisation analyses For all mendelian randomisation analyses, alleles from the SSGAC and CARDIoGRAMplusC4D datasets were aligned to correspond to an increase in educational attainment. To investigate whether education is likely to play a causal role in coronary heart disease, we used three mendelian randomisation approaches. Firstly, we used conventional (also termed “inverse variance weighted”) mendelian randomisation analyses, regressing the SNP-CHD associations (outcome) on the SNP-education associations (exposure), with each SNP as one data point (details in supplementary methods 3.1). Secondly, we used three sensitivity analyses to investigate to what degree pleiotropic effects might bias the mendelian randomisation causal estimates. These methods allow some of the mendelian randomisation assumptions to be relaxed.
For example, mendelian randomisation-Egger relies on the InSIDE assumption, which requires that the magnitude of any pleiotropic effects (from SNPs to CHD, which bypasses education) should not be correlated with the magnitude of the main effect (from SNP to education). 10 Median based and mode based methods posit that when looking at many SNPs (some of which may have pleiotropic effects on CHD), these pleiotropic effects are likely to be comparatively heterogeneous in nature and hence less likely to converge on a common median/modal estimate. In contrast, valid SNPs with no pleiotropic effects are more likely to show more uniform and homogeneous effects (on education and thereafter CHD), which makes them more likely to cluster towards the median/modal point estimate. 9 27 These methods are fully described in supplementary methods 3.2. Consistency of results across a range of methods that make different assumptions about pleiotropy strengthens causal inference, whereas divergent results may indicate that genetic pleiotropy is biasing some of these results (described in supplementary figure 1). Thirdly, to check whether genetic risk for coronary events might be a causal factor for educational attainment, we did mendelian randomisation in the opposite direction (bidirectional mendelian randomisation) using 53 SNPs associated with CHD (supplementary methods 3.2.4). Under conditions of massive pleiotropy, genetic risk of coronary events might also predict educational outcomes. To investigate potential mechanisms from education to CHD, we applied conventional mendelian randomisation to investigate whether genetic predisposition towards longer education could lead to improvements in the established cardiovascular risk factors. In this analysis, we discarded 60 SNPs with missing data on one of the cardiovascular risk factors from the 162 SNP instrument and thus used a smaller set of 102 SNPs (details in supplementary methods 3.3 and supplementary figure 4). Patient involvement Patients were not involved in the design or implementation of this study. There are no specific plans to disseminate the research findings to participants, but findings will be returned to the original consortia, so that they can consider further dissemination. Results Observational analyses On the basis of NHANES data, each additional 3.6 years of education (1 SD) was associated with 27% lower odds of prevalent CHD (odds ratio 0.73, 95% confidence interval 0.68 to 0.78; illustrated in figure 1). In prospective analyses, 3.6 years of additional education was associated with a 20% lower risk of incident CHD in the HAPIEE and MORGAM studies, with a pooled hazard ratio of 0.80 (0.76 to 0.83). Cohort specific results from MORGAM are additionally shown in supplementary figure 9. 18 19 These observational estimates were robust to sensitivity analyses accounting for different case definitions, age at first CHD event, and potential confounding by other measures of socioeconomic position (supplementary table 3). We also saw evidence for a dose-response relation between the amount of education and risk of CHD (supplementary figures 10 and 11). Fig 1 Comparison of observational and causal estimates for risk of coronary heart disease (CHD), per 3.6 years of educational attainment. Two observational estimates are provided according to prevalent and incident CHD cases.
Risk coefficient for observational incident cases was derived by meta-analysis of hazard ratios from Health, Alcohol and Psychosocial factors In Eastern Europe (HAPIEE) and Monica Risk, Genetics, Archiving and Monograph (MORGAM) studies. Risk coefficients for observational prevalent cases and six causal estimates from mendelian randomisation (MR) are all odds ratios (see supplementary methods for full description of each analysis). IVW=inverse variance weighted approach; NHANES=National Health and Nutrition Examination Survey Genetic association between education and CHD After integrating two GWAS datasets and examining millions of SNPs across the entire genome, we found strong evidence for a negative genetic correlation between education and CHD (r_g=−0.324; r_g²=0.104; P=2.1×10⁻¹²; further details in supplementary methods 2). 28 To interpret this, educational outcomes can vary as a result of genetic and non-genetic variance. Within the domain of genetic variance, approximately 10% of the genetic variance of education seems to be shared with the genetic variance of CHD, whereby this correlation is negative. This correlation can arise for various reasons, so we next did multiple mendelian randomisation analyses to investigate the presence and direction of any causal effects. Causal effect from education to CHD Using conventional mendelian randomisation analysis, 1 SD longer education (due to genetic predisposition across 162 SNPs) was associated with a 33% lower risk of CHD (odds ratio 0.67, 0.59 to 0.77; P=3×10⁻⁸). Supplementary figure 12 additionally shows individual causal estimates from each of the 162 SNPs. As expected, sensitivity analyses using mendelian randomisation-Egger and weighted median mendelian randomisation provided less precise estimates than with conventional mendelian randomisation. Nonetheless, their causal estimates were similar in terms of direction and magnitude, and they were unlikely to have happened by chance alone (fig 1). We found little evidence of a non-zero intercept from the mendelian randomisation-Egger test (intercept β=0.004, −0.056 to 0.013; P=0.417), consistent with the hypothesis that genetic pleiotropy was not driving the result. The mendelian randomisation regression slopes are illustrated in supplementary figures 13 and 14. A secondary set of analyses using a set of 72 SNPs instead of 162 SNPs yielded consistent results in terms of direction and magnitude (fig 1). Further sensitivity analyses, using both sets of instruments, are reported in supplementary table 4. Briefly, an analysis that can account for some measurement error in our genetic instruments for exposure (so-called mendelian randomisation-Egger+SIMEX) gave similar findings. 29 Results from modal based mendelian randomisation approaches were consistent with the hypothesis that genetic pleiotropy was not driving the conventional mendelian randomisation result. We also did robustness checks by omitting SNPs with higher levels of missing data, as well as SNPs that were available in the CHD GWAS dataset in the form of a proxy SNP. These gave similar results in terms of direction, magnitude, and statistical significance. Collectively, all these sensitivity analyses make it less likely that the presence of pleiotropic effects, or missing data, grossly biased our main causal analysis.
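To make the estimators reported above concrete, here is a minimal two sample mendelian randomisation sketch in Python on simulated summary statistics; the values below are illustrative and are not the actual 162 SNP association estimates (the analysis code used in the study is described in the supplementary methods).

    # Illustrative only: conventional (inverse variance weighted), MR-Egger,
    # and weighted median estimators on simulated SNP-level summary data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_snps = 162
    beta_exp = np.abs(rng.normal(0.02, 0.005, n_snps))  # SD of education per allele
    se_out = np.full(n_snps, 0.01)                      # SEs of SNP-CHD log odds ratios
    true_log_or = np.log(0.67)                          # assumed causal log OR per 1 SD
    beta_out = true_log_or * beta_exp + rng.normal(0.0, se_out)

    w = 1.0 / se_out**2  # inverse variance weights

    # Conventional estimate: weighted regression of SNP-outcome effects on
    # SNP-exposure effects through the origin, one data point per SNP.
    ivw = sm.WLS(beta_out, beta_exp, weights=w).fit()

    # MR-Egger: the same regression with an intercept; an intercept clearly
    # different from zero would suggest directional pleiotropy.
    egger = sm.WLS(beta_out, sm.add_constant(beta_exp), weights=w).fit()

    # Weighted median of the per SNP ratio estimates, weighted by their
    # approximate precision; consistent if at least half the weight comes
    # from valid (non-pleiotropic) SNPs.
    ratio = beta_out / beta_exp
    w_ratio = (beta_exp / se_out) ** 2
    order = np.argsort(ratio)
    cum_w = np.cumsum(w_ratio[order]) / np.sum(w_ratio)
    weighted_median = ratio[order][np.searchsorted(cum_w, 0.5)]

    print("IVW log OR:", ivw.params[0])
    print("Egger intercept and slope:", egger.params[0], egger.params[1])
    print("Weighted median log OR:", weighted_median)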
Causal effect from CHD to education We found little evidence for the hypothesis that genetic liability for CHD risk is associated with educational outcomes. Namely, 1-log greater genetic risk of CHD was associated with 2.4 (−16.6 to 21.4) days of longer educational attainment. Results were unchanged after application of mendelian randomisation-Egger and weighted median mendelian randomisation (fig 2). The results from individual SNPs are shown in supplementary figures 18-20. Fig 2 Association of genetic liability to coronary heart disease (CHD) (exposure) with number of days of schooling (outcome). Causal estimates are expressed as difference in days of education per 1-log unit increase in risk of CHD as instrumented by 53 SNPs. Supplementary methods 3.2 details each mendelian randomisation (MR) analysis. IVW=inverse variance weighted approach Causal effect from education to cardiovascular risk factors To identify potential risk factors that could mediate the association between education and CHD, we investigated whether genetic predisposition towards longer education was associated with established cardiovascular risk factors. Table 2 shows that, in conventional mendelian randomisation analyses, a 1 SD longer education (due to genetic predisposition across 102 SNPs) was associated with 35% lower odds of smoking, a 0.17 lower body mass index, 0.14 mmol/L lower triglycerides, and 0.15 mmol/L higher high density lipoprotein cholesterol, with a P value smaller than 0.001 for each of these four outcomes. Associations with diabetes and systolic blood pressure were in the anticipated direction, but these effects may have been due to chance or insufficient statistical power (P values 0.05 to 0.08). Table 2 Causal effects from 3.6 years of education to 10 cardiovascular risk factors Discussion In this mendelian randomisation study, we found strong genetic support for the hypothesis that longer education has a causal effect on lowering the risk of coronary heart disease. Our findings using genetic data, which can be considered as “nature’s randomised trials,” 30 were consistent with data from observational studies, and we found little evidence that our results may be driven by genetic pleiotropy. More specifically, 3.6 years of additional education (similar to an undergraduate university degree) is predicted to translate into about a one third reduction in the risk of CHD. Comparison with previous studies A vast body of observational studies across a range of settings shows an association between education and CHD. In contrast, comparatively few studies have explicitly investigated the causality of this association. The existing studies on causality come from three domains. Firstly, analyses of natural experiments have compared mortality before and after changes to compulsory schooling laws—for example, by looking at mortality rates in countries before and after the introduction of national legislation that increased minimum education. In the Netherlands, such changes were associated with reductions in all cause mortality. 31 In the UK, the largest study so far reported causal effects on improving physical activity, body mass index, blood pressure, diabetes, CHD, and all cause mortality. 32 An extension of this design is to compare geographical areas, such as the various states in the US.
These studies initially suggested a large effect on all cause mortality, but this effect disappeared when state specific baseline trends were taken into account. 33 34 In Sweden, an intervention to extend compulsory schooling throughout a 13 year transition period in a stepped wedge design across multiple municipalities reported lower all cause mortality in those deaths occurring after age 40 (equivalent to hazard ratio of death of 0.86 (0.77 to 0.96) per 3.6 years of additional education). 35 Another source of causal inference comes from studies on monozygotic twins. Within each pair, both twins are exposed to the same set of genetic exposures (and also some environmental exposures, called the “shared environment”). Consequently, any difference in disease outcome between twins cannot arise from genetic effects. If differences in outcome associate with differential exposure to non-shared features of the environment (such as one twin pursuing education longer than the other twin), and if the magnitude of this association is comparable to that seen in the general population, this makes less likely the possibility that the observational association is confounded by genetic (or shared environmental) factors. Although the twin method does not eliminate the possibility of confounding from other factors in the non-shared environment, it is a design with which to eliminate the possibility of confounding from genetic factors. Twin studies conducted in Denmark initially found evidence both for and against causal effects from education to mortality and CHD incidence. 36 37 The largest study to date from Sweden (which has twice the statistical power of the previous largest study) found strong evidence for causal effects. 38 There, the association between years of education and lifespan did not attenuate at all when the conventional population based analysis was compared against the between twin analysis. Hence the twin literature suggests that, although only a handful of sufficiently powered studies exist, shared environmental factors (such as parenting) are not likely to cause substantial confounding. It also suggests that confounding from genetic factors (such as genetic differences in drive, motivation, personality, or innate intellect, all of which may predispose towards longer education) might not account for the observational associations between education and disease. A parallel domain of research, using data from millions of non-identical siblings (that sometimes reached 100 times larger sample sizes than the twin studies), has also observed little attenuation of the association between education and subsequent mortality when comparing the general population analysis with the within sibling analysis. 39 40 As with twin studies, this also suggests that environmental and genetic factors shared by the siblings are unlikely to confound the observational association seen between education and disease. Although twin and sibling studies both leave open the possibility of confounding from non-shared environmental factors, taken together with our results (using an entirely different method), the wider body of evidence is more compatible with a causal interpretation, suggesting that increasing education may lead to a reduction in CHD. Finally, some recent studies have also looked at specific genetic variants for education. An association was found between parental longevity and genetic markers for education in their offspring. 41 However, causal directions and pleiotropy were not tested in this study. 
Others have used conventional mendelian randomisation and found that genetic variants for education predict myopia and dementia. 14 However, these studies did not investigate pleiotropy of their genetic instruments. No mendelian randomisation studies of socioeconomic exposures have investigated any other disease outcome, such as cardiovascular diseases. Furthermore, most of the other designs listed above (including natural experiments and twin and sibling designs) have reported outcomes for all cause mortality. Few have reported cardiovascular mortality, and virtually none have reported fatal/non-fatal CHD, as we have. Strengths and limitations Our study has important strengths. We investigated the causality of the association between an easily measured socioeconomic factor (education) and a common disease (coronary heart disease). We applied the mendelian randomisation design, which in conjunction with findings from other study designs should improve our understanding of causality by reducing bias from confounding. By integrating summary level data from more than half a million individuals, our study was well powered to derive robust causal effect estimates and also powered for multiple sensitivity analyses (which typically require larger sample sizes). We used recent state of the art methodological developments to thoroughly explore the possibility of pleiotropy in our genetic variants, for which we found little evidence. Our study also has some limitations. Firstly, the genetic variants associated with education may instead mark more generic biological pathways (such as vascular supply or mitochondrial function), which could enhance systemic fitness, thereby leading to parallel increases in cognitive and cardiac function. 42 43 Under this scenario, which violates the InSIDE assumption, policy interventions to increase education may not translate into lower incidence of heart disease. However, such a scenario is less likely to lead to the consistent set of results we found across our sensitivity analyses, as this would require that pleiotropy occurs in a scenario in which the InSIDE assumption is violated (so that mendelian randomisation-Egger is biased), at least 50% of the information comes from SNPs with highly pleiotropic effects on heart disease, and these pleiotropic effects occurred in such a way as to make the causal estimates on heart disease seem very similar to one another. No definitive tests exist with which to verify such assumptions, meaning that triangulation of data from other sources and subjective judgment are needed to evaluate the plausibility of gross pleiotropic bias. 44 We believe such pleiotropy to be unlikely for four reasons. Firstly, the effects from genetic pleiotropy would have to coincide with the non-genetic associations observed in studies of monozygotic twins; secondly, they would also have to coincide with the non-genetic associations observed in natural experiments. Thirdly, if education and CHD share some of their underlying genome-wide genetic architecture (as seen in our LD score regression), and if most of the top hits for education are strongly pleiotropic for CHD, then one might imagine the top hits for CHD to also pick up some of these pleiotropic traits. However, our reverse direction mendelian randomisation found a null estimate. 
Fourthly, despite gaps in our understanding of the biological mechanisms through which these 162 SNPs influence education, they are disproportionately found in genomic regions that regulate brain development, they are enriched for biological pathways involved in neural development, and they are preferentially expressed in neural tissue. 16 As these 162 SNPs do not seem to have any expression or enrichment in cardiovascular tissues, this further narrows the scope for pleiotropy: any potential pleiotropy might have to exert a large effect on CHD via predominantly neurological pathways (for example, behaviours associated with obesity), rather than via global or systemic measures of fitness (such as mitochondrial function). Therefore, on balance, we believe that the scenario in which gross pleiotropy invalidates our sensitivity analysis is less consistent with the broader body of evidence, in comparison with the scenario in which our sensitivity analyses are valid. If our main and sensitivity analyses are valid, then policy interventions that mirror prolonged exposure to education (as indexed by our genetic instruments) should, on balance, probably prevent heart disease. A second limitation is that to arrive at such a policy recommendation one would have to assume that genetic predisposition towards higher educational attainment causes the same behavioural and physiological consequences as environmentally acquired changes to educational attainment, such as from a policy intervention. It may be, however, that a year of additional education from genetic causes could trigger a different set of biological and behavioural mechanisms compared with a year of additional education resulting from policy change. We know very little about the mechanisms of these genetic effects. In the analyses we did in this study, we found some initial evidence that some of these genetic effects may be mediated via common cardiovascular factors such as smoking, body mass index, and lipids. In keeping with this, studies of policy changes to education in the US and UK have also estimated some causal effects on smoking, body mass index, blood pressure, and diabetes, 32 45 which are broadly consistent with our findings. Few studies have measured the causal effects of policy interventions on blood lipids. Although a randomised controlled trial of education is difficult for CHD outcomes, owing to approximately 50 years of lag, future research using real life interventions may be able to measure effects on potential mediators, as these occur much sooner. A second response to this overall limitation is the analogy to other exposures (such as low density lipoprotein cholesterol and systolic blood pressure), for which genetic effects have mirrored findings from environmentally acquired changes (such as from randomised controlled trials of drug therapies 46 47). Taken together, although our study makes no direct inference on what health effects may stem from a policy intervention that successfully increases education, we are cautiously optimistic that such a policy should lead to reductions in heart disease. As a third limitation, we assumed the absence of dynastic effects, an assumption that is broken when parental genes associate with parental behaviours that directly cause a health outcome in the child. 48 For example, parents with a genetic predisposition towards higher education may choose to feed their children a better diet.
However, parental educational attainment has been shown to be a poor predictor of conventional cardiovascular risk factors in children. 49 Fourthly, our observational and genetic data originate predominantly from samples of European origin in high income countries. We are thus unable to generalise these estimates to other populations, particularly to low income countries where cardiovascular diseases are less common. However, it may well be expected that socioeconomic factors mirror the pattern seen for other cardiovascular risk factors, whereby similar effects are typically seen across the world. For example, in the INTERHEART study, regional heterogeneity in the magnitude of associations was just as large for some conventional cardiovascular risk factors (eg, hypertension I²=85%, obesity I²=92%), 50 as it was for some psychosocial risk factors (eg, depression I²=85%, general stress I²=79%). 51 Fifthly, we do not know whether increasing education for the people with the least education will be as cardioprotective as increasing education for those with above average education. Nonetheless, a scenario of dose-response across the broad educational gradient is compatible with, firstly, the linear relation seen in the observational data. Secondly, it is also compatible with the concordance of findings from our study (which measures the average effect across the entire population) alongside the findings from studies of raising the school leaving age (which measure the effect among those with least education only). Potential mechanisms The mechanisms that might mediate the association between education and CHD remain relatively unknown. Traditional observational analyses have estimated that the association between education and CHD attenuates by around 30-45% after statistical adjustment for health behaviours and conventional cardiovascular risk factors (including smoking, blood pressure, and cholesterol); however, measurement error in such analyses can underestimate their mediating effect. This suggests that these factors could account for perhaps half of the association between education and CHD. 2 52 Our study found genetic predisposition towards longer education to associate with less smoking and improved body mass index and blood lipid profiles (with some borderline results for blood pressure and risk of diabetes). The degree of mediation should now be formally assessed with more extensive methods—for example, by applying two step mendelian randomisation. 53 54 If conventional risk factors do not completely account for the mechanism between education and CHD, then additional mechanistic hypotheses for investigations are needed. These could include education leading to improved use of healthcare services (from better health knowledge or fewer financial barriers to accessing care) or better job prospects, income, material conditions, social ranking and/or diet, all factors associated with education and CHD, many of which might be amenable to intervention. 4 What our study adds After exposure to a socioeconomic factor, there is often a long latency period before the occurrence of common diseases (in this example, around 50 years). Consequently, this line of research is not particularly amenable to randomised controlled trials, which would otherwise settle questions of causality. This does not mean that these associations are less worthy of investigation, particularly as large point estimates open up the possibility of potentially large public health gains.
The solution is to triangulate evidence from multiple study designs, each with its own strengths and weaknesses. The limited studies to date have suggested that a causal effect of socioeconomic exposures on all cause mortality is more likely than not to exist. Our study adds to this evidence by using an entirely new technique, which also suggests that a causal effect is more likely than not to exist between education and CHD. Implications for researchers The main question for future research is “What mechanisms account for the strong association seen between genetic predisposition towards longer education and substantially lower risk of CHD?” Were it to be found that a health behaviour (such as diet) is an important mediator, then interventions on diet could become the cornerstone of policies designed to reduce health inequalities. More molecular research is needed to delineate the mechanism, pleiotropic or not, through which these 162 education SNPs associate with cardiac outcomes. This could elucidate new causal mechanisms for CHD that, in turn, could lead to insights for potential drug discovery. Implications for clinicians and policymakers Although uncertainty remains around the precise function of each of the 162 SNPs, their degree of pleiotropy with cardiac traits, and the mechanisms by which these genetic variants exert their cardioprotective influence, conclusions can still be drawn from the current body of evidence. Firstly, policies that increase education probably lead to non-health benefits, such as increased economic productivity, higher voter turnout, better governance, and improved life satisfaction. 55 56 Secondly, very little evidence exists to suggest that increasing education might subsequently harm health or wellbeing. Thirdly, although rigorous scientific debate needs to continue on the health consequences of increasing education, the current balance of opinion seems to weigh towards the view that increasing education will probably improve a range of health outcomes (either to a smaller or larger degree). Little discussion has taken place about how to increase education in a manner that is practical, acceptable, affordable, and sustainable. Although our data make no claims on this, we note that interventions should be accompanied by careful monitoring for unforeseen side effects, especially in those people who may not thrive when forced into extended educational settings, which may otherwise aggravate health inequalities. To briefly begin this discussion, one can imagine a range of policies by analogy to how clinicians, public health practitioners, and policymakers encourage patients to stop smoking: by raising awareness (for example, mass marketing campaigns, personalised letters, or individual counselling), improving convenience of access (for example, changing the geographical dispersion of educational establishments or opportunities for flexible education), and/or addressing finance (for example, tuition fees, accommodation costs, or stipends). One can also consider complementing some of these population level policies with individual level interventions (for example, advising adolescents on whether to pursue higher education). Conclusion Our mendelian randomisation analyses found genetic support for the hypothesis that longer education plays a causal role in lowering the risk of coronary heart disease.
Although completely ruling out possible pleiotropic effects is difficult, the sensitivity tests available to us gave little evidence that these could have driven our findings. Taken in conjunction with the results from other study designs, these findings suggest that increasing education is likely to lead to health benefits. What is already known on this topic: Many observational studies have found that people who spend more time in educational settings subsequently develop less coronary heart disease. However, whether this association is causal is not clear, partly because randomised controlled trials are practically infeasible in this area. Few studies have applied mendelian randomisation to investigate how exposure to socioeconomic risk factors might causally change the risk of disease occurrence. No such study has done sensitivity analyses around genetic pleiotropy. What this study adds: Increasing the number of years that people spend in the educational system may lower their risk of subsequently developing coronary heart disease by a substantial degree. These findings should stimulate policy discussions about increasing educational attainment in the general population to improve population health. Footnotes We are grateful to the eight GWAS consortia (especially SSGAC and CARDIoGRAMplusC4D) for publicly sharing the genetic data we used in our causal analysis, to Reedik Mägi for assistance on LD score regression, and to Daniel J Benjamin for comments on an earlier draft of this article. Contributors: TT and JV contributed equally to this paper and are joint first authors. TT had the idea for the study. AO obtained the genetic data. HP, AP, RK, AP, AT, SM, and KF obtained the observational data. MVH, GDS, JB, FPH, TP, and GV developed the study methods. JV, TT, FPH, and MVH did the analysis. All authors were involved in interpreting the data. TT, JV, and MVH wrote the first draft of the manuscript, and all authors critically revised it. TT, JV, and MVH are the guarantors. Funding: TT is funded by a Wellcome Trust fellowship (106554/Z/14/Z); JV is supported by the Swiss National Science Foundation (P2LAP3_155086); the HAPIEE study is supported by the Wellcome Trust (064947/Z/01/Z, WT081081), the US National Institute on Aging (1R01 AG23522), the MacArthur Foundation (Health and Social Upheaval network), and the Russian Science Foundation (14-45-00030); the MORGAM Project was supported by the European Union’s Seventh Framework Programme (HEALTH-F3-2010-242244, HEALTH-F2-2011-278913); GDS works in the Medical Research Council Integrative Epidemiology Unit at the University of Bristol (MC_UU_12013/1); JB is supported by a Medical Research Council (MRC) methodology research fellowship (grant MR/N501906/1); AO is supported by a European Research Council consolidator grant (647648 EdGe); MVH is supported by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and works in the MRC Population Health Research Unit at the University of Oxford that receives funding from the UK MRC. The funders had no role in the study design, data collection, analysis, interpretation, or writing, nor in the decision to submit the article for publication. Competing interests: All authors have completed the ICMJE uniform disclosure form at and declare: support for the submitted work as detailed above; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
Ethical approval: Participants gave informed consent for data sharing, as described in each of the discovery genome-wide association studies. Additional ethical approval was not needed for this study. Transparency: The lead authors affirm that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained. Data sharing: All of the summary level data used are available for instant download at the public repositories listed in table 1. The statistical code is available from the corresponding authors at [email protected] and [email protected]. This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. | Staying in education is associated with a lower risk of developing heart disease, finds a study published by The BMJ today. The findings provide the strongest evidence to date that increasing the number of years that people spend in the education system may lower their risk of developing coronary heart disease by a substantial amount, say the authors. Many studies have found that people who spend more time in education have a lower risk of developing coronary heart disease. However, this association may be due to confounding from other factors, such as diet or physical activity. To date, it has been unclear if spending more time in education has any causal impact on heart disease—in other words, whether increasing education might prevent it. To better understand the nature of this association, and help inform public policy, a team of international researchers from University College London, the University of Lausanne, and the University of Oxford set out to test whether education is a risk factor for the development of coronary heart disease. They analysed 162 genetic variants already shown to be linked with years of schooling from 543,733 men and women, predominantly of European origin, using a technique called mendelian randomisation. This technique uses genetic information to avoid some of the problems that afflict observational studies, making the results less prone to confounding from other factors, and therefore more likely to be reliable in understanding cause and effect. The authors found that genetic predisposition towards more time spent in education was associated with a lower risk of coronary heart disease. More specifically, 3.6 years of additional education, which is similar to an undergraduate university degree, would be predicted to translate into about a one third reduction in the risk of coronary heart disease. Genetic predisposition towards longer time spent in education was also associated with less likelihood of smoking, lower body mass index (BMI), and a more favourable blood fat profile. And the authors suggest that these factors could account for part of the association found between education and coronary heart disease. The results remained largely unchanged after further sensitivity analyses and are in line with findings from other studies, and the effect of raising the minimum school leaving age. The authors outline some study limitations.
For example, it is not fully understood how genetic variants cause changes to the length of time spent in education, and this could have introduced bias. However, key strengths include the large sample size and the use of genetic randomisation to minimise confounding. They suggest that: "Increasing the number of years that people spend in the educational system may lower their risk of subsequently developing coronary heart disease by a substantial degree." These findings "should stimulate policy discussions about increasing educational attainment in the general population to improve population health," they add. A linked editorial suggests that, overall, the authors make a convincing case that a longer duration of education decreases the risk of coronary heart disease in a causal manner. Brent Richards at McGill University in Canada and colleagues say the results "are strong and robust to sensitivity tests, which probe most of the potential biases in the results. When taken together with other observational studies and quasi-experiments, their conclusions are convincing." | www.bmj.com/content/358/bmj.j3542 |
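To make the mendelian randomisation logic of the study above concrete, the sketch below runs an inverse-variance-weighted (IVW) estimate together with MR-Egger regression, one common sensitivity analysis for the directional pleiotropy discussed in the limitations. This is an illustrative reconstruction, not the authors' published pipeline: the SNP effect sizes, standard errors, and the assumed causal effect are all synthetic.

```python
# Illustrative sketch (not the authors' pipeline): IVW mendelian randomisation and
# an MR-Egger intercept check on synthetic SNP-level summary statistics.
import numpy as np

rng = np.random.default_rng(0)
n_snps = 162                                  # mirrors the 162 education-associated SNPs
beta_exp = rng.uniform(0.01, 0.05, n_snps)    # SNP -> years-of-education effects (synthetic)
true_effect = -0.10                           # assumed log odds of CHD per year of education
beta_out = true_effect * beta_exp + rng.normal(0, 0.005, n_snps)  # SNP -> CHD effects
se_out = np.full(n_snps, 0.005)               # standard errors of the outcome effects

# IVW estimate: weighted regression of beta_out on beta_exp through the origin.
w = 1.0 / se_out**2
ivw = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp**2)
ivw_se = np.sqrt(1.0 / np.sum(w * beta_exp**2))

# MR-Egger: allow an intercept; a non-zero intercept flags directional pleiotropy.
X = np.column_stack([np.ones(n_snps), beta_exp]) * np.sqrt(w)[:, None]
y = beta_out * np.sqrt(w)
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"IVW effect per year of education: {ivw:.3f} (SE {ivw_se:.3f})")
print(f"MR-Egger slope: {slope:.3f}, intercept (pleiotropy check): {intercept:.4f}")
```

In this framework, an Egger intercept near zero is consistent with the absence of gross directional pleiotropy, while the IVW slope plays the role of the causal effect estimate per additional year of education.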
Medicine | Potential therapy for incurable Charcot-Marie-Tooth disease | Robert Fledrich, Ruth M. Stassart, Axel Klink, Lennart M. Rasch, Thomas Prukop, Lauren Haag, Dirk Czesnik, Theresa Kungl, Tamer A.M. Abdelaal, Naureen Keric, Christine Stadelmann, Wolfgang Brück, Klaus-Armin Nave and Michael W. Sereda. "Soluble neuregulin-1 promotes Schwann cell differentiation in Charcot–Marie–Tooth Disease 1A." Nature Medicine, 24 August 2014 (DOI: 10.1038/NM.3664). Journal information: Nature Medicine | http://dx.doi.org/10.1038/NM.3664 | https://medicalxpress.com/news/2014-08-potential-therapy-incurable-charcot-marie-tooth-disease.html | Abstract Duplication of the gene encoding the peripheral myelin protein of 22 kDa (PMP22) underlies the most common inherited neuropathy, Charcot-Marie-Tooth 1A (CMT1A) 1 , 2 , 3 , a disease without a known cure 4 , 5 , 6 . Although demyelination represents a characteristic feature, the clinical phenotype of CMT1A is determined by the degree of axonal loss, and patients suffer from progressive muscle weakness and impaired sensation 4 , 7 . CMT1A disease manifests within the first two decades of life 8 , 9 , and walking disabilities, foot deformities and electrophysiological abnormalities are already present in childhood 7 , 8 , 9 , 10 , 11 . Here, we show in Pmp22 -transgenic rodent models of CMT1A that Schwann cells acquire a persistent differentiation defect during early postnatal development, caused by imbalanced activity of the PI3K-Akt and the Mek-Erk signaling pathways. We demonstrate that enhanced PI3K-Akt signaling by axonally overexpressed neuregulin-1 (NRG1) type I drives diseased Schwann cells toward differentiation and preserves peripheral nerve axons. Notably, in a preclinical experimental therapy using a CMT1A rat model, when treatment is restricted to early postnatal development, soluble NRG1 effectively overcomes impaired peripheral nerve development and restores axon survival into adulthood. Our findings suggest a model in which Schwann cell differentiation within a limited time window is crucial for the long-term maintenance of axonal support. Main The early onset of CMT neuropathy symptoms in childhood 7 , 8 , 9 , 10 , 11 prompted us to investigate postnatal Schwann cell development in transgenic CMT1A model rats (Sprague Dawley rats expressing Tg(Pmp22)Kan ). These rats overexpress Pmp22 (∼1.6-fold over wild-type levels, here termed CMT rats) and have a phenotype that closely resembles human CMT1A disease 1 , 12 , 13 . Histological analysis of peripheral nerves from CMT rats revealed a dysmyelinating phenotype with a reduced number of myelinated axons during early development that never reached wild-type levels throughout the disease course ( Fig. 1a,b ). Whereas the total number of sciatic nerve axons was unaltered at postnatal day 18 (P18), CMT rats demonstrated axonal loss at P90 when compared to wild-type controls ( Supplementary Fig. 3b ). In line with previous observations 14 , 15 , we noticed a prominent hypermyelination of axons from P6 on, whereas the first signs of demyelination and onion bulb formations evolved around the fourth postnatal week in CMT rats ( Fig. 1a,c and Supplementary Fig. 1a ). Figure 1: Pmp22 -transgenic Schwann cells acquire a differentiation defect during early postnatal development that is sustained until adulthood. ( a ) Electron micrographs of sciatic nerve cross-sections from male CMT rats ( Pmp22 tg) and controls (WT) at P6, P18 and P180.
CMT rats display hypermyelinated (h), thinly (hypo-) myelinated (t) and amyelinated (a) axons (scale bars, 2.5 μm). ( b ) Quantification of the total number of myelinated axons per sciatic nerve in male CMT ( Pmp22 tg) and wild-type rats from P6 to P180 (P6: WT n = 5, Pmp22 tg n = 6; P18: WT n = 5, Pmp22 tg n = 4; P28: WT n = 4, Pmp22 tg n = 4; P84: WT n = 5, Pmp22 tg n = 5; P180: WT n = 3, Pmp22 tg n = 4; mean ± s.d., two-tailed Student's t -test per time point). ( c ) Scatterplots showing the myelin sheath thickness (g-ratio) plotted against the axon diameter of quantified sciatic nerve fibers from male CMT ( Pmp22 tg) and wild-type rats at P6 (top) and P180 (bottom) (P6: WT n = 5, Pmp22 tg n = 6; P180: WT n = 5, Pmp22 tg n = 4; minimum 200 axons per animal). ( d ) Analysis of mRNA expression of differentiation and maturity markers in sciatic nerve from male CMT ( Pmp22 tg) and wild-type rats from P1 to P180 ( n = 4 per group, mean ± s.d., two-tailed Student's t -test per time point). ( e ) Analysis of mRNA expression of dedifferentiation and immaturity markers in sciatic nerve from male CMT ( Pmp22 tg) and wild-type rats from P1 to P180 ( n = 4 per group, mean ± s.d., two-tailed Student's t -test per time point). ( f ) Sciatic nerve cross-section from an adult (P84) male CMT ( Pmp22 tg) rat showing immunohistochemistry against the dedifferentiation marker Jun that is expressed in myelinating Schwann cells (indicated by arrows). Fluorescence microscopy with phase contrast (left) shows Schwann cell–axon units (magenta). Nuclei are counterstained with DAPI (scale bars, 10 μm; representative images of n = 3 animals). ( g ) Western blot images (left) and the respective quantitation (right, mean of n = 2 biological replicates, i.e., two male rats per genotype and timepoint) showing Akt and Erk1/2 activation (as measured by phosphorylation, p-Akt and p-Erk) in sciatic nerve lysates from male CMT ( Pmp22 tg) and wild-type rats from P1 to P84. Two independent western blots (one for P1 and P6 and a second for P18, P28 and P84 time points) were carried out. P18 samples were also loaded on the P1 and P6 western blots (data not shown) and used as a calibrator for quantitation. ( h ) Immunohistochemistry on consecutive semithin cross-sections of the sciatic nerve using antibodies against phosphorylated Erk1/2 (p-Erk) and phosphorylated Akt (p-Akt) in male wild-type (left) and CMT (right; Pmp22 tg) rats. Arrows in the left images indicate Schwann cells positive for p-Erk and p-Akt. Arrows in the right images indicate Schwann cells positive for p-Erk without concomitant p-Akt immunoreactivity (Scale bars, 10 μm, representative images of n = 3 animals per group). * P < 0.05; ** P < 0.01; *** P < 0.001. When we analyzed key regulators of Schwann cell development 16 in CMT rats, we detected disturbed Schwann cell differentiation at time points of early postnatal myelination, i.e., before the first signs of demyelination ( Fig. 1d,e ). We observed reduced mRNA levels of myelination-related genes such as Hmgcr , Mpz and Prx in CMT rats. Transcripts of the transcription factor Egr2 (also known as Krox20 ) were largely unaltered in CMT rats ( Fig. 1d ). When we quantified the mRNA levels of the disease gene Pmp22 , we found a surprising reduction during the peak of myelination, similar to that of other myelin genes. However, Pmp22 was overexpressed at the very early (P1, 2.1-fold; P6, 1.2-fold) and adult ( ∼ 1.6-fold) disease stages ( Fig. 1d ).
In contrast to these differentiation-associated genes, markers of immature and dedifferentiated Schwann cells were upregulated in transgenic sciatic nerves from P18 on and remained increased throughout life ( Fig. 1e ). Notably, immaturity and dedifferentiation markers such as Jun were abnormally expressed by myelinating Schwann cells, as well as by adult nonmyelinating Schwann cells ( Fig. 1f , Supplementary Fig. 1b and data not shown), supporting findings from nerve biopsies from young patients with CMT1A (refs. 17 , 18 ). We then asked which intracellular signaling pathways may be responsible for perturbed Schwann cell differentiation. Western blot analysis of the phosphatidylinositol 4,5-bisphosphate 3-kinase (PI3K)–v-Akt murine thymoma viral oncogene homolog 1 (Akt) and the mitogen-activated protein kinase kinase 1 (Mek)–mitogen-activated protein kinase (Erk) signaling pathways revealed a strong reduction of PI3K-Akt signaling in CMT rat sciatic nerves from early postnatal development on ( Fig. 1g ). Mice lacking one copy of Pmp22 ( Pmp22 tm1Ueli , ref. 19 ) displayed increased PI3K-Akt signaling in sciatic nerve, which suggests a direct effect of cellular Pmp22 levels on PI3K-Akt signaling ( Supplementary Fig. 1c ). Notably, the decreased PI3K-Akt activity in CMT rats from P1 on was followed by an induction of Mek-Erk signaling, which we first observed at P6 ( Fig. 1g ). Indeed, hyperactivation of the Mek-Erk pathway has been described in other peripheral neuropathies 20 , 21 , and Erk inhibition reduces CMT1A pathology in a PMP22 -transgenic mouse model 22 . Notably, in CMT rats, immunohistochemistry revealed numerous Schwann cells with strong Erk activity but without detectable Akt phosphorylation, which we did not observe in wild-type Schwann cells ( Fig. 1h ). We next asked whether the observed imbalance of PI3K-Akt and Mek-Erk signaling could account for the disturbed differentiation of Pmp22-overexpressing Schwann cells in CMT rats. Mek-Erk signaling has been previously implicated in mediating Schwann cell dedifferentiation and myelin breakdown after acute nerve injury 23 . We therefore tested whether the inhibition of Mek-Erk signaling improves Schwann cell differentiation in CMT rats. Indeed, a pharmacological inhibitor of Mek-Erk, when injected into sciatic nerves of 28-d-old CMT rats, decreased the transcription of several dedifferentiation-associated genes in Schwann cells when monitored after 24 h ( Supplementary Fig. 1d ). Mek-Erk inhibition also decreased the mRNA levels of dedifferentiation markers in primary Schwann cell cultures derived from CMT rats ( Supplementary Fig. 1e ). However, in contrast to activation after acute nerve injury, Mek-Erk activation in CMT1A occurs before axonal damage. We hence asked which stimuli could lead to Mek-Erk hyperactivity in Schwann cells of CMT rats. As PI3K-Akt signaling is a negative regulator of Mek-Erk in other cell types 24 , we hypothesized that reduced PI3K-Akt signaling in CMT1A accounts for the hyperactivity of the Mek-Erk pathway. We therefore took advantage of primary Schwann cell cultures derived from CMT rats, which recapitulate the increased expression of dedifferentiation markers observed in vivo 25 ( Fig. 2a ). Indeed, a specific pharmacological activator of the PI3K-Akt signaling cascade (740YP; Supplementary Fig. 1f ) reduced phosphorylated Erk (p-Erk) levels, as shown 1 h and 6 h after treatment began ( Fig. 2b ). 
This inhibition of Mek-Erk by activated PI3K-Akt was triggered upstream of mammalian target of rapamycin (mTOR), as only pharmacological blockade of PI3K, but not of mTOR, reversed the effect of 740YP on phosphorylated Erk levels ( Fig. 2c ). Moreover, PI3K-Akt activation with 740YP was sufficient to induce the downregulation of dedifferentiation-associated genes in Schwann cell cultures of CMT rats within 6 h ( Fig. 2a ). Figure 2: Activation of the Akt signaling pathway counteracts the perturbed differentiation in Pmp22 -transgenic Schwann cells. ( a ) Quantitative PCR (qPCR) analysis of control and Pmp22 tg and 740YP-treated (10 μM for 6 h) Pmp22 tg isolated primary Schwann cells derived from wild-type or CMT rats, respectively, for the mRNA dedifferentiation markers Jun and Sox2 ( n = 6 biological replicates, i.e., independent Schwann cell preparations each with 4 pooled sciatic nerves per group, mean ± s.d., one-way analysis of variance (ANOVA), Dunnett's post hoc test). ( b ) Western blot (top) and respective quantitation (bottom) against phosphorylated and total Erk. Protein lysates from isolated primary Schwann cells from wild-type and CMT ( Pmp22 tg) rats untreated (Ctrl) or treated with the PI3K activator 740YP (10 μM) for 1 h and 6 h (representative western blot of 3 independent experiments is shown). ( c ) Western blot images (top) and respective quantitation (bottom) against phosphorylated and total Erk. Protein lysates from isolated primary CMT rat ( Pmp22 tg) Schwann cells either untreated (Ctrl) or treated with the PI3K activator 740YP (10 μM), 740YP and a PI3K inhibitor (PI3K-I, LY294002, 10 μM, 1 h before stimulation) or 740YP and the mTOR inhibitor rapamycin (Rapa, 10 μM, 1 h before stimulation) for 1 h (representative western blot of 3 independent experiments). ( d ) Western blot images (top) and quantification (bottom) of sciatic nerve lysates from male CMT ( Pmp22 tg) rats against phosphorylated and total Akt. Sciatic nerves injected either with saline (Ctrl) or with NRG1 (10 ng) or NRG1 and a PI3K inhibitor (PI3K-I, LY294002, 10 μg) 24 h before tissue sampling ( Pmp22 tg Ctrl n = 5, Pmp22 tg + NRG1 n = 3, Pmp22 tg + NRG1 + PI3K-I n = 3 per group; mean ± s.d., two-tailed Student's t -test, p-Akt and Akt represent two independent western blots (processed in parallel) loaded with equal amounts of protein from the very same samples). ( e ) qPCR analysis with sciatic nerve mRNA extracts from male wild-type and CMT ( Pmp22 tg) rats for the dedifferentiation markers Jun , Sox2 and Ngfr . Sciatic nerves injected either with saline (NaCl) or with NRG1 (10 ng) or NRG1 and a PI3K inhibitor (PI3K-I, LY294002, 10 μg) 24 h before tissue sampling (WT n = 5, Pmp22 tg + NaCl n = 4, Pmp22 tg + NRG1 n = 5; Pmp22 tg + NRG1 + PI3K-I n = 4; mean ± s.d., one-way ANOVA, Dunnett's post hoc test). NS, not significant. ( f ) Western blot analyses (left) and respective quantitations (right) with protein lysates from isolated wild-type and Pmp22 tg (tg) primary rat Schwann cells against phosphorylated and constitutive Akt and Erk. Schwann cells treated with increasing doses of NRG1 (0–10 ng ml −1 ) for 1 h (representative western blots of 4 repeated experiments). ( g ) Graphical representation of f showing the relative ratio of p-Akt/Akt to p-Erk/Erk. The value of the nontreated wild-type sample was set to 1 (# indicates a shift of the ratio toward p-Akt/Akt). * P < 0.05; ** P < 0.01; *** P < 0.001.
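The ratio quantitation described for Fig. 2f,g reduces to a simple calculation once densitometry values are in hand. The sketch below shows one plausible way to compute the relative p-Akt/Akt to p-Erk/Erk balance, normalized so that the untreated wild-type sample equals 1; all band intensities here are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of a Fig. 2g-style quantitation: relative p-Akt/Akt versus p-Erk/Erk
# ratio from densitometry. All band intensities below are hypothetical numbers.
def pathway_balance(p_akt, akt, p_erk, erk):
    """Ratio of Akt activation to Erk activation for one sample."""
    return (p_akt / akt) / (p_erk / erk)

# Hypothetical band intensities: (p-Akt, Akt, p-Erk, Erk) per condition.
samples = {
    "WT untreated":       (1.00, 1.0, 1.00, 1.0),
    "Pmp22 tg untreated": (0.40, 1.0, 1.60, 1.0),   # low p-Akt, high p-Erk
    "Pmp22 tg + NRG1":    (0.90, 1.0, 1.10, 1.0),   # balance shifted toward p-Akt
}

reference = pathway_balance(*samples["WT untreated"])  # set untreated WT to 1
for name, bands in samples.items():
    rel = pathway_balance(*bands) / reference
    print(f"{name:22s} relative p-Akt/Akt : p-Erk/Erk = {rel:.2f}")
```

A value above the untreated transgenic baseline indicates a shift toward Akt activation, matching the direction marked by # in Fig. 2g.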
In order to translate our findings into a therapeutic rationale for CMT1A, we asked whether a physiological activator of PI3K-Akt signaling in Schwann cells, such as the epidermal growth factor (EGF)-like growth factor NRG1, is able to improve Schwann cell differentiation in CMT1A. During peripheral nerve development, the transmembrane NRG1 type III isoform controls virtually all steps of Schwann cell differentiation, including the regulation of myelin sheath thickness 26 , 27 , 28 , 29 . However, NRG1 comprises a family of different membrane-bound and secreted signaling proteins 26 , 30 , and we previously identified soluble NRG1 type I to promote myelin repair after nerve injury 31 . Recombinant human NRG1 is considered clinically safe and has been tested in phase 2 clinical trials for heart failure 32 . First, we performed single injections of the recombinant EGF-like signaling domain of human NRG1 (rhNRG1) into sciatic nerves of juvenile CMT rats (P28). This induced PI3K-dependent Akt phosphorylation ( Fig. 2d ) and downregulation of several dedifferentiation markers within 24 h ( Fig. 2e and data not shown). Next, we analyzed the dose response of cultured CMT rat Schwann cells to rhNRG1 treatment ( Fig. 2f ). Western blot analysis of PI3K-Akt and Mek-Erk pathway proteins revealed an impaired response of CMT rat Schwann cells to rhNRG1 ( Fig. 2f ), but we observed a decrease of dedifferentiation markers after rhNRG1 stimulation ( Supplementary Fig. 1g ). Indeed, when we quantified the relative activation of both pathways, we found that the ratio between p-Akt/Akt and p-Erk/Erk was shifted toward augmented p-Akt/Akt levels with increasing rhNRG1 concentrations ( Fig. 2f,g ). These data suggest that the balanced activation of both signaling pathways improves the differentiation of diseased Schwann cells. In order to explore the therapeutic potential of NRG1 for CMT1A disease, we applied a genetic approach and crossbred a CMT1A mouse model with a moderate overexpression of human PMP22 ( Tg(PMP22)C61Clh , termed PMP22 tg mice 33 ) to two different Nrg1 -transgenic mouse lines, C57BL/6- Tg(Thy1-Nrg1*III)1Kan +/− (ref. 29 ) and C57BL/6- Tg(Thy1-Nrg1*I)1Kan +/− (ref. 29 ), here termed Nrg1(III) and Nrg1(I) tg mice, respectively. Similar to CMT rats, PMP22 tg mice showed increased mRNA expression of dedifferentiation markers ( Fig. 3a ), along with reduced PI3K-Akt signaling and concomitant hyperactivation of the Mek-Erk pathway ( Fig. 3b ). Notably, we found that the overexpression of the soluble Nrg1 type I isoform in PMP22 tg mice decreased the abnormal mRNA levels of dedifferentiation markers ( Fig. 3a ), whereas differentiation-associated myelin protein genes were largely unaffected ( Supplementary Fig. 1h and data not shown). The improved Schwann cell phenotype of PMP22 × Nrg1(I) double-transgenic mice was accompanied by higher p-Akt and lower p-Erk levels ( Fig. 3b ). Thus, exposure of PMP22 -transgenic mouse Schwann cells to (soluble) Nrg1 type I in vivo improves intracellular signaling and expression of dedifferentiation markers. Figure 3: Nrg1(I) transgenic overexpression improves CMT1A disease in PMP22 -transgenic mice. ( a ) qPCR of sciatic nerve mRNA extract from male wild-type ( n = 5), PMP22 tg ( n = 4) and PMP22 × Nrg1(I) double-transgenic ( n = 4) mice aged 6 months for the dedifferentiation markers Jun and Sox2 (mean ± s.d., one-way ANOVA, Dunnett's post hoc test).
( b ) Western blot analyses (left) and respective quantitations (right) of sciatic nerve protein lysates from male wild-type ( n = 3), PMP22 tg ( n = 3) and PMP22 × Nrg1(I) double-transgenic ( n = 3) mice aged 6 months against phosphorylated and total Akt and Erk (mean ± s.d., Mann-Whitney U -test). ( c ) Representative light microscopic images of sciatic nerve cross-sections from male wild-type, PMP22 tg, PMP22 tg × Nrg1(III) tg and PMP22 tg × Nrg1(I) tg mice aged 6 months. Asterisks indicate demyelinated fibers and arrowheads point to 'onion bulbs' (scale bars, 10 μm). ( d ) Quantification of the total number of myelinated axons per sciatic nerve cross-section (as shown in c ) in male wild-type ( n = 8), PMP22 tg ( n = 4), PMP22 tg × Nrg1(III) tg ( n = 3) and PMP22 tg × Nrg1(I) tg ( n = 5) mice aged 6 months (mean ± s.d., one-way ANOVA, Dunnett's post hoc test). ( e ) Representative electrophysiological traces of sciatic nerve recordings from male wild-type, PMP22 tg, PMP22 tg × Nrg1(III) tg and PMP22 tg × Nrg1(I) tg mice after distal (d) and proximal (p) stimulation (arrowhead up, stimulus; arrowhead down, distal motor latency). ( f ) Quantification of the CMAP from e (all male; wild-type n = 5, PMP22 tg n = 6, PMP22 tg × Nrg1(I) tg n = 6, PMP22 tg × Nrg1(III) tg n = 3; mean ± s.d., one-way ANOVA, Dunnett's post hoc test). ( g ) Threshold electrotonus measurements in tail motor nerves of wild-type ( n = 6), PMP22 tg ( n = 8) and PMP22 tg × Nrg1(I) tg ( n = 11) mice with depolarizing test pulses on top of a hyperpolarizing conditioning pulse (all male; left, threshold reduction plot; right, quantification with mean ± s.d., Mann-Whitney U -test). * P < 0.05; ** P < 0.01; *** P < 0.001. We then performed a histological analysis of PMP22 × Nrg1(I) double-transgenic mice at the age of 6 months, i.e., when PMP22 tg mice demonstrate a reduced number of myelinated axons and demyelination of peripheral nerves ( Fig. 3c,d ). Notably, in PMP22 × Nrg1(I) double-transgenic mice, the number of myelinated axons of the sciatic nerve reached wild-type levels ( Fig. 3c,d ). We found that the preservation of myelinated axons was specific to the neuronal overexpression of Nrg1 type I, as PMP22 × Nrg1(III) double-transgenic mice showed no histological improvement compared to PMP22 tg mice ( Fig. 3c,d ). Electrophysiological recordings of the sciatic nerve revealed an increased compound muscle action potential (CMAP), as well as an improved nerve excitability (as assessed by threshold electrotonus measurements) specifically in PMP22 × Nrg1(I) double-transgenic mice ( Fig. 3e–g and data not shown). In line with this, Nrg1 overexpression restored the reduced number of neuromuscular junctions in PMP22 tg mice to wild-type levels ( Supplementary Fig. 1i ). The impaired nerve conduction velocity (NCV) in PMP22 tg mice was not altered by Nrg1 type I overexpression, in line with the unaltered myelin sheath thickness ( Supplementary Fig. 1j,k ). This potential of soluble Nrg1 type I to preserve axons and support axonal function in PMP22 tg mice prompted us to investigate the therapeutic effect of rhNRG1 in a preclinical experimental therapeutic study involving the CMT rat. NRG1 has been considered for pharmacological therapies, for example, in cardiovascular diseases 32 . When systemically applied, recombinant human NRG1 (the EGF-like domain, rhNRG1) mediates ErbB receptor–dependent effects in target cells in vivo 34 .
In rodent models, applied doses range from 1 μg kg −1 body weight to 1 mg kg −1 , and clinical trials have been performed with systemic applications of 0.3 to 2.4 μg kg −1 rhNRG1 (refs. 32 , 34 , 35 ). We first treated CMT rats every other day with different doses of rhNRG1 (the EGF-like domain, injected intraperitoneally) from P6 to P18 ( Supplementary Fig. 2a ). Treated rats appeared healthy, and we observed no adverse effects, except for some weight loss at higher doses ( Supplementary Fig. 2b ). RhNRG1 treatment strongly improved motor performance of CMT rats at P18 in a dose-dependent manner, and the dose of 1 μg kg −1 rhNRG1 produced the largest gain in grip strength without affecting body weight ( Fig. 4a and Supplementary Fig. 2b ). Histologically, rhNRG1-treated CMT rats showed an increased number of myelinated axons in sciatic nerves that reached wild-type levels ( Fig. 4b,c ). Notably, NRG1 treatment also restored the reduced axonal caliber of P18 CMT rats ( Supplementary Fig. 2c ). Myelin sheath thickness was unaltered by NRG1 treatment ( Supplementary Fig. 2d ). Whereas differentiation-associated genes were not influenced by rhNRG1 therapy at the transcriptional level ( Supplementary Fig. 2e ), the mRNA and protein expression of dedifferentiation markers was decreased in rhNRG1-treated CMT rats, which also showed an improved balance between PI3K-Akt and Mek-Erk signaling ( Fig. 4d,e and Supplementary Fig. 2f ). Figure 4: Treatment of CMT rats with exogenous rhNRG1 during early postnatal development induces a long-term improvement of the disease. ( a ) Hind-limb grip strength analysis of male P18 wild-type ( n = 6) and CMT ( Pmp22 tg) rats, treated with different doses of rhNRG1 from P6 to P18 every 2 d (nontreated n = 13, 0.1 μg kg −1 n = 4, 1 μg kg −1 n = 11, 10 μg kg −1 n = 3, 50 μg kg −1 n = 6; one-way ANOVA, Dunnett's post hoc test, mean ± s.d.; asterisks over blue bars represent significant difference from the Pmp22 tg control group in red). ( b ) Representative electron micrographs of sciatic nerve cross-sections from male P18 wild-type, CMT ( Pmp22 tg) untreated and CMT NRG1-treated (1 μg kg −1 ) rats (scale bars, 5 μm). ( c ) Quantification of the total number of myelinated axons per sciatic nerve at P18 at the light microscopic level in male wild-type and CMT rats treated with or without 1 μg kg −1 NRG1 (wild-type n = 4, Pmp22 tg n = 3, Pmp22 tg + NRG1 n = 5, mean ± s.d., one-way ANOVA, Dunnett's post hoc test). ( d ) Immunohistochemistry of cJun- and Sox2-positive (cJun pos. and Sox2 pos.) Schwann cell nuclei in P18 sciatic nerve cross-sections of male wild-type and CMT ( Pmp22 tg) rats with or without NRG1 treatment (1 μg kg −1 ; wild-type and Pmp22 tg n = 3 per group, Pmp22 tg + NRG1 n = 4; mean ± s.d., one-way ANOVA, Dunnett's post hoc test, scale bars, 10 μm). ( e ) Western blot analysis (left) with respective quantifications (right) of sciatic nerve protein lysates from male P18 wild-type, CMT ( Pmp22 tg) and CMT + NRG1 (1 μg kg −1 ) rats against phosphorylated and total Akt and Erk (mean ± s.d., wild-type n = 3, Pmp22 tg n = 4, Pmp22 tg + NRG1 n = 3, one-way ANOVA, Dunnett's post hoc test). ( f ) Hind-limb grip strength analysis at P90 of male wild-type rats ( n = 4), CMT rats ( Pmp22 tg, n = 8) and CMT rats treated with NRG1 (1 μg kg −1 ) from P6 to P18 ( n = 6; one-way ANOVA, Dunnett's post hoc test). NS, not significant.
( g ) Representative electrophysiological traces of tail motor nerve recordings at P90 from male wild-type rats, CMT ( Pmp22 tg) rats and CMT rats treated with NRG1 from P6 to P18 (arrowhead up, stimulus; arrowhead down, distal motor latency). ( h ) Quantification of the CMAP (right) and NCV (left) from g (wild-type n = 4, Pmp22 tg n = 7, Pmp22 tg + NRG1 n = 5, all male, mean ± s.d., one-way ANOVA, Dunnett's post hoc test). ( i ) Light microscopical quantification of the total number of myelinated axons per sciatic nerve at P90 in male wild-type rats, CMT ( Pmp22 tg) rats and Pmp22 tg rats treated with NRG1 from P6 to P18 (wild-type n = 5, Pmp22 tg n = 12, Pmp22 tg + NRG1 n = 6, mean ± s.d., one-way ANOVA, Dunnett's post hoc test). ( j ) Working model showing that Pmp22 overexpression negatively interferes with the PI3K-AKT-mTOR signaling pathway, which subsequently leads to increased activity of the Ras-Raf-MEK-ERK pathway owing to a reduced feedback inhibition. Unbalanced activity between the two pathways induces a perturbed Schwann cell differentiation, resulting in a loss of long-term axonal support (left). NRG1 treatment restores the balance between PI3K-AKT and MEK-ERK signaling and ameliorates Schwann cell differentiation, thereby ensuring long-term nerve function. * P < 0.05; ** P < 0.01; *** P < 0.001. As early rhNRG1 treatment overcomes the early developmental phenotype of CMT1A disease, we asked whether a therapy starting after the onset of a visible Schwann cell differentiation defect would still be effective. We therefore treated CMT rats from P18 to P90 and analyzed them at P90. Notably, we detected an amelioration of the clinical phenotype, but the number of myelinated axons in sciatic nerves, as well as the CMAP, of treated CMT rats showed only a minor, nonsubstantial improvement compared to untreated CMT rats ( Supplementary Fig. 3a ). The NCV was unaltered by rhNRG1 therapy ( Supplementary Fig. 3b ). Thus, to improve the number of myelinated axons, rhNRG1 treatment should commence within the correct time window. On the other hand, PMP22 × Nrg1(I) double-transgenic mice showed clinical improvements in adulthood, i.e., at the age of 6 months ( Fig. 3c–g ). We therefore hypothesized that early rhNRG1 treatment might also exert long-term beneficial effects in CMT rats. Indeed, when we treated CMT rats between P6 and P18 and analyzed the animals as young adults (at P90), we found an ameliorated motor performance of rats in the rhNRG1-treated group that almost reached wild-type levels ( Fig. 4f ). Electrophysiological recordings, demonstrating a mixed phenotype of axonal loss and myelinopathy in CMT rats at P90, showed an improved CMAP with an unaltered NCV in rhNRG1-treated rats ( Fig. 4g,h ). Quantification of axon numbers revealed an increase in myelinated fibers in treated rats that reached normal levels ( Fig. 4i ). RhNRG1 treatment furthermore resulted in a preservation of Remak fibers, as well as in an increased (total) number of axons, as assessed by electron microscopy ( Supplementary Fig. 3b ). In line with this, western blot analysis revealed a reduction of nonphosphorylated neurofilaments in NRG1-treated versus untreated CMT rats ( Supplementary Fig. 3c ). We conclude that soluble NRG1 harbors the potential to rescue the differentiation defect of Pmp22 -overexpressing Schwann cells and can ameliorate CMT1A disease in rodent models ( Fig. 4j shows a working model).
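The NCV values reported alongside the CMAP in these recordings follow from the latency measurements described in the Methods below: the conduction velocity is the distance between the proximal and distal stimulation sites divided by the latency difference. A minimal sketch, with hypothetical rather than measured numbers:

```python
# Sketch of the nerve conduction velocity (NCV) calculation used in electroneurography:
# NCV = inter-electrode distance / (proximal latency - distal latency).
# The distance and latencies below are hypothetical, not measured values from the study.
def ncv_m_per_s(distance_mm, proximal_latency_ms, distal_latency_ms):
    dt_ms = proximal_latency_ms - distal_latency_ms
    if dt_ms <= 0:
        raise ValueError("proximal latency must exceed distal latency")
    return distance_mm / dt_ms  # mm/ms is numerically equal to m/s

# Example: 40 mm between stimulation sites, 2.1 ms proximal vs 1.1 ms distal latency.
print(f"NCV = {ncv_m_per_s(40.0, 2.1, 1.1):.1f} m/s")  # -> 40.0 m/s
```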
Improvement of the clinical phenotype and activation of PI3K signaling in Schwann cells upon rhNRG1 therapy occur in a dose-dependent manner, and a dose of 1 μg kg −1 rhNRG1 was well tolerated without detectable side effects. We assume that the beneficial effect of rhNRG1 treatment in CMT1A is caused by effects on ErbB receptor–expressing Schwann cells, but we cannot rule out additional systemic effects. CMT rats are most receptive to rhNRG1 treatment within a 2-week time window during early postnatal development, and treatments that begin at later ages (from P18 on) are less effective. We show that the inhibition of Mek-Erk signaling decreases the expression of dedifferentiation-associated genes, and Erk inhibition has previously been shown to ameliorate CMT1A pathology in a mouse model 22 . Although the molecular downstream consequences of increased Erk activity are not entirely understood, Mek-Erk signaling has evolved as a potent negative regulator of Schwann cell differentiation and mediates Schwann cell responses in Wallerian degeneration 23 . We here demonstrate that, in CMT rats, reduced activity of PI3K-Akt signaling accounts for the hyperactivity of the Mek-Erk pathway. Early rhNRG1 therapy induces PI3K-Akt activity and restores the disturbed ratio of PI3K-Akt and Mek-Erk signaling. This does not affect the mRNA levels of differentiation genes, but it reduces the expression of dedifferentiation and immaturity markers, in line with the results obtained from the direct inhibition of Mek-Erk signaling in CMT rat Schwann cells. Eventually, rhNRG1 therapy enables CMT rat Schwann cells to differentiate properly, which enhances the potential of diseased Schwann cells to myelinate axons. RhNRG1 therapy thereby appears to protect axon function and halts disease progression at least until early adulthood. Early rhNRG1 treatment induces and accelerates ongoing myelination during postnatal development, which reduces the number of axons larger than 1 μm in diameter that are arrested at the promyelin stage at P18. Between P18 and P90, the majority of the remaining unmyelinated fibers >1 μm are lost in untreated CMT rats, suggesting that axons arrested at the promyelin stage are particularly prone to degeneration in CMT1A. Whereas the sum of myelinated and unmyelinated axons larger than 1 μm decreases between P18 and P90 in untreated CMT rats compared to wild-type controls, early rhNRG1 therapy (P6–P18) preserves those fibers, with treated CMT rats displaying wild-type levels of myelinated axons at P90. PMP22 is expressed in mature nonmyelinating Remak Schwann cells 36 , and patients with CMT1A suffer from sensory symptoms involving nonmyelinated small caliber axons 4 , 37 . It has been previously shown that Remak Schwann cells undergo dedifferentiation in response to acute nerve injury 38 . Remak Schwann cells in CMT rats also express the dedifferentiation marker c-Jun, suggesting that impaired differentiation is also a feature of nonmyelinating Schwann cells in CMT1A. As NRG1 therapy promotes the survival of Remak fibers, NRG1 treatment not only directly targets myelination but also affects long-term axonal support of fibers engulfed by nonmyelinating Schwann cells. CMT1A is often referred to as a primary demyelinating disease evolving within the first two decades of life 8 , 9 .
We here suggest a model in which a PMP22-dependent cellular defect acquired during early postnatal development chronically alters Schwann cell differentiation and leads, over time, to secondary demyelination and onion bulb formation ( Fig. 4j ). We demonstrate a proof of principle that treatment of CMT rats with rhNRG1 within a short time window early in postnatal development improves Schwann cell differentiation and subsequently protects myelinated and physiologically nonmyelinated axons at least until early adulthood. Hence, the therapeutic effect does not fade immediately after rhNRG1 treatment ends, although fading at later time points cannot be formally ruled out. An equivalent treatment effect in patients with CMT1A, resulting in a lower baseline disability score in young adulthood, would at least markedly delay the impending disease course, even if the rescued fibers were to degenerate at much later disease stages. With the increasing clinical awareness of CMT1A disease manifestation in children and the availability of genetic testing, a therapeutic approach that has to be applied before disease onset may become relevant in the future for patients with early diagnoses of CMT1A. A transiently altered Schwann cell differentiation phenotype is part of an efficient regenerative process after acute nerve injury. However, a persistent Schwann cell differentiation defect appears to be harmful for long-term nerve function and axonal integrity. We note that perturbed Schwann cell differentiation and abnormal dedifferentiation phenotypes of Schwann cells have also been reported for other peripheral neuropathies 39 , 40 . Treatment with NRG1 should therefore be explored as a therapeutic strategy for diseases of the peripheral nervous system in which Schwann cells are driven into abnormal dedifferentiation. Methods Mutant and transgenic animals. The generation and genotyping of Pmp22 transgenic rats 1 (SD- Tg(Pmp22)Kan ) and mice 33 ( Tg(PMP22)C61Clh ), as well as of mice hemizygous for Pmp22 (ref. 19 , Pmp22 tm1Ueli ) and transgenic for Nrg1 type I (C57BL/6- Tg(Thy1–Nrg1*I)1Kan +/− ) and type III (C57BL/6- Tg(Thy1–Nrg1*III)1Kan +/− ; ref. 29 ), has been described. For PCR, we isolated genomic DNA from tail biopsies, using the Invisorb Spin Tissue Mini Kit (Invitek), according to the manufacturer's directions. For routine genotyping, we used PCR primers in a coamplification reaction. Primer sequences are given below. All animal experiments were conducted according to the Lower Saxony State regulations for animal experimentation in Germany as approved by the Niedersächsische Landesamt für Verbraucherschutz und Lebensmittelsicherheit (LAVES) and in compliance with the guidelines of the Max Planck Institute of Experimental Medicine. Inclusion and exclusion criteria were preestablished. Animals were randomly included into the experiments according to genotyping results, age and weight. Animals were excluded before experiments in the case of an impaired health condition or a deviation of more than 10% from the average group weight. Exclusion criteria during or after an experiment comprised an impaired health condition of individual animals not attributable to genotype or treatment, or weight loss >10% of the group average. No animals had to be excluded owing to illness or weight loss in any of the performed experiments. Exclusion criteria regarding the outcome assessment were determined with an appropriate statistical test, Grubbs' test (or ESD method), using the statistical software GraphPad (Prism).
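For readers who want to reproduce the outlier criterion just named, the two-sided Grubbs (extreme studentized deviate) test can be coded directly from its textbook formula; SciPy offers no built-in version, so it is written out below. The sample values are hypothetical grip-strength readings, not study data.

```python
# Sketch of a two-sided Grubbs outlier test (the ESD criterion named in the Methods).
# Coded from the textbook formula; the sample values are hypothetical readings.
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)     # test statistic G
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)          # t critical value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    suspect = x[np.argmax(np.abs(x - x.mean()))]
    return suspect, g, g_crit, g > g_crit

values = [4.1, 4.3, 3.9, 4.2, 4.0, 2.1]   # one suspiciously low reading
suspect, g, g_crit, reject = grubbs_outlier(values)
print(f"suspect={suspect}, G={g:.2f}, G_crit={g_crit:.2f}, outlier={reject}")
```

The test flags at most one value per pass, so in practice it is applied iteratively when several outliers are suspected.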
Animal experiments (phenotype analyses, electrophysiology and histology) were conducted in a single-blinded fashion, with the investigator blinded to group assignment. Selection of animal samples out of different experimental groups for molecular biology/histology/biochemistry was performed randomly and in a blinded fashion. Therapeutic and surgical procedures. Randomly chosen male transgenic rats were treated with recombinant human neuregulin-1, i.e., the EGF-like domain (rhNRG1, Reprokine), which was diluted to the respective concentrations with 0.9% NaCl and injected intraperitoneally every second day. Pharmacological experiments were performed by intraneural injections in 28-d-old rats. Rats were anesthetized with ketamine hydrochloride/xylazine hydrochloride (100 mg per kg body weight and 8 mg per kg body weight), and the sciatic nerve was exposed. 5 μl of colored carrier solution containing 10 μg of the Mek inhibitor U0126 or 10 ng rhNRG1, either with or without 10 μg of the PI3K inhibitor LY294002, was injected into the sciatic nerves using a glass capillary and a hand-held Hamilton syringe (Hamilton, Bonaduz, Switzerland). Motor performance. All phenotype analyses were performed by the same investigator who was blinded toward genotype and treatment arm. Motor performance of CMT rats was assessed in standardized grip strength tests for hind limbs, as described previously 41 . Hind-limb grip strength was measured by supporting the forelimbs and pulling the animal's tail toward a horizontal bar connected to a gauge. The maximum force (measured in newtons) exerted onto the T-bar before the animals lost grip was recorded, and the mean of 5 repeated measurements was calculated. Electron microscopy and morphometry. Rats and mice were perfused with 4% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M PBS. Sciatic nerves were removed, contrasted with osmium tetroxide and Epon embedded. Semi-thin cross-sections (0.5 μm) were cut using a microtome (Leica, RM 2155) with a diamond knife (Histo HI 4317, Diatome). Sections were stained with azur II–methylene blue for 1 min at 60 °C. Light microscopic observations were carried out with a 100× lens (Leica DMRXA), and images were digitized and analyzed with Openlab 3.1.1 and Scion Image software. Axonal counts were performed on total sciatic nerve cross-sections. For electron microscopy (EM) of sciatic nerves, ultrathin (50–70 nm) cross-sections were stained with 1% uranyl acetate solution and lead citrate and analyzed using a Zeiss EM10 or EM109 (Leo). The g-ratio was determined by dividing the circumference of an axon (without myelin) by the circumference of the same axon including the myelin sheath. At least 100 fibers per animal were analyzed. Immunohistochemistry. For standard immunohistochemical analysis, rats and mice were perfused with 4% PFA in 0.1 M phosphate buffer. Dissected sciatic nerve tissue was embedded in paraffin. Sections or samples were incubated overnight with primary antibodies against p-AKT (#3787) and p-ERK (#9101) (both pRb; 1:1,000; Cell Signaling), peripherin (pRb, 1:500, Chemicon), cJUN (mM, 1:500, BD Pharmingen), SOX2 (mM, 1:500, Calbiochem) and peripherin (pRb, 1:1,000, Chemicon). Samples were further incubated with their corresponding secondary cyanine dyes (1:1,000, Jackson ImmunoResearch) for 1 h at room temperature. Complete cross-sections of the sciatic nerve were analyzed. The quantification of neuromuscular junctions (NMJs) was performed with cryo-embedded gastrocnemius muscles.
Sagittal cryosections of 60-μm thickness were incubated with Alexa 488–coupled α-bungarotoxin (1:1,000, B13422, Invitrogen), and the mean number of NMJs was quantified in 4 consecutive sections per animal. Digital images of stained sections were obtained by light microscopy (Zeiss Axiophot) and Openlab 3.1.1 software (Improvision). Images were processed using NIH ImageJ, Photoshop CS (Adobe) and Illustrator 10 (Adobe) software. RNA analysis. Total RNA was extracted from sciatic nerves using Qiazol Reagent, whereas RNA from cell culture was purified using RLT lysis buffer, each according to the manufacturer's instructions (Qiagen). The integrity of purified RNA was confirmed using the Agilent 2100 Bioanalyser (Agilent Technologies). For RT-PCR analysis, cDNA was synthesized from total RNA using poly-thymine and random nonamer primers and Superscript III RNase H reverse transcriptase (Invitrogen). Quantitative real-time PCR was carried out using the Roche LC480 Detection System and SYBR Green Master Mix according to the manufacturer (Applied Biosystems). Reactions were carried out in four replicates. The relative quantity (RQ) of RNA was calculated using LC480 Software (Roche). Results were depicted as histograms (generated by Microsoft Excel 2003) of normalized RQ values, with the mean RQ value in the given control group normalized to 100%. As internal standards, peptidylprolyl isomerase A ( Ppia ) and ribosomal protein, large, P0 ( Rplp0 ) were used. PCR primer sequences are given below. Protein analysis. Sciatic nerves and cultured cells were transferred to sucrose lysis buffer (320 mM sucrose, 10 mM Tris, 1 mM NaHCO 3 , 1 mM MgCl 2 ) containing protease (Complete, Roche) and phosphatase (PhosSTOP, Roche) inhibitors and homogenized using the Precellys homogenizer (Peqlab). Detection of immunolabeled protein was performed using chemiluminescence (NEN Life Science Products). Representative results from at least three independent experiments are shown. Western blots were incubated overnight with primary antibodies against p-AKT (#3787), AKT (#9275), p-ERK (#9101), ERK (#4695) (all pRb; 1:1,000, Cell Signaling) and SMI31 and SMI32 (both monoclonal mouse, 1:1,000, Covance). Primary cell culture. Rat Schwann cells were prepared from sciatic nerves of four newborn rats (4 d old) as described previously 42 . For cell expansion, media were supplemented with 10 ng per ml of the recombinant human neuregulin-1 EGF-like domain (rhNRG1, Reprokine) and 4 μM forskolin (SIGMA). Cells were deprived of rhNRG1 for 1 week before freezing and storage. For experiments, independent Schwann cell preparations (as biological replicates) were defrosted, replated and cultured on resting medium (DMEM + 10% FCS) for 3 d. Before experimental treatment with rhNRG1 or the specific PI3K activator (740YP, Tocris), Schwann cells were kept on serum-reduced medium (1% FCS) for 1 d. Blocking experiments were performed using the PI3K inhibitor LY294002 (Cell Signaling) or the mTOR inhibitor Rapamycin (Cell Signaling), which was added to the medium 1 h before rhNRG1 treatment. Electrophysiology. Standard electroneurography was performed on rats and mice that were anesthetized with ketamine hydrochloride/xylazine hydrochloride (100 mg per kg body weight/8 mg per kg body weight). A pair of steel needle electrodes (Schuler Medizintechnik, Freiburg, Germany) was placed subcutaneously along the nerve at the sciatic notch (proximal stimulation). A second pair of electrodes was placed along the tibial nerve above the ankle (distal stimulation).
Supramaximal square wave pulses lasting 100 ms were delivered using a Toennies Neuroscreen (Jaeger, Hoechsberg, Germany). Compound muscle action potential (CMAP) was recorded from the intrinsic foot muscles using steel electrodes. Both amplitudes and latencies of CMAP were determined. The distance between the two sites of stimulation was measured along the skin surface with the legs fully extended, and nerve conduction velocities (NCVs) were calculated automatically from sciatic nerve latency measurements. For threshold tracking measurements (refs. 43 , 44 ), mice were placed in a prone position, the right back was shaved just lateral to the lumbar vertebrae and the anode was firmly attached to the exposed disinfected skin. Ag-AgCl disk electrodes (Asmuth GmbH Medizintechnik, 43 mm × 45 mm) were used for stimulation, and stainless steel acupuncture needles were used for recording and reference. Electrodes were positioned as previously described for mouse models 45 . Stimuli were generated by a PC (Dell Optiplex 745) running QTRAC software with the TRONDHW protocol using a DS5 Bipolar Constant Current Stimulator (Digitimer, UK). EMG signals were amplified and filtered (2 Hz to 2 kHz, D360 Isolated Patient Amplifier System, Digitimer Ltd., UK) and then digitized using an analog-to-digital acquisition board (NI USB-6251, National Instruments) at a 10-kHz sampling rate. Line interference was removed with an online noise eliminator (HumBug, Quest Scientific, North Vancouver, Canada). QTRAC software (Institute of Neurology, London) running the TRONDNW program was used to control peripheral nerve stimulation. CMAP target amplitude was set at 40%. For threshold electrotonus measurements, caudal nerves were exposed to subthreshold hyperpolarizing stimuli. The same conditioning stimuli (−40% of the stimulus intensity used to evoke the target 40% CMAP amplitude) were applied sequentially before the test stimulus at increasing intervals between 0 and 200 ms in 10-ms increments. Primer sequences.
Genotyping primers:
Pmp22 tg rats: sense 5′-CCAGAAAGCCAGGGAACTC-3′, antisense 5′-GACAAACCCCAGACAGTTG-3′
PMP22 tg mice: sense 5′-TCAGGATATCTATCTGATTCTC-3′, antisense 5′-AAGCTCATGGAGCACAAAACC-3′
Pmp22 +/− mice: sense 5′-GCATCGAGCGAGCACGTAC-3′, antisense 5′-ACGGGTAGCCAACGCTATGTC-3′
Nrg1(I) tg mice: sense 5′-CTGGTAGAGCTCCTCCGCTTC-3′, antisense 5′-TGGCAAAGGACCTTAGGCAGTGT-3′
Nrg1(III) tg mice: sense 5′-CGATAAGTTTAGGAGCAGTTTGCAG-3′, antisense 5′-TGGCAAAGGACCTTAGGCAGTGT-3′
qPCR primers:
Rplp0 (rat): sense 5′-GATGCCCAGGGAAGACAG-3′, antisense 5′-CACAATGAAGCATTTTGGGTAG-3′
Rplp0 (mouse): sense 5′-GATGCCCAGGGAAGACAG-3′, antisense 5′-ACAATGAAGCATTTTGGATAATCA-3′
Ppia (rat): sense 5′-AGCACTGGGGAGAAAGGATT-3′, antisense 5′-AGCCACTCAGTCTTGGCAGT-3′
Ppia (mouse): sense 5′-CACAAACGGTTCCCAGTTTT-3′, antisense 5′-TTCCCAAAGACCACATGCTT-3′
Hmgcr (rat): sense 5′-CAACCTTCTACCTCAGCAAGC-3′, antisense 5′-CACAGTGCCACACACAATTCG-3′
Hmgcr (mouse): sense 5′-CAACCTTCTACCTCAGCAAGC-3′, antisense 5′-CACAGTGCCACATACAATTCG-3′
Prx (rat): sense 5′-GAGCCTCAGTTTGCAGGAAG-3′, antisense 5′-TTGTAGGGCTCGGCACAT-3′
Prx (mouse): sense 5′-GCCCCCAGGTGACTCTCT-3′, antisense 5′-CTCCACGATAATCTCCACCAA-3′
Mpz (rat): sense 5′-GTCCAGTGAATGGGTCTCAGATG-3′, antisense 5′-CTTGGCATAGTGGAAGATTGAAA-3′
Mpz (mouse): sense 5′-GTCCAGTGAATGGGTCTCAGATG-3′, antisense 5′-CTTGGCATAGTGGAAAATCGAAA-3′
Egr2 (rat): sense 5′-CTACCCGGTGGAAGACCTC-3′, antisense 5′-TCAATGTTGATCATGCCATCTC-3′
Egr2 (mouse): sense 5′-CAGTTCAACCCCTCTCCAAA-3′, antisense 5′-ACCGGGTAGAGGCTGTCA-3′
Pou3f1 (mouse/rat): sense 5′-GCGTGTCTGGTTCTGCAAC-3′, antisense 5′-AGGCGCATAAACGTCGTC-3′
p75 NTR (rat): sense 5′-GGTTGCCATCACCCTTGA-3′, antisense 5′-GACAGCGGCATCTCTGTG-3′
p75 NTR (mouse): sense 5′-CGGTGTGCGAGGACACTGAGC-3′, antisense 5′-TGGGTGCTGGGTGTTGTGACG-3′
Notch1 (rat): sense 5′-GTCATCCTCGCAATGCTTC-3′, antisense 5′-GAGGCTCCACCGTCTCAC-3′
cJun (mouse/rat): sense 5′-CCTTCTACGACGATGCCCTC-3′, antisense 5′-GGTTCAAGGTCATGCTCTGTTT-3′
Sox2 (mouse/rat): sense 5′-TCCAAAAACTAATCACAACAATCG-3′, antisense 5′-GAAGTGCAATTGGGATGAAAA-3′
Statistical analyses. For power analyses, the software G*Power version 3.1.7 was used. Power analyses were performed before conducting experiments (a priori). Adequate power (1 – β-error) was defined as ≥80% and the α-error as 5%. The sample size was calculated with the following prespecified effect sizes: (i) analysis of axonal numbers in wild-type and CMT rats and analysis of axonal numbers in wild-type, PMP22 -transgenic and PMP22 × Nrg1 double-transgenic mice, effect size d of approximately 2.5 (estimated mean difference of 7% and s.d. 3%), (ii) analysis of axonal numbers in wild-type, transgenic CMT rats treated or not with rhNRG1, effect size d of approximately 1.7 (estimated mean difference of 5% and s.d. 4%), (iii) mRNA expression analysis, effect size d approximately 2.4 (estimated mean difference of 40% and s.d. of 15%), (iv) western blot analysis, effect size d approximately 3.3 (estimated mean difference of 20% and s.d. of 5%), (v) phenotype analysis, effect size d approximately 2.2 (estimated mean difference of 25% and s.d. of 10%) and (vi) electrophysiology, effect size d approximately 2.6 (estimated mean difference of 15% and s.d. of 15%). Data are expressed as mean ± s.d. unless indicated otherwise. All data were tested for normal distribution with the Kolmogorov-Smirnov test in order to select the appropriate statistical test. Statistical differences between two groups were determined using the two-tailed Student's t -test for normally distributed data with comparable variances.
Data are expressed as mean ± s.d. unless indicated otherwise. All data were tested for normal distribution with the Kolmogorov-Smirnov test in order to select the appropriate statistical test. Statistical differences between two groups were determined using the two-tailed Student's t-test for normally distributed data with comparable variances. The nonparametric Mann-Whitney U-test was used for data showing no normal distribution or if no normality test could be applied. Data sets containing more than two groups were tested by applying analysis of variance (ANOVA) with Dunnett's post hoc test. Axon size distribution (Supplementary Fig. 2c) was analyzed using the Kolmogorov-Smirnov test. The applied statistical test is indicated for each experiment in the respective figure legend. Statistical differences were considered significant when P < 0.05 (*P < 0.05, **P < 0.01, ***P < 0.001). All statistical analyses were performed using Statistica 10.0 (StatSoft, Tulsa, USA), GraphPad Prism and MS Excel.
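The decision logic above (normality check first, then a parametric or nonparametric test) is straightforward to express in code; a minimal sketch with SciPy follows, using invented placeholder data. The KS-based normality check shown is a simplification of the study's procedure.

```python
# Sketch of the test-selection logic described above, with SciPy standing in
# for Statistica/GraphPad; the data arrays are hypothetical placeholders.
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    # Normality check against a standard normal after z-scoring
    # (a simplified Kolmogorov-Smirnov-style check).
    normal = all(stats.kstest(stats.zscore(x), 'norm').pvalue > alpha
                 for x in (a, b))
    if normal:
        return stats.ttest_ind(a, b)      # two-tailed Student's t-test
    return stats.mannwhitneyu(a, b)       # nonparametric alternative

wt  = [7.1, 6.8, 7.4, 7.0, 6.9]           # invented example values
cmt = [5.9, 6.1, 5.7, 6.0, 6.2]
print(compare_two_groups(wt, cmt))
# For >2 groups: stats.f_oneway(...) for ANOVA; Dunnett's post hoc test is
# available as scipy.stats.dunnett in SciPy >= 1.11.
```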
Applying the findings to patients, however, is still a long way off. The safety of any potential treatment must first of all be studied, and various substances that can imitate the neuregulin-1 signalling pathway will be tested. The recently established Germany-wide network for Charcot-Marie-Tooth disease (CMT-NET) should help in this regard. Patients, researchers and doctors will be able to use the network in Germany to find out about the progress being made in researching this little-known disease. | 10.1038/NM.3664 |
Biology | 15,000-year-old viruses discovered in Tibetan glacier ice | Zhi-Ping Zhong et al, Glacier ice archives nearly 15,000-year-old microbes and phages, Microbiome (2021). DOI: 10.1186/s40168-021-01106-w | http://dx.doi.org/10.1186/s40168-021-01106-w | https://phys.org/news/2021-07-year-old-viruses-tibetan-glacier-ice.html | Abstract Background Glacier ice archives information, including microbiology, that helps reveal paleoclimate histories and predict future climate change. Though glacier-ice microbes are studied using culture or amplicon approaches, more challenging metagenomic approaches, which provide access to functional, genome-resolved information and viruses, are under-utilized, partly due to low biomass and potential contamination. Results We expand existing clean sampling procedures using controlled artificial ice-core experiments and adapted previously established low-biomass metagenomic approaches to study glacier-ice viruses. Controlled sampling experiments drastically reduced mock contaminants including bacteria, viruses, and free DNA to background levels. Amplicon sequencing from eight depths of two Tibetan Plateau ice cores revealed common glacier-ice lineages including Janthinobacterium , Polaromonas , Herminiimonas , Flavobacterium , Sphingomonas , and Methylobacterium as the dominant genera, while microbial communities were significantly different between two ice cores, associating with different climate conditions during deposition. Separately, ~355- and ~14,400-year-old ice were subject to viral enrichment and low-input quantitative sequencing, yielding genomic sequences for 33 vOTUs. These were virtually all unique to this study, representing 28 novel genera and not a single species shared with 225 environmentally diverse viromes. Further, 42.4% of the vOTUs were identifiable temperate, which is significantly higher than that in gut, soil, and marine viromes, and indicates that temperate phages are possibly favored in glacier-ice environments before being frozen. In silico host predictions linked 18 vOTUs to co-occurring abundant bacteria ( Methylobacterium , Sphingomonas , and Janthinobacterium ), indicating that these phages infected ice-abundant bacterial groups before being archived. Functional genome annotation revealed four virus-encoded auxiliary metabolic genes, particularly two motility genes suggest viruses potentially facilitate nutrient acquisition for their hosts. Finally, given their possible importance to methane cycling in ice, we focused on Methylobacterium viruses by contextualizing our ice-observed viruses against 123 viromes and prophages extracted from 131 Methylobacterium genomes, revealing that the archived viruses might originate from soil or plants. Conclusions Together, these efforts further microbial and viral sampling procedures for glacier ice and provide a first window into viral communities and functions in ancient glacier environments. Such methods and datasets can potentially enable researchers to contextualize new discoveries and begin to incorporate glacier-ice microbes and their viruses relative to past and present climate change in geographically diverse regions globally. 
{ "name": "40168_2021_1106_MOESM1_ESM.mp4", "description": "40168_2021_1106_MOESM1_ESM.mp4", "thumbnailUrl": " "uploadDate": "2021-07-03T07:46:02.000+0000", "contentUrl": " "duration": "PT1M57.08S", "@context": " "@type": "VideoObject" } Video Abstract Background The first reports of microbes in glacier ice appeared early in the twentieth century [ 1 ] but were largely ignored until the 1980s when microbes were investigated in the deep Vostok ice core [ 2 ] and subsequent studies near the end of the 1990s (reviewed in [ 3 , 4 , 5 , 6 ]). These studies revealed microbial cell concentrations of 10 2 to 10 4 cells ml −1 in most glacier-ice samples [ 4 ], which are several orders of magnitude lower than other environments such as seawater or soils [ 7 ]. The microbes identified in glacier cores potentially represent the microbes in the atmosphere at the time of their deposition [ 3 , 8 ], though we cannot rule out post-deposition metabolisms of microbes [ 9 ]. Microbial communities of glacier cores were reported to correlate with variations in dust and ion concentrations [ 10 , 11 , 12 , 13 , 14 ]. A long temporal record (27k to 9.6k years before present) of prokaryotic cell concentration from a deep West Antarctic ice core revealed that airborne prokaryotic cell deposition differed during the Last Glacial Maximum, Last Deglaciation, and Early Holocene periods [ 8 ]. Hence, the glacier-ice microbes may reflect climatic and environmental conditions during that time of deposition [ 3 ]. Taxonomically, Proteobacteria , Actinobacteria , Firmicutes , and Bacteroidetes are the dominant bacterial phyla found in ice cores [ 4 , 15 , 16 , 17 ]. Bacteria of above phyla have been successfully cultured from very old frozen glacier ice [ 18 , 19 , 20 , 21 ], including some that were believed to have been preserved for >750,000 years [ 19 ] because of the subzero temperatures and low water activities within the ice matrix. Some bacteria were preserved as spores in glacier ice [ 22 , 23 ]. Although there is currently no direct evidence for in situ activity, some studies have hinted at the possibility of microbial activity in frozen glacier ice based on the detection of some excess gases (e.g., CO 2 , CH 4 , and N 2 O), which may be produced by post-depositional microbial metabolism [ 24 , 25 , 26 ]. Though most ice core microbiological studies have focused on microbial communities using culture-dependent and culture-independent (e.g., 16S rRNA gene amplicon sequencing) methods, and how to use them to understand past climatic and environmental conditions archived in the glaciers [ 3 , 4 , 5 , 6 ], there have been only two reports of viruses in ancient glacier ice. One detected the atmosphere-originated tomato mosaic tobamovirus RNA in a 140,000-year-old Greenland ice core using reverse-transcription PCR amplification [ 27 ], and the other reported the presence of virus-like particles (VLPs) deep (i.e., 2749- and 3556-m depth) in the Vostok ice core using transmission electron microscopy [ 3 ]. Ancient viruses were also reported from other environments such as permafrost [ 28 ] and frozen animal feces [ 29 ]. The viral abundance was reported to range from 0.1 to 5.6 × 10 5 VLPs ml −1 in the surface ice (top 90 cm) of two Arctic glaciers in Svalbard [ 30 ], while the cryoconite holes on the surface of some glaciers possess abundant and active viral communities [ 30 , 31 , 32 ]. 
For example, 10⁸ to 10⁹ VLPs g⁻¹ of sediment and viral production rates of 10⁷ to 10⁸ VLPs g⁻¹ h⁻¹ were detected in Arctic cryoconite holes [ 31 ]. However, virtually nothing is known about archived ancient glacier-ice viral genomes or communities, which might have been active on glacier surfaces before being frozen tens of thousands of years ago. If other microbial ecosystems are any indication, viruses were likely major players in these archived communities before they were frozen. For example, in marine systems, viruses are abundant (10⁶ to 10⁹ particles ml⁻¹ of seawater [ 33 ]), with virulent viruses altering microbial communities through lysis, horizontal gene transfer, and metabolic reprogramming (e.g., [ 34 , 35 , 36 , 37 , 38 ]), and temperate viruses modulating host gene regulation and providing novel niche-defining features [ 39 ]. In the cryosphere, viruses are much less well known, but some data are starting to emerge, such as studies of viral ecology and evolution in Arctic cryoconite holes [ 40 , 41 ] and recent work in Arctic sea ice and ancient cryopegs which revealed that viruses are abundant, predicted to infect dominant microbial community members, and encode auxiliary metabolic genes that enable host adaptations to extreme cold and salt conditions [ 42 ]. Thus, even in these extreme conditions, it appears viruses can play key roles in the ecosystem when they and their hosts are active. Problematically, beyond the expeditionary efforts required to obtain glacier ice cores, community metagenomics approaches are challenged by the low biomass of these samples. First, the low quantity of nucleic acids that can be extracted has left such samples intractable for methods that commonly require micrograms of nucleic acids for metagenomes. Second, because of low biomass, contamination from sampling, storage, and processing is a major issue, as genetic material from contaminant organisms can mix with and overwhelm material from the real glacier-ice community [ 43 , 44 ]. For the former issue of low nucleic acid quantity, significant progress has been made for seawater viral communities, both in ultra-low-input sample preparation [ 45 , 46 , 47 , 48 ] and in data interpretation and standards [ 36 , 49 , 50 ]. For the latter, clean sampling techniques and surface decontamination strategies have been pioneered to remove potential contaminants on ice core surfaces before melting them for microbial analysis [ 51 , 52 , 53 , 54 ]. In addition, background controls have been processed in parallel with authentic ice samples to track and in silico remove suspected contaminants introduced during the processing of ice in the laboratory [ 17 , 22 , 53 ]. We acknowledge that these available methods are not perfect and may still have limitations in decontamination; for example, it is hard, if not impossible, to demonstrate the removal of all “contaminants” by these methods. Nevertheless, they are the best methods available to date for efficiently eliminating suspected microbial contaminants and have been adopted for many microbial investigations of glacier ice (e.g., [ 14 , 17 , 22 , 25 ]). However, the removal efficiency for viral “contaminants” on ice core surfaces has yet to be evaluated.
Here, we sought to apply these available approaches, including the low-biomass metagenomics approaches initially developed for seawater and the decontamination techniques, to glacier ice, and to further establish clean procedures to remove microbial and viral contaminants on ice surfaces through artificial-ice-core “contamination” experiments. Once optimized, we applied these updated procedures to investigate the microbial and viral communities archived in two ice cores drilled on the summit (6710 m asl) and plateau (6200 m asl) of the Guliya ice cap (35.25°N; 81.48°E) in the far northwestern Tibetan Plateau. Results and discussion Establishing clean surface-decontamination procedures with mock contaminants In the field, no special procedures were used to avoid microbial contamination during ice core drilling, handling, and transport. Therefore, ice core surfaces likely contained microbial contaminants that impeded the identification of microbial communities archived in the ice [ 52 , 55 ]. To develop a clean surface-decontamination procedure for removing possible microbial contaminants on the ice core surfaces and for collecting clean ice for microbial investigations, we constructed sterile artificial ice core sections and covered them with a known bacterium ( Cellulophaga baltica strain 18, CBA 18), a known virus ( Pseudoalteromonas phage PSA-HP1), and free DNA (from lambda phage), according to established protocols [ 52 ] (see “Materials and methods” and Fig. 1a ). The decontamination procedure involved three sequential steps to remove a total of ~1.5 cm of the core radius, and the decontamination efficiency was evaluated (see “Materials and methods” and Fig. 1a ). Fig. 1 Establishment of decontamination protocol. a Schematic of layered removal of the outer core surface to obtain clean inner ice (top panel) and experimental approach to establish decontamination procedures using sterile artificial ice core sections coated with mock “contaminants” (bottom panel). Cut, wash, and inner represent ice samples collected from band saw scraping, water washing, and the inner ice, respectively. Mix represents a sample from the melted ice of a control ice core section prepared without decontamination processing. The mock contaminants were detected by qPCR and nested PCR (see “Materials and methods”) in ( b ) and ( c ). b Total bacterial (dark teal color) and viral (purple color) numbers were quantified by qPCR using strain-specific primers in all samples collected in ( a ). c Lambda DNA was detected using nested PCR with outer and inner primer sets designed for lambda DNA. PCR products from the inner primer sets were visualized by agarose gel electrophoresis; 1, 100-bp DNA ladder; 2–7 represent 1.9×10⁴, 10³, 10², 10¹, 10⁰, and 10⁻¹ (10-fold serial dilutions of the standard) copies of lambda DNA, respectively, used as templates for nested PCR; 8, Control_Negative (no template); 9, Sample Cut1; 10, Wash1; 11, Inner1; 12, Cut2; 13, Wash2; 14, Inner2; 15, 100-bp DNA ladder (same as 1); 16, Control_Mix; 17, Control_Negative (same as 8). The bacterial and viral contamination in each sample was quantified using strain-specific primers and qPCR (see “Materials and methods”). The contaminant bacteria and viruses were reduced by several orders of magnitude to background levels (Fig. 1b ) after being processed with the surface-decontamination procedures described above (Fig. 1a and Additional file 2: Fig. S1).
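Quantification by qPCR of this kind rests on a standard curve relating quantification cycle (Cq) to log10 copy number. The sketch below shows the underlying arithmetic in Python; the slope, intercept, and Cq values are invented for illustration and are not measurements from this study.

```python
# Hedged sketch of absolute quantification from a qPCR standard curve, as used
# to track the mock bacterial and viral contaminants (numbers are invented).
import numpy as np

log10_copies = np.array([5, 4, 3, 2, 1])        # known standard dilutions
cq = np.array([18.1, 21.4, 24.8, 28.2, 31.5])   # measured Cq values (toy)

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1             # ~1.0 means 100% efficiency

def copies_from_cq(sample_cq: float) -> float:
    """Invert the standard curve: Cq = slope*log10(copies) + intercept."""
    return 10 ** ((sample_cq - intercept) / slope)

print(f"E = {efficiency:.2f}; Cq 26.0 -> {copies_from_cq(26.0):.0f} copies")
```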
Even with an extremely sensitive method (nested PCR), contaminant lambda phage DNA was not detected in the resulting inner ice (Fig. 1c ). These results indicate that the decontamination procedure removed contaminants such as bacteria, viruses, and free DNA from the surface ice and left clean inner ice that was free of detectable contaminants for microbial and viral analysis. Earlier studies [ 51 , 52 , 53 , 54 ] laid the foundation for clean-ice methods to remove microbial contaminants; here, we constructed different decontamination systems (e.g., different washing facilities with three sequential steps; Additional file 2: Fig. S1) and expanded the clean procedures to also remove viral contaminants from glacier ice core surfaces. Decontamination method provides clean ice from glacier core sections After we established that the surface-decontamination procedure removed surface contaminants, we used authentic ice core sections to further evaluate the procedure. Two sections (samples D13.3 and D13.5, from 13.34 to 13.50 and 13.50 to 13.67 m depth, respectively) obtained from a plateau shallow ice core (PS ice core) drilled in 1992 from the plateau of the Guliya ice cap (Fig. 2a, b, c ) were decontaminated using the procedures described above (Fig. 1a ). The ice removed during the saw cutting and water washing steps (cut: saw-scraped ice; wash: H₂O-washed ice), along with the inner ice (inner) for each section, was collected as described above (Fig. 1a ). Microbial profiles of six samples (cut, wash, and inner samples from each of the two ice sections) were examined using Illumina MiSeq 16S rRNA gene amplicon sequencing. Fig. 2 Sampling sites of glacier ice and an overview of experimental design. a Location of the Guliya ice cap; b drilling sites of the S3 and PS ice cores in the Guliya ice cap; c sampling depths of the eight ice samples used to investigate the microbial and viral communities; and d an overview of the experimental design for microbial and viral investigations of the collected ice samples. The S3 and PS cores were drilled from the summit and plateau of the Guliya ice cap, respectively ( b ). The drill date and length of the two ice cores and the approximate age of each sample are indicated ( c ). Sample names are coded by depth; e.g., D13.3 is from 13.3 m below the glacier surface. All samples were subjected to microbial investigations, and two samples, D25 and D49 (light blue), were selected for viral investigation. The 30 most abundant bacterial genera, each accounting for ≥0.5% of the sequences in at least one sample, comprised 94.7% of the total 72,000 sequences in the six samples (12,000 sequences per sample). These groups were designated “major genera” and were used to compare the microbial communities of all cut, wash, and inner samples for both ice sections (Additional file 2: Fig. S2A). Within each ice section, the most abundant genera were shared across the cut, wash, and inner samples (Additional file 2: Fig. S2A).
For example, the 11 most abundant genera (i.e., an unclassified genus within Microbacteriaceae , an unclassified genus within Comamonadaceae , Flavobacterium , Hymenobacter , an unclassified genus within Sphingobacteriaceae , an unclassified genus within Sporichthyaceae , Polaromonas , an unclassified genus within Actinomycetales , Nocardioides , Janthinobacterium , and an unclassified genus within Rhizobiales ; ordered by relative abundance) were represented in all three (i.e., inner, wash, and cut) D13.3 samples; these genera comprised 93.4%, 92.8%, and 89.1% of the microbial communities in the inner, wash, and cut samples, respectively (Additional file 2: Fig. S2A). In addition, results from a two-tailed paired t-test showed that the microbial communities did not change significantly across the inner, wash, and cut samples of the same ice section (p values were 0.70–0.96 for all pairs of samples, i.e., cut versus wash, cut versus inner, and wash versus inner of each section). To further evaluate these results, we next compared the microbial communities at the species level using the most abundant OTUs (n = 33), each of which accounted for ≥1.0% of the sequences in at least one sample. The summed relative abundance of these OTUs ranged from 71.6 to 78.6% in these samples (Additional file 1: Table S1). Similar to the comparisons at the genus level, the inner, wash, and cut samples of the same ice section shared most of the top abundant OTUs (Additional file 2: Fig. S2b). Specifically, 29 of 31 and 29 of 32 OTUs were shared between the inner ice and the two removed ice samples (i.e., cut and wash) for D13.3 and D13.5, respectively. These comparisons at both the genus and species levels suggest that the contaminants on the ice core surface were not abundant and diverse enough to alter the overall microbial community composition of glacier ice, based on the most abundant microbial groups in these ice core sections. Notably, the PS ice core was drilled in 1992 using an electromechanical drill with no drilling fluid [ 56 ]; in general, the surfaces of such cores are less contaminated than those of ice cores extracted using a fluid in the borehole [ 55 ]. Several OTUs were unique to the removed samples, including one OTU belonging to the genus Acinetobacter for sample D13.3, as well as two OTUs within the genus Hymenobacter and one unclassified bacterial OTU for sample D13.5 (Additional file 2: Table S1). We posit that these OTUs (<1.0%) might be contaminants removed from the ice core surface. We also note that there may be natural variation in microbial communities across the same cross section of an ice core (here represented by the cut, wash, and inner samples from the same depth), as uneven horizontal distribution of dust, nutrients, and microbes in an ice core is not unexpected and may reflect variation in deposition.
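The inner/wash/cut comparison above boils down to two small computations: the overlap of abundant taxa between fractions and a paired test on their relative abundances. A toy sketch follows; the genus names and numbers are placeholders, not the study's values.

```python
# Sketch of the shared-taxa and paired t-test comparison between the inner
# ice and a removed fraction; toy relative abundances, not study data.
from scipy import stats

inner = {'Flavobacterium': 23.6, 'Polaromonas': 4.1, 'Hymenobacter': 3.0,
         'Janthinobacterium': 2.2}
cut   = {'Flavobacterium': 21.9, 'Polaromonas': 3.8, 'Hymenobacter': 2.5,
         'Janthinobacterium': 2.6}

shared = sorted(inner.keys() & cut.keys())
t, p = stats.ttest_rel([inner[g] for g in shared],
                       [cut[g] for g in shared])   # two-tailed paired t-test
print(f"{len(shared)} shared genera; paired t-test p = {p:.2f}")
```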
Microbial profiles potentially differ between the PS and S3 ice cores Once a clean decontamination procedure was established with both artificial ice cores and authentic ice core sections, we investigated the microbial and viral communities of two ice cores from the Guliya ice cap (Fig. 2a, b, c, d ). We first focused on microbial communities from five different depths (i.e., 13.3, 13.5, 24.1, 33.3, and 34.4 m) in the 1992 PS ice core, and compared them with the communities of three samples (i.e., D25, D41, and D49) from the 2015 summit core 3 (S3) (Fig. 2a, b, c, d ). These three S3 samples were processed at the same time, and the 16S rRNA gene data for two of them (i.e., D41 and D49) were published previously to establish an in silico decontamination method [ 17 ]; they are cited in this study for comparison of microbial communities across eight depths of two ice cores from the same glacier. Four background controls were co-processed with the glacier ice samples to trace the background microbial profiles, which were then proportionally removed in silico from the amplicon data of the ice core samples (see “Materials and methods”), according to our previously published method [ 17 ]. After in silico decontamination, we compared the microbial community composition at the genus level between and within ice cores. Reads were rarefied to 24,000 sequences per sample, and collectively, the samples contained 254 bacterial genera, 118 of which were taxonomically identified to the genus level (Additional file 1: Table S2). The 26 most abundant genera, defined as those comprising at least 1.0% of sequences in at least one ice sample, represented >95.1% of each community (Fig. 3a ). Bacterial genera including Janthinobacterium (relative abundance 1.0–23.8%), Polaromonas (2.6–4.1%), Flavobacterium (2.3–23.6%), and unknown genera within the families Comamonadaceae (15.5–24.3%) and Microbacteriaceae (7.1–48.5%) were abundant and present in all five PS samples (Fig. 3a ). This indicates that members of these lineages persisted over long periods in these environments before being permanently frozen, although their relative abundances vary across ice core depths (ages). These genera and families have also been reported as abundant groups in glacier ice cores by many previous studies (e.g., [ 4 , 15 , 17 , 57 , 58 , 59 ]). The detection of bacterial sequences belonging to similar genera in ice core samples from different glaciers around the world can be explained by the ubiquitous distribution of certain species across geographically distant environments [ 60 ]. The S3 and PS ice core samples shared some abundant genera, such as Janthinobacterium , Herminiimonas , and Flavobacterium (Fig. 3a ); however, several genera that were abundant in the S3 samples were nearly absent in the PS samples, including Sphingomonas , Methylobacterium , and an unclassified genus in the family Methylobacteriaceae (Fig. 3a ). Thus, there are potential differences in the microbial communities between the ice cores retrieved from the plateau (shallow part) and the summit of the Guliya ice cap. Fig. 3 Distinct microbial profiles between the PS and S3 ice cores. a Microbial profiles of the 26 most abundant genera in PS and S3 ice core samples. Profiles are illustrated as a percentage of the total 16S rRNA gene amplicon sequences. The key indicates genera, preceded by family, or order in cases where family is not assigned. Genera labeled “Other” represent sequences with unknown genus-level taxonomy, i.e., distinct from taxonomically assigned genera in the reference database. The 26 most abundant genera, defined as those comprising at least 1.0% of the sequences in at least one ice sample, collectively represented >95.1% of each community. The total relative abundance of these genera was normalized to 100%. b PCoA showing sample clustering based on microbial communities at the OTU (~species, 97% identity) level. Samples from the same ice core are marked with the same color. Sample names are indicated next to each symbol. PCoA was performed on the weighted UniFrac metric, which accounts for the relative abundance and inferred relatedness of the lineages present.
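The proportional in silico decontamination step can be pictured as subtracting, per taxon, the relative abundance observed in co-processed background controls and renormalizing. The sketch below is a simplified illustration of that idea with invented numbers; it is not the published implementation from ref. 17.

```python
# Hedged sketch of proportional in silico decontamination: subtract each
# taxon's background-control abundance, floor at zero, and renormalize.
def decontaminate(sample: dict, controls: dict) -> dict:
    cleaned = {taxon: max(ab - controls.get(taxon, 0.0), 0.0)
               for taxon, ab in sample.items()}
    total = sum(cleaned.values())
    return {t: round(100 * ab / total, 1) for t, ab in cleaned.items() if ab > 0}

ice  = {'Methylobacterium': 50.0, 'Sphingomonas': 30.0, 'Cutibacterium': 20.0}
ctrl = {'Cutibacterium': 18.0}   # hypothetical reagent/lab contaminant
print(decontaminate(ice, ctrl))  # contaminant share shrinks, rest rescaled
```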
We next used principal coordinates analysis (PCoA) to compare microbial community compositions at the OTU (~species; 97% identity) level among all eight samples and found that the communities clustered primarily by ice core (Fig. 3b ), separating along the first principal coordinate (which accounted for 68.2% of community variability; the second axis accounted for 13.4%). Analysis of similarity (ANOSIM) statistics confirmed that the microbial communities of samples from the plateau core were significantly different from those of the summit core samples (p = 0.02). Because elevation-related factors such as wind and temperature differ between the two sites, the process from deposition to accumulation could differ between the plateau and summit surfaces, which may further contribute to the variation in their microbial communities. In addition, all PS core samples were from the shallower part of the ice cap (top 34.5 m of the ~310-m-thick ice field) [ 56 ] and were much younger than the three samples from the S3 core (~70–300 years versus ~355–14,400 years old; Additional file 1: Table S3), which were collected near the bottom of the summit ice core (~51-m length; Fig. 2 ). Therefore, the ice samples from the two different ice cores represent very different climate conditions at the time of deposition. This is further illustrated by variations in several environmental parameters (e.g., concentrations of insoluble dust and of ions such as sulfate and sodium) measured in the two ice cores (Additional file 1: Table S3). To further identify the environmental parameters potentially influencing these microbial communities, two-tailed Mantel tests were performed to examine the relationships between environmental properties (Additional file 1: Table S3) and microbial community compositions. Parameters including elevation, ice age, and the concentrations of dust, chloride, sulfate, and sodium significantly (p ≤ 0.05) correlated with microbial community composition (Additional file 1: Table S4). This further supports the discussion above of the potential differences between the microbial communities of the two ice cores, and is consistent with many previous reports that the microbial communities archived in glacier ice often reflect differences in physicochemical parameters such as dust concentration [ 10 , 11 , 12 ] and some ion concentrations [ 13 , 14 ]. The significant correlations between microbial community compositions and the environmental parameters of the ice samples indicate that ice core microbial communities may reflect the climate conditions at the time they were deposited. We note that other factors might also influence the microbial communities, such as the deposition-to-accumulation process discussed above and potential post-deposition microbial activity on glacier surfaces.
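The ordination and permutation statistics above (weighted UniFrac distances, PCoA, ANOSIM, Mantel) can be chained together concisely; the sketch below uses scikit-bio on a toy distance matrix, since the real input would be the weighted UniFrac matrix computed from the rarefied OTU table. All numbers are invented.

```python
# Sketch of the PCoA / ANOSIM / Mantel workflow with scikit-bio on toy data.
import numpy as np
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa
from skbio.stats.distance import anosim, mantel

ids = ['D13.3', 'D13.5', 'D25', 'D49']
community = DistanceMatrix(np.array([[0.0, 0.1, 0.7, 0.8],
                                     [0.1, 0.0, 0.6, 0.7],
                                     [0.7, 0.6, 0.0, 0.2],
                                     [0.8, 0.7, 0.2, 0.0]]), ids)

ordination = pcoa(community)
print(ordination.proportion_explained[:2])        # variance on axes 1-2

print(anosim(community, grouping=['PS', 'PS', 'S3', 'S3'],
             permutations=999)['p-value'])        # do the cores differ?

ages = np.array([70.0, 80.0, 355.0, 14400.0])     # hypothetical ice ages
env = DistanceMatrix(np.abs(ages[:, None] - ages[None, :]), ids)
r, p, n = mantel(community, env)                  # two-tailed Mantel test
print(r, p)
```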
Ice-archived viruses We focused on the viral communities in two ice samples (D25 and D49) from the S3 ice core. The samples were selected based on their differences in ice age (~355 versus ~14,400 years old), climate conditions (colder versus warmer, based on δ¹⁸O data, not shown), and dust concentrations, which are up to 10 times higher in the D49 sample (Additional file 1: Table S3). Viruses were concentrated from the 0.22-μm-pore-sized filtrate, which excluded intracellular viruses including temperate viruses [ 61 ], and then treated with DNase to remove free DNA. Counts of VLPs in the two samples were below the detection limit of a wet-mount method (<10⁶ VLPs ml⁻¹ [ 62 ]). Thus, we applied the low-input quantitative viral metagenomic sequencing previously established for seawater viral communities [ 46 , 47 , 63 , 64 ] to the viral concentrates from our low-biomass glacier ice samples. After sequencing, quality control, and de novo assembly, we obtained 1849 contigs with a length of ≥10 kb (Additional file 1: Table S5). Overall, VirSorter predicted 43 “confident” viral contigs (≥10 kb in size and categories 1, 2, 4, or 5; Additional file 1: Table S5 [ 65 ]), which were grouped into 33 vOTUs (viral OTUs) using currently accepted cutoffs that approximate species-level taxonomy [ 35 , 50 , 66 ]. This is a small number of viral species compared to well-studied and relatively easy-to-process sample types (e.g., global ocean samples [ 35 , 37 , 66 ]), and may not represent the entirety of dsDNA viral diversity in glacier ice environments. However, it is on par with recent reports from other more challenging systems such as soils where, for example, 1.4% of assembled contigs were predicted as “confident” viruses and 53 long (≥10 kb) viral genome fragments were recovered from eight viromes [ 67 ]. On average, 1.4% (2.2 and 0.6% for D25 and D49, respectively) of the quality-controlled reads were recruited to these vOTUs (Additional file 1: Table S5). A low percentage of reads recruiting to predicted viral sequences is not unusual for low-input viromes and is consistent with previous studies of more diverse communities (e.g., as low as 0.98% [ 35 , 67 ]). While previous studies have detected tomato mosaic tobamovirus RNA and estimated VLP concentrations in ancient glacier ice [ 3 , 27 ], this is the first report of viral genome fragments assembled de novo from such an environment. Rarefaction curves were constructed (see “Materials and methods”) and showed that both viromes approached saturation of long vOTUs (≥10 kb) at the sequencing depth used in this study (Additional file 2: Fig. S3), though we note that this analysis may underestimate the total viral diversity in these samples because (i) the rarefaction curves miss any virus whose genome was not extracted, sequenced, or assembled from the samples, and (ii) low-input libraries must be PCR-amplified prior to sequencing (15 PCR cycles in this study), which can underestimate the total diversity within a library due to PCR duplicates and skew the shape of rarefaction curves [ 68 ].
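A rarefaction curve of this kind is built by repeatedly subsampling reads at increasing depths and counting the distinct vOTUs recovered; saturation appears as a plateau. The toy sketch below illustrates the procedure; the read-to-vOTU assignments are randomly generated, not the study's data.

```python
# Sketch of vOTU rarefaction: subsample mapped reads at increasing depths and
# count distinct vOTUs detected (toy assignments, not the study's reads).
import random

def rarefaction_curve(read_assignments, depths, reps=10):
    means = []
    for depth in depths:
        hits = [len(set(random.sample(read_assignments, depth)))
                for _ in range(reps)]
        means.append(sum(hits) / reps)   # mean distinct vOTUs at this depth
    return means

reads = [f"vOTU_{random.randint(1, 33)}" for _ in range(100_000)]
print(rarefaction_curve(reads, depths=[100, 1_000, 10_000, 100_000]))
# A flattening curve (here, toward 33) suggests the sequencing depth captured
# most of the long-vOTU richness that could be assembled.
```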
Ice viral communities consist of mostly novel genera and differ between depths With the 33 vOTUs (length ≥10 kb) obtained from the two S3 ice samples, we then evaluated how viruses in this unexplored extreme environment compare to known viruses. Because viruses lack a single, universally shared gene, taxonomies of new viruses are now commonly established using gene-sharing analyses of viral sequences [ 69 ]. In our dataset, that meant comparing the shared gene sets of the 33 vOTUs with the genomes of 2304 known viruses in the NCBI RefSeq database (version 85; Additional file 1: Table S6) using vConTACT version 2 [ 69 ]. Such gene-sharing analyses produce viral clusters (VCs), which represent approximately genus-level taxonomic assignments [ 37 , 69 , 70 ]. Of the 33 vOTUs, four clustered into four separate VCs containing RefSeq viral genomes, two formed a VC containing only ice vOTUs, and the other 27 vOTUs remained singletons or outliers (Fig. 4a ; Additional file 1: Table S6). Therefore, only four vOTUs (12%) could be assigned a formal taxonomy: they belonged to four different genera in the families Siphoviridae (three genera) and Myoviridae (one genus) within the order Caudovirales (Additional file 1: Table S6). These taxonomic results indicate that glacier ice harbors a high proportion of unique viral genera, a trend also observed in, but exceeding, environmental studies of oceans (52% unique genera) [ 37 ] and soils (61% unique genera) [ 71 ]. Fig. 4 Taxonomies ( a ), communities ( b ), and host linkages ( c – f ) of the 33 vOTUs recovered from two glacier ice samples. a Viral taxonomy was assigned by genome-content-based network analysis comparing the 33 glacier vOTUs and 2304 known viral genomes in the NCBI RefSeq database using vConTACT v2 (see “Materials and methods”). vOTUs were classified into three groups: “Singletons” (gray), which had no close relatives; “Exclusive VCs” (black), which were viral clusters (VCs) of exclusively glacier ice vOTUs; and “Classified VCs” (blue), which included glacier ice vOTUs and RefSeq viral genomes. b The normalized coverage of the 33 vOTUs was generated by mapping quality-controlled reads to vOTUs and normalizing per gigabase of metagenome. c – f Relative abundances of three abundant (>1.0%) microbial genera and their viruses: c Methylobacterium in D25, d Methylobacterium in D49, e Janthinobacterium in D25, and f Sphingomonas in D49. Relative abundances of microbes are based on 16S rRNA gene amplicon sequencing, and those of vOTUs are based on the coverages generated by mapping quality-controlled reads to vOTUs. Viruses were linked to hosts in silico by three methods: Blastn, VirHostMatcher, and CRISPR matches (see “Materials and methods”). We then explored the environmental distribution of these 33 glacier viruses by recruiting metagenomic reads from a range of environments, including the global ocean [ 66 ], Arctic sea ice and ancient permafrost brine (cryopeg) [ 42 ], soils [ 72 , 73 ], lakes [ 74 , 75 ], deserts [ 76 , 77 , 78 , 79 ], air [ 80 , 81 ], cryoconite [ 40 ], and the Greenland ice sheet [ 40 ] (225 metagenomes in total). None of our 33 glacier vOTUs was detected in any of the tested metagenomes, indicating that glacier ice archived viral communities that are unique relative to other environments, at least based on the viral populations recovered here. This may be because the glacier viruses were “frozen” several thousand years ago and are thus distinct from viruses in modern environments, which have continued to evolve, or because the preserved glacier viruses were not transported from the regions where the tested metagenomes were sampled. Unfortunately, the lack of viromes from ancient glacier ice limits worldwide glacier habitat analyses. However, it is promising that the “black box” of archived ancient viruses in glacier ice can now be gradually opened as the technologies to generate and study clean and low-biomass viromes, including a modern viromic toolkit [ 36 ], become available [ 46 , 47 , 63 , 64 ].
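For read-recruitment surveys like the one above, a vOTU is usually scored as "detected" in a metagenome only when reads cover a substantial fraction of its genome, which guards against spurious hits to a single conserved gene. The sketch below illustrates such a rule; the 70% breadth cutoff is a common community convention used here for illustration, not necessarily this study's exact threshold.

```python
# Sketch of a breadth-of-coverage detection rule for read recruitment; the
# per-base depths and the 70% cutoff are illustrative assumptions.
def is_detected(per_base_depth, min_breadth=0.70):
    covered = sum(1 for d in per_base_depth if d > 0)
    return covered / len(per_base_depth) >= min_breadth

depths = [0, 0, 3, 5, 2, 0, 1, 4, 2, 6]   # toy per-position read depths
print(is_detected(depths))                 # 80% of positions covered -> True
```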
Next, we looked more closely at the vOTU (~species) level to compare the viral communities archived at the two depths of the S3 ice core. With standard read-mapping to the 33 vOTUs (see “Materials and methods”), we found that the glacier ice from the two depths contained a mix of shared and depth-unique vOTUs (Fig. 4b ; Additional file 1: Table S7). A mix of shared and depth-unique microbes was also observed for these samples (Fig. 3a ; Additional file 1: Table S2). Previous studies have also reported different microbial community structures in ice samples collected from different depths of the same ice core, which probably reflects differences in the environmental conditions at the time the ice was deposited [ 11 , 82 ]. Interestingly, three vOTUs were abundant (relative abundance >10%) among the recovered vOTUs at both depths: D49_170_39214, D49_576_17121, and D25_155_24088 (vOTU names; Fig. 4b ; Additional file 1: Table S7). This suggests that these viruses may have been active in these ice cores or that a large number of virus particles was initially deposited, so that a sufficient amount was still intact for DNA extraction and sequencing after being frozen for potentially 15,000 years. Glacier ice viruses are predicted to infect dominant glacier ice microbes Microbial analysis found that both the D25 and D49 samples were dominated by the bacterial genus Methylobacterium , an unclassified genus within the family Methylobacteriaceae , and the genus Sphingomonas , with relative abundances of 18.2–67.5%, 5.0–8.3%, and 1.4–75.3%, respectively (Fig. 3a ). In addition, the genera Janthinobacterium (7.1%) and Herminiimonas (6.6%) were also abundant in D25, but were absent or rare (<0.01%) in D49 (Fig. 3a ). All of these genera are commonly abundant microbial groups in glaciers [ 4 , 15 , 17 , 57 , 58 , 59 ]. In addition, many members of these genera are psychrophilic bacteria and have been revived and isolated from glacier ice, such as Sphingomonas glacialis C16y, Sphingomonas sp. V1, Methylobacterium sp. V23, Janthinobacterium svalbardensis JA-1, and Herminiimonas glaciei UMB49 [ 18 , 57 , 83 , 84 , 85 ]. These results indicate that the ice serves as an archive for abundant taxa that are likely equipped with genomic adaptations to cold conditions and might revive and be introduced into ecosystems as the glaciers melt in the future. We then explored the potential impacts of viruses on these abundant microbes by linking viruses to their hosts in silico. Hosts for the 33 vOTUs were predicted using three in silico methods: similarities in viral and bacterial nucleotide sequences [ 37 , 86 ], sequence composition [ 87 ], or CRISPR spacer matches [ 37 ]. The sequence similarity method (Blastn) predicted hosts for 14 of the 33 vOTUs (Additional file 1: Table S8), whereas the sequence composition method (VirHostMatcher) linked nine vOTUs to microbial hosts (Additional file 1: Table S9; see “Materials and methods”). The CRISPR method matched hosts for two vOTUs (Additional file 1: Table S10), one of which was also linked to the same host at the genus level by the sequence similarity method, but neither was matched by the sequence composition method (Additional file 1: Tables S7, S8 & S9). Although only about half of the vOTUs (18 of 33) were linked to a host by at least one of the three methods, these host predictions indicate that viruses in glacier ice were infectious to microbes at some time (whether before and/or after ice formation) in these extremely cold, high-elevation environments, and that they probably played an important role in modulating microbial communities.
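Of the three host-prediction strategies, the CRISPR approach is the most intuitive: a spacer stored in a microbial CRISPR array that matches a viral contig (near-)exactly records a past infection. A toy sketch follows; the host names and sequences are invented, and real pipelines typically tolerate a small number of mismatches.

```python
# Toy sketch of CRISPR-spacer-based host linkage: an exact spacer match in a
# viral contig links that virus to the spacer's host (sequences invented).
spacers = {
    'Janthinobacterium_sp': 'ATCGGATTACGCATTGCAAGGCTAACGT',
    'Sphingomonas_sp':      'GGCATTACCGTAGGCTTAACGGATCCAA',
}
viral_contig = 'TTGAC' + 'ATCGGATTACGCATTGCAAGGCTAACGT' + 'CCGTAGGAT'

for host, spacer in spacers.items():
    if spacer in viral_contig:
        print(f'vOTU linked to {host} via CRISPR spacer match')
```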
The predicted host genera that were most abundant in the same ice cores included Methylobacterium , Sphingomonas , and Janthinobacterium (Fig. 3a ; Additional file 1: Table S2). Many members of these genera are psychrophilic bacteria, as mentioned above. The relative abundance of Methylobacterium -associated vOTUs was high in both D25 (67.5%) and D49 (18.2%), consistent with the dominance (48.2% and 44.0%, respectively) of this bacterial genus in the microbial communities of both samples (Fig. 4c, d ). Similarly, Janthinobacterium -linked viruses were detected at a high relative abundance of 7.1% in the D25 sample, where the microbial community was dominated by the genus Janthinobacterium at 4.5% relative abundance (Fig. 4e ); Sphingomonas -associated viruses represented 3.1% of the community in the D49 sample, while members of Sphingomonas accounted for 75.3% of the microbial profile in this sample (Fig. 4f ). The relatively high abundance of these genera and their associated viruses suggests that the recovered viruses infected abundant microbial groups and thus might play a major role in this extreme ecosystem by influencing their hosts when they are active, although it is still uncertain when the infections occurred. Notably, no host could be predicted for about half of the vOTUs, partly due to the limitations of available reference databases and of the techniques used for host prediction [ 86 ]. As methods improve and host databases expand (e.g., the Genome Taxonomy Database [ 88 ] and metagenome-assembled genomes from glacier ice), continued studies will likely provide a more complete understanding of the relationship between viruses and their microbial hosts in ice cores. Temperate viruses likely dominate the glacier ice environment Having investigated virus-host pairs, we then explored the lifestyle (i.e., temperate or virulent) of the 33 vOTUs recovered here. Interestingly, 14 (42.4%) vOTUs were identified as putative temperate viruses (see “Materials and methods”; Additional file 1: Table S11). Though from a small dataset, the percentage of identifiably temperate phages in glacier ice was 3.2-, 8.4-, and 14.1-fold higher than that in gut (13% [ 89 ]), soil (5% [ 67 , 71 ]), and marine (3% [ 66 ]) viruses, respectively, detected by the same method. Several features of glacier ice habitats may explain such a high percentage of temperate phages. Glacier ice is an extreme habitat for microbes and viruses, with low temperature, high UV, and low nutrient concentrations, in which microbes usually grow poorly and microbial density is very low (i.e., 10²–10⁴ cells ml⁻¹ [ 4 ]) compared to most other environments (e.g., seawater contains 10⁴–10⁶ cells ml⁻¹ [ 7 ]). Previous reports highlighted how the frequency of temperate viruses is influenced by environmental conditions (reviewed in [ 39 , 90 ]) and that temperate viruses tend to be more abundant than virulent viruses in extreme environments with low temperatures [ 91 , 92 ], high latitudes [ 93 ], low nutrients [ 94 ], and low host concentrations [ 95 ]. We hypothesize that, as in other extreme and low-nutrient environments, temperate phages are selected for and favored before being frozen in glacier ice.
Mechanistically, this selection process likely happened on the glacier ice surface: microbes in the surface snow of the glacier are exposed to nutrients, light, and possibly melt water when temperatures are high in summer, and they may still be active and undergo selection on glacier surfaces (reviewed in [ 9 ]). This process may lead to substantial fluctuations in microbial population sizes and to bottleneck events, which have been shown to favor temperate viruses [ 90 , 96 ]. Overall, our data suggest that temperate phages likely dominate the glacier ice environment and highlight the importance of specifically targeting these viruses (e.g., intracellular viruses) in future studies of viruses archived in glacier ice. Insights into the gene content and genome organization of viruses infecting Methylobacterium Microbial analyses and viral host predictions found that both microbial members of the genus Methylobacterium and their associated viruses were abundant in the two studied glacier ice samples. Members of the genus Methylobacterium have been reported to dominate the microbial community in ancient ice cores in many previous studies (e.g., [ 4 , 12 , 16 , 25 ]), including several microbial investigations of Guliya ice cap cores using culture-dependent methods about two decades ago [ 18 , 23 , 57 ], and they are widely distributed in natural environments. For example, the genus Methylobacterium contained 47 validly published isolates at the time of writing, from environments including air, aquatic sediments, fermented products, freshwater, plants, and soil (summarized in [ 97 ]). This broad distribution indicates their ability to live in a wide range of environments. The viruses infecting Methylobacterium may also have significant ecological roles, so we next evaluated the environmental distribution of viruses infecting Methylobacterium and the genome features of Methylobacterium -linked glacier viruses and their closely related viruses from other environments. Methylobacterium -associated viruses were retrieved from environmental viromes including the global oceans [ 35 ], Arctic sea ice and ancient permafrost brine (cryopeg) [ 42 ], soils [ 72 , 73 ], lakes [ 74 , 75 ], deserts [ 76 , 77 , 78 , 79 ], air [ 80 , 81 ], cryoconite [ 40 ], and the Greenland ice sheet [ 40 ], using the same method as for the glacier-ice viruses. In addition, prophages were extracted from 131 Methylobacterium genomes from the RefSeq database (release v99). Only six Methylobacterium viruses were obtained from the environmental metagenomes, including three from the global oceans [ 35 ], two from lake water [ 75 ], and one from a desert salt pan [ 77 ], while 478 prophages were detected in 127 of the 131 Methylobacterium genomes, which came from diverse environments such as plants, soil, freshwater lakes, drinking water, ocean water, salt lakes, air, and ice (Additional file 1: Table S12). A genome-content-based network was built to evaluate the relationship of the five glacier-ice viruses with the 484 viruses from other environments, all predicted to infect Methylobacterium (Fig. 5a ). In the network, two glacier viruses (D25_155_13915 and D49_576_17121) were separate from all other viruses (i.e., they were singletons), while the other three glacier viruses formed three VCs with eight prophages (i.e., VC0_0, VC8_0, and VC11_0; assessed with confidence scores by vConTACT v2 [ 69 ]).
The vOTU D49_418_13568 was associated with viruses from air and drinking water (VC11_0), vOTU D49_170_39214 (VC8_0) clustered with viruses from plants, and D25_14_65719 (VC0_0) clustered with plant, air, and soil viruses (Fig. 5a and Additional file 1: Table S12). Notably, most of the associated prophages within the three VCs were from plants, soil, or air, which might be the habitats from which the glacier Methylobacterium hosts and viruses originated. Fig. 5 Genome-content-based network ( a ) and genome organization ( b – c ) of viruses infecting Methylobacterium . a A gene-content-based network was built to evaluate the relationship of five glacier-ice viruses to 484 viruses from other environments, all predicted to infect Methylobacterium (see “Materials and methods”). For clarity, viruses that were not connected to any of the five glacier-ice viruses were excluded from the network. Each node represents a virus, with glacier-ice viruses shown as triangles and other viruses as circles. Edges indicate the distance between two viruses. Viral clusters (VCs) were generated by vConTACT v2, and viruses belonging to the same VC are indicated in the same color. In each VC, the name and source environment of each member are indicated, with the glacier-ice virus at the top. All gray nodes represent viruses from other environments that did not share a VC with any glacier-ice virus. b – c Genomic organization and comparison of Methylobacterium viruses longer than 15 kb in VC0_0 and VC8_0 from ( a ). Only glacier viruses and their closely related viruses with genome sizes greater than 15 kb are illustrated, comprising four viruses each from VC0_0 ( b ) and VC8_0 ( c ). Viral contigs were compared in terms of gene similarity, order, and direction (i.e., leftward or rightward arrow). Genes are color-coded based on their putative biological function. Potential microbial genes were identified by CheckV (see “Materials and methods”) and marked in green. Predicted proteins with no functional annotation are classified as “hypothetical protein” and colored gray. The gray lines indicate the amino acid identities between genes, as illustrated in the scale bar. Abbreviations: TransR, transcriptional regulator; MTase, mRNA methyltransferase; tRNASL, tRNA-splicing ligase; terS, terminase small subunit; terL, terminase large subunit; Mu N, Mu N gene product; DNARP, DNA repair protein; DNAM, DNA methylase; DNAP, DNA polymerase; RNAP, RNA polymerase; LytT, lytic transglycosylases; TransmP, transmembrane protein; DigC, diguanylate cyclase. We next evaluated the genome content and organization of the above-clustered Methylobacterium viruses using the two glacier-ice viruses and six prophages that were longer than 15 kb (Fig. 5b, c ). The glacier viruses shared a similar genomic content and arrangement with the prophages in the same VC, especially for the phage structural genes, including the portal, capsid, tail, and baseplate genes (Fig. 5b, c ). Notably, all these viruses contained two copies of the Mu N gene located near the tail and baseplate genes (Fig. 5b, c and Additional file 1: Table S13). The N gene product (i.e., DNA circularization protein) has been reported to be a multifunctional protein that is injected into the host cell along with the infecting phage DNA and is involved in tail assembly as well as the protection and circularization of the infecting DNA [ 98 , 99 , 100 ].
Phylogenetic analysis of the 16 N genes (two copies in each of the eight viruses) showed that they formed two clusters, each containing one of the two N gene copies from all eight Methylobacterium viruses (Additional file 2: Fig. S4). These results indicate that the two copies of the N gene likely evolved independently within the same virus, although this remains unclear given the limited information presented in this study. In agreement with the genome-based network analysis, viruses from the same VC clustered together based on either copy of the N gene (Fig. 5a ; Additional file 2: Fig. S4), indicating strong conservation of N genes in the Methylobacterium viruses. Taken together, the viruses infecting Methylobacterium appear to be abundant in the glacier ice and are related to viruses infecting Methylobacterium strains in plant and soil habitats. This is consistent with a previous report that the main source of dust deposited on the Guliya ice cap likely originates from soils [ 101 ]. It points to a potentially long-standing association between phages and their hosts in the genus Methylobacterium , possibly over more than tens of thousands of years, and highlights how some bacteria and phages can seemingly coexist stably in the environment, as argued in other studies (e.g., [ 102 , 103 ]). Glacier ice viruses unravel novel auxiliary metabolic genes (AMGs) potentially influencing host chemotaxis Virus-encoded auxiliary metabolic genes (AMGs) are microbially derived genes that can modulate host metabolism during infection and have been reported in viruses from diverse ecosystems such as marine waters [ 37 ], soil [ 67 , 71 ], animal hosts (e.g., the rumen [ 104 ]), and some extreme environments (e.g., Arctic cryopeg brine and sea ice [ 42 ]). Here, we begin to explore the AMGs of viruses archived in glacier ice. Briefly, the 1466 predicted genes from the 33 vOTUs (length ≥10 kb) were queried against functional databases using DRAM-v (see “Materials and methods”), and about half of the genes (n = 779) matched annotated sequences in the KEGG or PFAM databases (Additional file 1: Table S13). These annotations potentially make the datasets a valuable public resource of ancient viral genes. Four putative AMGs were identified among these annotated genes (Additional file 1: Table S14). Two of them have been reported previously: a concanavalin A-like lectin/glucanase superfamily gene and a sulfotransferase [ 37 , 71 ]. The former is associated with a virus-encoded glycoside hydrolase potentially involved in pectin cleavage, which may facilitate microbial carbon degradation and utilization by cleaving polymers into monomers, thereby influencing carbon cycling [ 71 ]. The latter is associated with sulfation, which contributes to the transfer of a sulfate group from a donor (e.g., 3′-phosphoadenosine 5′-phosphosulfate) to any of a number of acceptor substrates and can potentially play a key role in biological processes such as cell communication and growth [ 105 ].
The other two AMGs, motA and motB , which are potentially relevant to flagellar assembly (Additional file 1: Table S14), had never previously been reported as AMGs in viral contigs, although our screening of 848,507 viral contigs in the Global Ocean Viromes 2.0 dataset (GOV 2.0 [ 66 ]) identified motA or motB genes in 70 high-quality viral contigs from 52 viromes, including 23, 15, and 14 viromes from surface waters, mesopelagic water layers, and deep chlorophyll maximum layers, respectively (Additional file 1: Table S15), indicating their broad distribution in the ocean environment. These AMGs can potentially offer new insights into how viruses manipulated microbial metabolism when they might have been active ~14,400 years ago, before being frozen. Here, we focus further on the two novel AMGs and discuss how they potentially influenced the metabolism of microbial hosts in glacier ice. These two novel genes were motility genes ( motA and motB ) from the same vOTU, D25_22_20338 (Additional file 1: Table S14; Additional file 2: Fig. S5a). Fueled by ion flow, bacterial flagella are turned by rotary motors consisting of a stator and a rotor [ 106 ]. Analyses of AMGs in glacier-ice viruses revealed that the vOTU D25_22_20338 encoded two membrane-embedded proteins, MotA and MotB (Additional file 2: Fig. S5a), which compose the stator of a flagellar motor. In bacteria, MotA/MotB protein complexes deliver protons across the membrane, harnessing the proton motive force as the energy source to rotate the rotor [ 107 ]. Chemotaxis plays a central role in controlling the rotational direction of flagellar motors, which allows bacteria to respond to environmental stimuli [ 108 ]. Considering the harsh, nutrient-deficient environment of glacier ice [ 109 ], we speculate that viruses potentially hijacked these motility genes (i.e., motA and motB ) to facilitate nutrient acquisition by their hosts. We then explored the functionality and evolution of the two novel AMGs (i.e., motA and motB ). The protein sequences of the two novel AMGs were structurally modeled using Phyre2 [ 110 ], and both had 100% confidence scores linking them to their closest template proteins (Additional file 2: Fig. S5b, c). MotB uses a conserved peptidoglycan-binding motif to anchor the stator complex to the peptidoglycan layer around the rotor [ 111 ], and this motif was identified in the virus-encoded MotB (Additional file 2: Fig. S5e). Though MotA lacks a conserved motif (Additional file 2: Fig. S5d), it functions in a complex with MotB and is co-transcribed and translated with it [ 112 ]. Together, these in silico analyses suggest that these AMGs are likely functional. Evolutionarily, both AMGs were deeply divergent from the clades containing their closest microbial homologs (Additional file 2: Fig. S6a, b). These phylogenetic results prevented us from further identifying potential horizontal gene transfer events of these AMGs from hosts to viruses, but they suggest that the genes found in the ancient glacier-ice viruses recovered in this study are very distinct from known microbial sequences in modern environments. In summary, these findings about AMGs can potentially provide a glimpse into how glacier-ice viruses in the Guliya ice cap manipulated host metabolism, and hence likely affected biogeochemical cycles, when they were active before being frozen.
We note that all these speculations are based on in silico analyses; future experiments are necessary to validate the activity and function of these potential virus-encoded proteins. Many studies have demonstrated microbial activity on glacier surfaces, especially in cryoconite holes in summer (e.g., [ 113 , 114 ]), including on glaciers of the Tibetan Plateau region [ 9 , 82 ]. However, surface activity may vary among glaciers with different locations, elevations, radiation, and surface temperatures. The Guliya ice cap is located at mid-latitude (35.25°N, 81.48°E) on the Tibetan Plateau, and its summit elevation is about 6710 m above sea level. The surface temperature at the summit is below the freezing point of water (0°C) for most of the year, but in summer it can approach or briefly exceed 0°C under strong sunlight; this likely produces some meltwater on the glacier surface, as supported by the observation of melt layers (i.e., clean and transparent ice) in the ice core (data not shown). Therefore, there was likely microbial activity on the surface of the Guliya ice cap before the microbes were “permanently” frozen. In addition to the glacier surface, some studies have hinted at the possibility of microbial activity within frozen glacier ice based on the detection of excess gases (e.g., CO2, CH4, and N2O) at some depths, which may be produced by post-depositional microbial metabolism [ 24 , 25 , 26 ]. However, without direct observational measurements, it remains controversial whether there is in situ microbial activity in glacier ice after freezing. We anticipate that future studies could better articulate the potential for microbial activity in glacier environments, including on the surface and within englacial ice (i.e., after freezing). Here, we propose next-step experiments to explore the “activity” questions described above. Ideally, field work would sample time-series snow before deposition (i.e., from the air) and after deposition (i.e., from different depths of the glacier surface) and compare the microbial communities of matched snow samples from before and after deposition. This comparison would help clarify whether there is activity, and how communities change, on glacier surfaces. In addition, some specific microbial groups (e.g., Cyanobacteria and Chloroflexia) may be used as indicators of surface growth, as they need light to grow and may “bloom” on the glacier surface [ 82 ]. In the lab, microbial activity in glacier ice could be measured using the BONCAT-FACS method [ 115 ] by comparing the potential change in microbial communities of replicate samples after incubation at various sub-zero temperatures (<0°C) and for various times. Conclusions Glaciers potentially archive environmental conditions and microbes over tens to hundreds of thousands of years. Unfortunately, glaciers around the world, including those of the Tibetan Plateau and Himalaya, are rapidly shrinking, primarily due to the anthropogenic-enhanced warming of Earth’s ocean-atmosphere system [ 116 ]. Such melting will not only lead to the loss of these ancient, archived microbes and viruses but will also release them into the environment in the future.
To begin accessing these archived microbes and viruses, we expanded upon prior in silico [ 17 ] and experimental decontamination methods for removing microbial contaminants from ice core surfaces [ 51 , 52 , 53 , 54 ] and optimized similar preparation methods for viruses. Application of these new ultra-clean methods to ~14,400-year-old glacier ice presents the first glimpse of past microbial and viral communities archived in glacier ice from the Tibetan Plateau. These efforts revealed microbiological findings concordant with other ice cores and provided a first window into viral genomes, communities and their ecology, functions, and origin in ancient glacier ice in this remote part of the world. Future work will benefit from emerging technologies to detect microbial growth (e.g., BONCAT-FACS [ 115 ]), better capture of very small, diverse vOTUs and niche-defining hypervariable regions (VirION [ 117 ]), including ssDNA [ 118 ] and RNA viruses [ 119 , 120 ], and high-throughput cultivation (e.g., the Microfluidic Streak Plates method [ 121 ]). Earth is now squarely in the Anthropocene, and human activities are impacting the planet and its interconnected ecosystems in ways no single species has done before [ 122 ]. Fortunately, the application of advanced research capabilities to the intensive study of ice-core-derived biotic and abiotic information may reveal the primary drivers of both natural (pre-anthropogenic) and anthropogenic variations in microbial evolution. Materials and methods Sterile artificial ice core sections and mock “contaminants” An artificial ice core was constructed from sterile water, which was pre-filtered through a Millipore system (Cat No. MPGP04001, MillipakR Express 40 Filter, Merck KGaA) outfitted with a 0.22-μm mesh final filter and autoclaved at 121°C for 30 min, then frozen at −34°C for 12–24 h in a 2-L sterile plastic cylinder (Nalgene). The cylinder was transferred from −34 to −5°C and kept at that temperature overnight to reduce the possibility of fracturing (caused by sudden temperature changes) before it was placed at room temperature for about 30 min to melt the surface ice and expose the underlying ice core. Cellulophaga baltica strain #18 (CBA 18; NCBI accession No. CP009976) was cultured statically overnight at room temperature in MLB medium (15 g sea salts (Cat No. S9883, Sigma), 0.5 g bacto peptone, 0.5 g yeast extract, 0.5 g casamino acids, 3 ml glycerol, and 1000 ml water). The cell concentration was measured by epifluorescence microscopy after the cells were captured on a 0.22-μm-pore-sized filter (Cat No. GTTP02500, Isopore) and stained with SYBR Green (Cat No. S9430, Sigma) as described previously [ 123 ], with some modifications. Briefly, cells on the filter were covered with several drops of 20× SYBR Green (Cat No. S11494, Life Technologies). After 15 min of staining in the dark, the SYBR Green was carefully removed with a 50-μl pipette and by touching the backside of the membrane with a Kimwipe (Kimtech). The filter was mounted on a glass slide with freshly made anti-fade solution (1 mg ascorbic acid: 100 μl PBS: 100 μl glycerol) and a 25-mm² cover slip. Cells on the filter were counted by epifluorescence microscopy (Zeiss Axio Imager.D2), with >350 cells or >20 fields counted, a reliable threshold for estimating total bacterial abundance [ 124 ]. Pseudoalteromonas phage strain PSA-HP1 (NCBI: txid134839) was harvested from 95% lysed plaque assays (agar overlay technique).
The concentration of PSA-HP1 was determined by a wet-mount method using SYBR Gold (Cat No. S11494, Life Technologies) staining and glass beads, as described previously [ 62 ]. Lambda phage DNA (100 μg/ml; 1.88×10^9 copies/μl; genome size 48.5 kb) was purchased from Life Technologies (Cat. No. P7589). The above components (i.e., CBA 18, PSA-HP1, and lambda phage DNA) were combined in 1 ml ddH2O, containing 1.00×10^6 cells, 4.48×10^7 viruses, and 1.88×10^8 copies of lambda DNA, to make the mock contaminants. The concentration of contaminant cells approximates the cell numbers reported in glacier ice (~10^2–10^4 cells/ml [ 4 ]) and on core exteriors (~10^2–10^5 cells/ml [ 52 ]). The 1-ml mixture was spread evenly on the artificial ice core surface with sterile gloved hands. The ice core was cut into three equal-sized sections with a sterilized band saw, which had been wiped with 75% ethanol and exposed to UV light for >12 h. Surface decontamination procedures The decontamination procedure consisted of three steps (Fig. 1a ), following a previously published method [ 52 ] with slight modifications. First, the exterior (~0.5 cm of the core radius) of the ice core was scraped away using a sterile band saw; second, the ice core was rinsed with 95% ethanol (v/v; Cat No. 04355223, Decon Labs) to remove another ~0.5 cm of the surface; third, a final ~0.5 cm of the surface was washed away with sterile water (Fig. 1a ; Additional file 2 : Fig. S1). After about 1.5 cm of the core surface had been removed, the remaining inner ice constituted the “clean” sample and was collected for further analyses. Two artificial ice core sections (sections 1 and 2) were processed using the decontamination procedure described above (Fig. 1a ). The ice removed by the saw scraping (first step), the ice removed by the water washing (third step), and the inner ice were collected as three different samples in sterile beakers. As a positive control, another ice core section was placed in a sterile beaker without decontamination (Fig. 1a ). All sampling steps were conducted in a cold room (−5°C), which was exposed to UV light for more than 12 h before ice core processing to kill microbes and viruses in the air and on the surfaces of the instruments (e.g., band saw, washing systems, and flow hood; Additional file 2 : Fig. S1). In addition, the washings with 95% ethanol and water were performed in a BioGard laminar flow hood (Baker Company, model B6000-1) to avoid environmental contamination (Additional file 2 : Fig. S1). Ice samples were melted at room temperature. One milliliter of each melted sample was preserved at 4°C and used for nested PCR to detect the coated lambda DNA. The remaining volume of each sample was used to concentrate microbes and viruses with 100-kDa Amicon Ultra concentrators (EMD Millipore, Darmstadt, Germany). Each sample was concentrated to 0.8 ml and then used for DNA extraction. Guliya ice core sampling and physicochemical conditions The plateau shallow core (PS core; 34.5-m depth; 35°14′ N, 81°28′ E; 6200 m asl) and the summit core 3 (S3; 51.86-m depth to bedrock; 35°17′ N, 81°29′ E; ~6710 m asl) were drilled on the Guliya ice cap in 1992 and 2015, respectively (Fig. 2a, b, c ). Both cores were 10 cm in diameter, and the bedrock temperature at the S3 site was about −15°C [ 125 ].
Ice core sections (~1 m each) were sealed in plastic tubing, placed in cardboard tubes covered with aluminum, and transferred at −20°C by truck from the drill sites to freezers in Lhasa, by airplane to freezers in Beijing, by airplane to Chicago, and then by freezer truck to the Byrd Polar and Climate Research Center at The Ohio State University, where they have been stored at −34°C. Five samples were collected from the PS core at depths of 13.34–13.50 (sample name D13.3), 13.50–13.67 (D13.5), 24.12–24.54 (D24.1), 33.37–33.52 (D33.3), and 34.31–34.45 (D34.3) m (Fig. 2c ; Additional file 1 : Table S3). These ice samples were decontaminated using the surface-decontamination procedure described above, and the inner ice was collected for further analysis. In addition, the ice removed by the saw scraping and water washing was collected for two samples (D13.3 and D13.5), as described for the artificial ice core sections, in order to evaluate the surface decontamination procedures using authentic ice samples. The microbial communities of two of the S3 core samples (D41 and D49) were published previously [ 17 ]. Another sample, D25 (25.23–25.79-m depth; not previously published), was collected at the same time as the two samples mentioned above and was included in this study (Fig. 2 ). Four controls were used to trace possible sources of background contamination during ice sample processing, as described previously [ 17 ]. First, we assessed which microbes inhabited the air of the cold room laboratory in which the sampling took place. Cells from about 28 m³ of air were collected over 4 days of continuous sampling in the room using an air sampler (SKC Inc.) as described previously [ 17 ], during which time the ice samples were being processed. This provided an evaluation of the background contamination due to ice exposure to air during processing (Sample AirColdRoom). Second, an artificial ice core was made from sterile water (as described above), which was frozen at −34°C for 12–24 h. This sterile core was processed in parallel with the authentic ice core samples through the entire analysis. This control allowed evaluation of contamination from the instruments used to process the ice (Sample ArtificialIce). Third, a blank control was established by extracting DNA directly from 300 ml of sterile water. This control allowed evaluation of contamination downstream of the ice processing, including the molecular procedures (DNA extraction, PCR, library preparation, and sequencing; Sample Blank). Finally, 30 μl of filtered and autoclaved water was subjected to standard 16S rRNA gene amplicon sequencing to check for contamination from the sequencing procedures (Sample BlankSequencing). A total of 300 ml of artificial ice, 300 ml of the blank control, and 100–300 ml of each glacier ice sample were filtered through sterilized polycarbonate 0.22-μm-pore-sized filters (Cat No. GTTP02500, Isopore) to collect microbes, including all bacterial/archaeal cells larger than 0.22 μm. The filters were preserved at −20°C until DNA extraction (within 24 h). Viruses in the filtrate of two samples (D25 and D49) were concentrated to 0.8 ml using 100-kDa Amicon Ultra concentrators (EMD Millipore, Darmstadt, Germany) and preserved at 4°C until DNA extraction (within 24 h).
To check for possible cross contamination among samples and for potential viral contaminants introduced during processing, 1 ml of 0.22-μm-pore-size filtrate from the water of the Olentangy River (named RiverV; 39°59′52″ N, 83°1′24″ W, Columbus, Ohio) was co-processed in parallel with samples D25 and D49 throughout the entire analysis. All biological work in this study after the ice sampling in the cold room laboratory was performed in a hood within a small (~2 m² in area) room reserved for microbial experiments with low-biomass samples. The hood was exposed to UV light for more than 1 h before experiments. Concentrations of insoluble dust, major ions, and oxygen isotopes of glacier ice were analyzed as described previously [ 126 ]. The development of the chronologies for the two ice cores from which the samples were collected is discussed in Additional file 1 : Table S3, where the ages of the samples are provided. Genomic DNA extraction The viral concentrates from samples D25, D49, and RiverV were used for genomic DNA isolation as previously described [ 45 ]. Briefly, viral concentrates were treated with DNase (100 U/ml) to eliminate free DNA, followed by the addition of 100 mM EDTA/100 mM EGTA to halt DNase activity; genomic DNA was then extracted using Wizard® PCR Preps DNA Purification Resin and Minicolumns (Cat. No. A7181 and A7211, respectively; Promega, USA) [ 45 ]. Viral abundance was determined prior to DNA extraction by enumerating VLPs relative to beads of known concentration using the wet-mount method [ 62 ]. Genomic DNA from all other samples was isolated with a DNeasy Blood & Tissue Kit (Cat No. 69506, QIAGEN) according to the manufacturer’s instructions, with an additional bead-beating step before cell lysis to disrupt bacterial spores and Gram-positive cells: samples were homogenized at 3400 rpm for 1 min with 100 mg of autoclaved (121°C for 30 min) 0.1-mm-diameter glass beads (Cat No. 13118-400, QIAGEN) in a MiniBeadBeater-16 (Model 607, BioSpec Products). Nested PCR Nested PCR experiments [ 127 ] were performed during evaluation of the surface decontamination procedure, using two pairs of primers designed to detect lambda phage DNA in the artificial ice core section samples. The external primer set LamouterF (5′-CAACTACACGGCTCACCTGT-3′) and LamouterR (5′-ACGGAACGAGATTTCCGCTT-3′) amplifies a 674-bp fragment, and the nested primer set LaminnerF (5′-GAAGCTGCATGTGCTGGAAG-3′) and LaminnerR (5′-CACACTCTGGAGAGCACCAC-3′) amplifies a 189-bp fragment within the first fragment. In the first PCR, with the external primer set, the 25-μl reaction mixture consisted of 12.5 μl of 2× commercial mix (Cat No. M712B, GoTaq® Green Master Mix, Promega), 1.25 μl of each external primer (LamouterF/LamouterR, 10 μM), 5.0 μl of template DNA, and 5 μl of ddH2O. The amplification included a 5-min denaturation step at 95°C, followed by 40 cycles of 30 s at 95°C, 30 s at 56°C, and 50 s at 72°C, with a final extension of 5 min at 72°C. For the nested PCR, the reaction mixture was identical to the first PCR, except that 5.0 μl of the first PCR product and 1.25 μl of each nested primer (LaminnerF/LaminnerR, 10 μM) were used. The amplification conditions were also identical to the first PCR, except that the extension time was 20 s at 72°C over 40 cycles. For each of the artificial ice core section samples (i.e., Cut1, Wash1, Inner1, Cut2, Wash2, Inner2, and Mix; Fig. 1a ), 5 μl of meltwater served as the DNA template in the first PCR.
In addition, nested PCRs were performed using diluted lambda DNA (1.88×10^4, 1.88×10^3, 1.88×10^2, 1.88×10^1, 1.88×10^0, and 1.88×10^−1 copies, respectively) as templates to serve as a reference. A negative control was run using 5 μl of ddH2O as template. Real-time quantitative polymerase chain reaction (qPCR) Each 20-μl qPCR reaction contained 10 μl of 2× QuantiTect SYBR Green PCR Master Mix (Cat No. 204143, QIAGEN), 0.5 μl of each primer (10 μM), 3 μl of template DNA, and 6 μl of RNase-free water. All reactions were performed in triplicate using an Illumina Eco cycler (Cat No. 1010180). Total bacterial and archaeal biomass of the glacier ice samples and the “background” controls was estimated by qPCR after DNA isolation. The primer set 1406f (5′-GYACWCACCGCCCGT-3′) and 1525r (5′-AAGGAGGTGWTCCARCC-3′) was used to amplify bacterial and archaeal 16S rRNA genes [ 128 ]. Thermocycling consisted of an initial polymerase activation and template DNA denaturation step at 95°C for 15 min, followed by 40 cycles of 95°C for 15 s, 55°C for 30 s, and 72°C for 15 s. A standard curve was generated with a PCR product obtained using primers 1406f/1525r from CBA 18 (NCBI accession number of the complete genome, CP009976). Total numbers of CBA 18 in each of the artificial ice samples (i.e., Cut1, Wash1, Inner1, Cut2, Wash2, Inner2, and Mix; Fig. 1a ) were quantified using the primer set Cbal18M666_05390F (5′-ACGTACAAATAAGGAGAATGGCTT-3′) and Cbal18M666_05390R (5′-AGCGCTAATCCCTGTTGAGA-3′), which specifically targets a 61-bp fragment of an ATP synthase subunit C gene of CBA 18, with thermocycling at 95°C for 15 min, followed by 45 cycles of 95°C for 15 s, 60°C for 30 s, and 70°C for 25 s. Similarly, total PSA-HP1 numbers in these samples were quantified using the strain-specific primer set 10-94a_dF (5′-TCTCTCGTCTTAATGACTTTCATCAT-3′) and 10-94a_dR (5′-TTCTTTCTCAACTTCCTGCTCTAA-3′) under identical thermocycling conditions, except that 50 amplification cycles were used. The standard curves for these two qPCRs were generated from PCR products obtained with the respective primer sets and strains. Tag-encoded amplicon sequencing of the microbial community Bar-coded primers 515f/806r [ 129 ] were used to amplify the V4 hypervariable region of the 16S rRNA genes of bacteria and archaea for all glacier ice samples and the “background” controls. The resulting amplicons were sequenced on the Illumina MiSeq platform (paired-end reads) as described previously [ 129 ]. These experiments were performed at Argonne National Laboratory. Amplicon sequence analysis Sequences with an expected error >1.0 or a length <245 nt were excluded from the analyses [ 130 ]. The remaining sequences were truncated to a constant length (245 nt). Various analyses were conducted using the QIIME (Quantitative Insights Into Microbial Ecology, version 1.9.1) software package [ 131 ] with default parameters, except that chimera filtering, operational taxonomic unit (OTU) clustering, and singleton exclusion were performed through the UPARSE pipeline [ 130 ]. A phylogenetic tree was constructed from a set of representative OTU sequences using FastTree [ 132 ]. Chimeras were identified and filtered by UPARSE with the UCHIME algorithm using the ChimeraSlayer reference database [ 133 ], an approach considered to be sensitive and quick [ 134 ]. Reads were clustered into OTUs at 97% sequence similarity by UPARSE.
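As an aside, the qPCR quantification described above amounts to reading copy numbers off a log-linear standard curve built from a dilution series of a known template. A minimal sketch in Python; the function names and the example Ct values are illustrative, not from the study's code:

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept from a standard dilution series."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0  # ~1.0 corresponds to 100% efficiency
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate template copies in an unknown sample."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical example: a tenfold dilution series of the CBA 18 amplicon standard
standards = np.array([7, 6, 5, 4, 3])           # log10 copies per reaction
cts = np.array([12.1, 15.4, 18.8, 22.1, 25.5])  # made-up Ct values for illustration
slope, intercept, eff = fit_standard_curve(standards, cts)
print(copies_from_ct(20.0, slope, intercept))   # copies for an unknown with Ct = 20
```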
A representative sequence from each OTU was selected for taxonomic annotation using the Ribosomal Database Project (RDP) classifier [ 135 ] against the RDP Release 11.5 database. Taxonomic assignments with <80% confidence were marked as unclassified taxa. Mitochondrial and chloroplast sequences were excluded from further analyses. A decontaminated profile of OTU composition for the ice samples was then generated by in silico, proportion-based removal of likely contaminants, following the method established previously [ 17 ]. Briefly, an approximate “absolute” abundance of each OTU was calculated by multiplying its relative abundance by the 16S rRNA gene copy number of the given sample (determined by qPCR); the R-OTU value was then defined as the ratio between the mean “absolute” abundance of the OTU in the “background” controls and that in the ice samples. OTUs with R-OTU values >0.01 were considered contaminants and were removed from the ice samples. Each library was subsampled to the same sequencing depth before subsequent analyses. Microbial profiles of relative abundance were generated at the genus and class levels. Principal Coordinates Analysis (PCoA) using weighted UniFrac metrics was performed to distinguish general distribution patterns of microbial profiles among all samples. Mantel tests were conducted to evaluate the linkage between microbial community structure and environmental parameters. The significance of the difference in microbial community composition between grouped samples (PS versus S3 core samples) was evaluated by analysis of similarity statistics (ANOSIM, number of permutations = 999), performed using functions in the Vegan package version 2.4-4 in R version 3.4.2 [ 136 ]. Metagenomic sequencing of viral metagenomic dsDNA The viral genomic DNA from three samples (D25, D49, and RiverV) was subjected to a low-input library preparation pipeline using the Nextera® XT Library Prep Kit (Cat No. 15032354, Illumina) in the clean room, according to our methods described previously [ 46 , 47 , 63 ]. The metagenomes were sequenced on the Illumina HiSeq 2000 platform (1×100 bp) at the JP Sulzberger Genome Center at Columbia University. Viromic analysis and characterization of viral communities All metagenomic analyses were supported by the Ohio Supercomputer Center. Viromic sequence data were processed using the iVirus pipeline with default parameters, as described previously [ 35 , 137 ]. Briefly, raw reads of the three viromes, comprising two glacier ice samples (D25 and D49) and the river water control (RiverV), were quality-filtered using Trimmomatic v0.36 [ 138 ], assembled using metaSPAdes v3.11.1 (k-mer values of 21, 33, and 55) [ 139 ], and screened for viral contigs using VirSorter v1.0.3 in virome decontamination mode on CyVerse [ 65 ]. The viral contigs (categories 1, 2, 4, and 5) were first checked for contaminants by comparing them (Blastn) to viral genomes considered putative laboratory contaminants (e.g., phages cultivated in our lab, including Synechococcus phages, Cellulophaga phages, and Pseudoalteromonas phages). They were then clustered into vOTUs if the contigs shared ≥95% nucleotide identity across 80% of their lengths, as described previously [ 35 , 49 ]. The longest contig within each vOTU was selected as the seed sequence to represent that vOTU.
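Stepping back to the amplicon analysis for a moment: the R-OTU screening described above reduces to a few array operations once the qPCR copy numbers are in hand. A minimal sketch with pandas; the table layout and names are illustrative assumptions:

```python
import pandas as pd

def r_otu_filter(rel_abund, copies_16s, control_ids, sample_ids, threshold=0.01):
    """rel_abund: DataFrame (rows = samples, columns = OTUs) of relative abundances.
    copies_16s: Series of per-sample 16S rRNA gene copy numbers from qPCR.
    Returns the filtered table and the list of OTUs flagged as contaminants."""
    absolute = rel_abund.mul(copies_16s, axis=0)   # approximate "absolute" abundance
    r_otu = (absolute.loc[control_ids].mean()
             / absolute.loc[sample_ids].mean())    # mean controls / mean ice samples
    # OTUs present only in controls divide by zero -> inf, and are correctly flagged
    contaminants = r_otu.index[r_otu > threshold]
    return rel_abund.drop(columns=contaminants), list(contaminants)
```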
A coverage table for each vOTU was generated with the iVirus BowtieBatch and Read2RefMapper tools by mapping quality-controlled reads to the vOTUs, and the resulting coverage depths were normalized by library size to “coverage per gigabase of virome” [ 137 ]. Rarefaction curves of the two glacier ice viromes were produced by estimating vOTU (length ≥10 kb) numbers as a function of sequencing depth (i.e., read number), obtained by subsampling the quality-controlled reads (Additional file 2 : Fig. S3). A total of 33 and 107 vOTUs (length ≥10 kb) were obtained for the two glacier ice samples (D25 and D49) and the river water control (RiverV) viromes, respectively. Mapping the quality-controlled reads of the three viromes to the 140 vOTUs (33+107) showed that the viral communities in the glacier ice samples were completely different from those in the river water control (Additional file 2 : Fig. S7), suggesting that the procedures for handling glacier ice samples were “clean” and that no cross contamination occurred among these samples. Only the two glacier ice viromes were used for additional analyses. The assembled contigs, excluding the viral contigs predicted by VirSorter, were examined for eukaryotic viruses by comparing their genes to the NCBI NR (non-redundant protein sequence) database. Only two genes from two contigs (one gene per contig) had significant hits to eukaryotic viruses (bit scores of 128 and 164). In addition, two other efforts were made to detect eukaryotic viruses (chloroviruses) in the glacier ice samples: (a) the four known chlorovirus hosts, Chlorella variabilis NC64A, C. variabilis Syngen 2-3, C. heliozoae SAG 3.83, and Micractinium conductrix Pbi, were incubated with about 4 ml of melted inner ice water and assayed for plaques [ 140 ], and (b) a PCR–cloning–sequencing approach was used to detect chloroviruses with two pairs of primers, mcp F/mep R [ 141 ] and CHL Vd F/CHL Vd R [ 142 ]. However, none of these experiments detected any chloroviruses. Thus, this study focused on viruses infecting bacteria (bacteriophages). Taxonomy assignments were performed using vConTACT v2.0 [ 69 ]. Briefly, this analysis compared the vOTUs in this study to 2304 viral genomes in the National Center for Biotechnology Information (NCBI) RefSeq database (release v85) and generated VCs approximately equivalent to known viral genera [ 37 , 69 , 70 ]. Putative virus–host linkages were predicted in silico using three methods based on (i) nucleotide sequence composition, (ii) nucleotide sequence similarity, and (iii) CRISPR spacer matches, as described previously [ 37 , 71 ]. The 33 vOTUs from the glacier ice samples were linked to candidate microbial hosts using the oligonucleotide frequency dissimilarity measure (VirHostMatcher), with ~32,000 bacterial and archaeal genomes as the host database and a dissimilarity score ≤0.1 with a probability ≥80% as the threshold for host assignment [ 87 ]. In addition to this sequence composition analysis, the nucleotide sequence of each vOTU was compared (Blastn) to bacterial and archaeal genomes from the NCBI RefSeq database (release v81) and the database (~32,000 genomes) used above. Host predictions were considered successful if the viral sequences had a bit score ≥50, an E-value ≤10^−3, and an average nucleotide identity ≥70% across ≥2,000 bp of the host genome [ 37 ]. Finally, the nucleotide sequences of the 33 vOTUs were compared to CRISPR spacers of bacterial and archaeal genomes in both databases using the sequence similarity method.
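On the viral side of the pipeline, the vOTU clustering rule applied earlier (≥95% nucleotide identity across 80% of contig length, with the longest contig as seed) is a simple greedy procedure. A minimal sketch, assuming pairwise identities and aligned fractions have already been computed (e.g., from all-versus-all Blastn); names are illustrative:

```python
def cluster_votus(lengths, ani, aligned_frac, min_ani=95.0, min_frac=0.80):
    """lengths: dict contig -> length (bp).
    ani / aligned_frac: dicts keyed by (contig, seed) pairs.
    Returns a dict mapping each contig to its vOTU seed (the longest member)."""
    seeds, membership = [], {}
    for contig in sorted(lengths, key=lengths.get, reverse=True):
        for seed in seeds:
            if (ani.get((contig, seed), 0.0) >= min_ani
                    and aligned_frac.get((contig, seed), 0.0) >= min_frac):
                membership[contig] = seed      # joins an existing vOTU
                break
        else:
            seeds.append(contig)               # founds a new vOTU
            membership[contig] = contig
    return membership
```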
For the CRISPR-based predictions, spacers with >2 direct repeats in the array were identified using MinCED (mining CRISPRs in environmental data sets [ 143 ]) and compared to the nucleotide sequences of the 33 vOTUs. Hosts were selected only if the spacers had zero mismatches to the vOTUs. Putative AMGs were identified and evaluated according to our previously established methods [ 144 ]. Specifically, all 33 vOTUs were processed with DRAM-v [ 145 ] to obtain gene functional annotations and identify AMGs. Genes on these contigs were regarded as AMGs if they had auxiliary scores ≤3 and carried the M flag; AMGs within transposon regions were excluded. To obtain high-quality AMGs, CheckV (with default parameters) [ 146 ] and manual inspection were then used to assess host–virus boundaries, remove potential host fractions from the viral contigs, and rule out AMGs potentially arising from microbial contamination. Phylogenetic analyses of the AMGs were conducted to infer their evolutionary histories. DIAMOND BLASTP [ 147 ] was used to query each AMG amino acid sequence against the RefSeq database (release v99) in sensitive mode with default settings, to obtain reference sequences (the top 10 and 100 hits per viral AMG sequence for conserved-motif identification and phylogenetic analysis, respectively). Multiple sequence alignment was performed using MAFFT (v.7.017) [ 148 ] with the E-INS-I strategy for 1000 iterations. The aligned sequences were then trimmed using TrimAl [ 149 ] with the gappyout flag. The substitution model was selected by ModelFinder [ 150 ] for accurate phylogenetic analysis. Phylogenies were generated using IQ-TREE [ 151 ] with 1000 bootstrap replicates and then visualized in iTOL (v5) [ 152 ]. Protein sequences of AMGs of interest were structurally modeled using Phyre2 [ 110 ] in normal modeling mode to confirm and further resolve functional predictions. The genome map of the virus containing the AMGs of interest was visualized using Easyfig version 2.2.5 [ 153 ]. Phage genes and hallmark genes were identified by VirSorter [ 65 ]. Putative temperate phages were identified by VIBRANT (as lysogenic viruses) [ 154 ] using its default parameters. To explore the geographic distribution of glacier viruses, the genome fragments of the 33 vOTUs were used as baits to recruit reads from 225 previously published viromes from a wide range of environments, including global oceans (145 viromes of GOV 2.0) [ 66 ], Arctic sea ice and ancient permafrost brine (cryopeg) [ 42 ], soils [ 72 , 73 ], lakes [ 74 , 75 ], deserts [ 76 , 77 , 78 , 79 ], air [ 80 , 81 ], cryoconite [ 40 ], and the Greenland ice sheet [ 40 ]. The coverage of all vOTUs in each environmental virome was calculated as described above using the iVirus BowtieBatch and Read2RefMapper tools [ 137 ]. None of the 33 vOTUs were detected in any of these viromes. Characterization of phages infecting members of Methylobacterium The 123 previously published viromes (the same as the 225 viromes described above, except that the global ocean viromes included only 43 Tara Oceans virome samples [ 35 ]) were re-analyzed, by the same method as for glacier-ice viruses, to identify viruses infecting Methylobacterium . In addition, Methylobacterium viruses (prophages) were extracted from 131 bacterial genomes within Methylobacterium species, obtained from the RefSeq database (release v99).
These efforts identified 484 Methylobacterium phages, which were used in genome-based network analyses to evaluate their relationships with the five glacier-ice viruses infecting Methylobacterium , using vConTACT version 2 [ 85 , 86 ]. The genome content and organization of long (>15 kb) Methylobacterium viruses of interest were evaluated and illustrated using Easyfig version 2.2.5 [ 153 ]. The phylogenetic analysis of the DNA circulation protein genes obtained from Methylobacterium viruses was performed as described above for the AMGs. Availability of data and materials The amplicon sequences obtained in this study have been deposited in the NCBI Sequence Read Archive under BioProject accession number PRJNA594142. The viral metagenomes are available through iVirus, including raw and quality-controlled reads and vOTUs. Abbreviations vOTU: Viral operational taxonomic unit VLP: Virus-like particle PCoA: Principal Coordinates Analysis ANOSIM: Analysis of similarity statistics VC: Viral cluster CBA 18: Cellulophaga baltica strain #18 PS core: Plateau shallow core qPCR: Quantitative polymerase chain reaction OTU: Operational taxonomic unit RDP: Ribosomal Database Project | Scientists who study glacier ice have found viruses nearly 15,000 years old in two ice samples taken from the Tibetan Plateau in China. Most of those viruses, which survived because they had remained frozen, are unlike any viruses that have been cataloged to date. The findings, published today in the journal Microbiome, could help scientists understand how viruses have evolved over centuries. For this study, the scientists also created a new, ultra-clean method of analyzing microbes and viruses in ice without contaminating it. "These glaciers were formed gradually, and along with dust and gasses, many, many viruses were also deposited in that ice," said Zhi-Ping Zhong, lead author of the study and a researcher at The Ohio State University Byrd Polar and Climate Research Center who also focuses on microbiology. "The glaciers in western China are not well-studied, and our goal is to use this information to reflect past environments. And viruses are a part of those environments." The researchers analyzed ice cores taken in 2015 from the Guliya ice cap in western China. The cores are collected at high altitudes—the summit of Guliya, where this ice originated, is 22,000 feet above sea level. The ice cores contain layers of ice that accumulate year after year, trapping whatever was in the atmosphere around them at the time each layer froze. Those layers create a timeline of sorts, which scientists have used to understand more about climate change, microbes, viruses and gasses throughout history. Researchers determined that the ice was nearly 15,000 years old using a combination of traditional and novel techniques to date the ice core. When they analyzed the ice, they found genetic codes for 33 viruses. Four of those viruses have already been identified by the scientific community. But at least 28 of them are novel. About half of them seemed to have survived at the time they were frozen not in spite of the ice, but because of it. "These are viruses that would have thrived in extreme environments," said Matthew Sullivan, co-author of the study, professor of microbiology at Ohio State and director of Ohio State's Center of Microbiome Science. "These viruses have signatures of genes that help them infect cells in cold environments—just surreal genetic signatures for how a virus is able to survive in extreme conditions.
These are not easy signatures to pull out, and the method that Zhi-Ping developed to decontaminate the cores and to study microbes and viruses in ice could help us search for these genetic sequences in other extreme icy environments—Mars, for example, the moon, or closer to home in Earth's Atacama Desert." Viruses do not share a common, universal gene, so naming a new virus—and attempting to figure out where it fits into the landscape of known viruses—involves multiple steps. To compare unidentified viruses with known viruses, scientists compare gene sets. Gene sets from known viruses are cataloged in scientific databases. Those database comparisons showed that four of the viruses in the Guliya ice cap cores had previously been identified and were from virus families that typically infect bacteria. The researchers found the viruses in concentrations much lower than have been found to exist in oceans or soil. The researchers' analysis showed that the viruses likely originated with soil or plants, not with animals or humans, based on both the environment and the databases of known viruses. The study of viruses in glaciers is relatively new: Just two previous studies have identified viruses in ancient glacier ice. But it is an area of science that is becoming more important as the climate changes, said Lonnie Thompson, senior author of the study, distinguished university professor of earth sciences at Ohio State and senior research scientist at the Byrd Center. "We know very little about viruses and microbes in these extreme environments, and what is actually there," Thompson said. "The documentation and understanding of that is extremely important: How do bacteria and viruses respond to climate change? What happens when we go from an ice age to a warm period like we're in now?" | 10.1186/s40168-021-01106-w |
Physics | Squeezing in the micro-domain | Williams, I. et al. Direct measurement of osmotic pressure via adaptive confinement of quasi hard disc colloids, Nature Communications, 02 October 2013. www.nature.com/ncomms/2013/131 … 5/pdf/ncomms3555.pdf Journal information: Nature Communications | http://www.nature.com/ncomms/2013/131002/ncomms3555/pdf/ncomms3555.pdf | https://phys.org/news/2013-10-micro-domain.html | Abstract Confining a system in a small volume profoundly alters its behaviour. Hitherto, attention has focused on static confinement, where the confining wall is fixed, such as in porous media. However, adaptive confinement, where the wall responds to the interior, has clear relevance in biological systems. Here we investigate this phenomenon with a colloidal system of quasi hard discs confined by a ring of particles trapped in holographic optical tweezers, which form a flexible elastic wall. This elasticity leads to quasi-isobaric conditions within the confined region. By measuring the displacement of the tweezed particles, we obtain the radial osmotic pressure. We further find a novel bistable state of a hexagonal structure and a concentrically layered fluid mimicking the shape of the confinement. The hexagonal configurations are found at lower pressure than those of the fluid; thus the bistability is driven by the higher entropy of disordered arrangements, unlike in bulk hard systems. Main Enantioenriched... Introduction Phenomena induced by confinement include decoupled dynamics parallel and perpendicular to boundaries, the adoption of structures mimicking the confining geometry and the formation of novel phases. Such behaviour is found in a broad range of systems including simple (refs 1, 2, 3, 4) and molecular (refs 5, 6) liquids, colloidal and nanoparticle suspensions (refs 7, 8, 9, 10, 11, 12) and granular materials (ref. 13). As a result, confinement offers new routes to self-assembly and control of reaction rates and pathways (refs 4, 14, 15, 16). Underlying the impact of confinement upon a system, leading to changes in structure, dynamics and phase behaviour, is its effect upon the free energy (ref. 17), which can be accurately determined for model systems such as we employ here. Combined with direct comparison to bulk behaviour (refs 18, 19, 20), such calculations enable an understanding of the effects of confinement. By emphasizing adaptive confinement, we open the possibility of using colloids as basic models of biological systems such as cell walls. Our system comprises a suspension of polystyrene colloids in a water–ethanol mixture, in which 27 particles of diameter σ = 5 μm are held by holographic optical tweezers (HOT) (ref. 21) in a ring configuration (Fig. 1). These ring particles thus form a flexible ‘membrane’ that can adapt to the interior. The colloids are restricted to quasi-two dimensions (2D) by gravity; thus the ring, or corral, confines a population of up to N = 49 particles. We express the interior population in terms of the effective area fraction φ_eff = πσ_eff²/(4⟨A_Vor⟩), where σ_eff is the Barker–Henderson effective hard sphere diameter (ref. 22) accounting for electrostatic interactions between the colloids and ⟨A_Vor⟩ is the average area accessible to an interior particle. Further details are provided in the Methods. Coordinates of both interior and membrane particles are obtained throughout the experiment. We compare our experimental results with 2D Monte-Carlo simulations of a hard disc system that is similarly confined. In our study of quasi hard discs, adaptive confinement introduces two main effects.
First, by measuring displacements of the membrane particles we directly obtain the osmotic pressure in the interior. Thus, our system marks a departure from the constant-volume ensemble characteristic of soft matter: the fluctuating membrane exerts a pressure on the interior, creating instead a quasi-isobaric ensemble. Second, the combined experimental and simulation approach reveals that adaptive confinement enables hexagonal ordering reminiscent of the bulk (refs 18, 19, 20), leading to a bistability between this structure and a layered fluid characteristic of similarly confined hard wall systems (refs 1, 2, 7). Figure 1: Colloidal corral system overview. (a) Schematic showing side view of experimental system defined by corral radius R. (b–d) Phase diagram as a function of effective area fraction, φ_eff, with images of fluid, layered fluid and hexagonal structures. Scale bars represent 10 μm. Note that the images of the layered fluid (c) and hexagonal structure (d) both have interior population N = 48. Pink circles in (b–d) indicate positions of optical traps. Full size image Results Phase behaviour and structure The phase diagram of the system is shown in Fig. 1b–d. Upon increasing the corral population, a qualitative change in structure is observed. At low density, the interior structure is fluid-like (Fig. 1b), but upon increasing the population, a concentrically layered structure consisting of rings of particles mimicking the symmetry of the confining boundary becomes evident (Fig. 1c). In our experiments, we sometimes find hexagonal ordering for effective area fraction φ_eff ≳ 0.77 (Fig. 1d). The degree of local hexagonal ordering is quantified using the bond-orientational order parameter ψ_6,j = |(1/z_j) Σ_m exp(i6θ_jm)|, where z_j is the coordination number of particle j as defined by a Voronoi construction and θ_jm is the angle made by the bond between particle j and its m-th neighbour with respect to a reference axis. For perfect hexagonal ordering, ψ_6 = 1, whereas totally disordered systems have ψ_6 = 0. We consider ψ_6 > 0.775 to represent a hexagonal structure. As the hexatic transition in bulk hard discs has been identified in the range 0.70 ≤ φ ≤ 0.716 (ref. 20), we assume our system explores the kind of configurations a bulk system at the same area fraction would exhibit. Owing to curvature, hexagonal ordering is suppressed in the layer of particles adjacent to the wall. However, upon sufficient lowering of the spring constant of the optical traps by reducing the laser power, the adaptivity of the corral is enhanced. Under these conditions, complete hexagonal ordering may be possible. In simulation, hexagonal ordering is most strongly suppressed at high spring constant and is enhanced as the spring constant is reduced (Supplementary Fig. S1). Thus, local hexagonal ordering imposed by packing constraints competes with concentric layering imposed by the boundary shape. Such hexagonal structure has not, to the best of our knowledge, been reported for systems of comparable size confined by hard boundaries. We argue that this structure, which more closely resembles the bulk than the confining geometry, is made possible by the adaptive confinement. Wall roughness on the particle lengthscale has been shown to inhibit particle layering (ref. 2), and although the walls of this system are indeed rough, the roughness is naturally commensurate with the particle size and interparticle separation.
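As an aside, the ψ_6 order parameter defined above is straightforward to compute from particle coordinates. A minimal sketch using the Delaunay triangulation (the dual of the Voronoi construction) to define neighbours; function and variable names are illustrative:

```python
import numpy as np
from scipy.spatial import Delaunay

def psi6(points):
    """points: (n, 2) array of particle coordinates.
    Returns per-particle psi_6,j = | (1/z_j) * sum_m exp(i * 6 * theta_jm) |,
    with neighbours z_j taken from the Delaunay triangulation."""
    points = np.asarray(points)
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    psi = np.zeros(len(points))
    for j in range(len(points)):
        nbrs = indices[indptr[j]:indptr[j + 1]]
        if len(nbrs) == 0:
            continue
        bonds = points[nbrs] - points[j]
        theta = np.arctan2(bonds[:, 1], bonds[:, 0])  # bond angle to the x axis
        psi[j] = np.abs(np.mean(np.exp(6j * theta)))  # 6j is the complex literal 6i
    return psi
```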
If the circular boundary were ‘flattened’, removing the wall curvature, the regular spacing of the optically trapped particles would promote hexagonal ordering. However, the curvature of the boundary is incommensurate with hexagonal order, suppressing ψ_6 in its immediate vicinity (Supplementary Fig. S2), indicating that wall roughness is not the source of the observed locally hexagonal structure; rather, this structure is made possible by the adaptive confinement, which can be distorted. We explore these competing structures further in Fig. 2a, where ψ_6 is plotted as a function of packing fraction. There is a general trend of increasing ψ_6 with area fraction for φ_eff < 0.77. Concentrically layered structures, such as that shown in Fig. 1c for N = 44, have ψ_6 in the range 0.6 < ψ_6 ≲ 0.75. In our experiments at high packing fractions, a distribution of samples with high and low ψ_6 is found. These correspond to hexagonal and layered fluid structures, respectively. For hard systems in general, we associate entropically driven ordering with increased packing, and hard discs are no exception (ref. 20). Figure 2: Structure and dynamics in adaptive confinement. (a) Average local hexagonal order parameter ψ_6 as a function of effective area fraction. Horizontal dashed line demarcates hexagonal structures (ψ_6 > 0.775) from layered structures (ψ_6 ≤ 0.775). Circles are experimental data, open triangles are simulation. Blue shaded region represents densities at which concentrically layered structures compete with locally hexagonal structures. Points are coloured by corral population, N. Inset includes low-density data in the bulk fluid regime. Fluid-hexatic phase coexistence in the bulk (ref. 20) is indicated by the turquoise region. Line is to guide the eye. (b) Experimental self-ISF, F_S(q, t), at different effective area fractions, φ_eff. Lines labelled with φ_eff. The wavevector q = 2π/σ_eff is taken close to the main peak of the static structure factor. Full size image Before turning to the cause of this apparent degeneracy in ψ_6 at high area fraction, we address its connection with ergodicity. To investigate the dynamics, we calculate the self-intermediate scattering function (ISF) F_S(q, t) = ⟨exp(iq·[r(t + t′) − r(t′)])⟩, where the angle brackets indicate averaging over particles and time origins t′, and the wavevector q = 2π/σ_eff is taken close to the main peak of the static structure factor. The resulting ISFs are plotted in Fig. 2b. For φ_eff ≳ 0.75, the ISF does not decay on the experimental timescale (2.03 × 10^4 s, or 290 Brownian diffusion times, τ_B), indicating that the system does not reach equilibrium. Thus, it is possible that the amorphous structures could be metastable to hexagonal configurations, or even vice versa. As we shall see, however, this degeneracy is in fact a manifestation of a bistable state found at high density. By contrast, our simulations do reach equilibrium, as shown in Supplementary Fig. S3. Thus we find one value of ψ_6 for each φ_eff in Fig. 2a. In order to observe full decay of the experimental ISFs at all area fractions considered, much longer experiments would be required: we estimate that an experimental duration of ~200 h would be needed for full equilibration of all experimental samples. Additionally, ergodicity may be recovered through a reduction in boundary stiffness (and hence in φ_eff), as a softer boundary allows the system to expand. We estimate that if the trap stiffness falls to ~0.45 of its experimental value, then the N = 49 particle system would relax on the experimental timescale.
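The self-ISF just described is also simple to compute from recorded trajectories. A minimal sketch below uses the isotropic 2D convention, in which averaging exp(iq·Δr) over wavevector orientations gives a Bessel function J0(q|Δr|); array shapes and names are illustrative assumptions:

```python
import numpy as np
from scipy.special import j0

def self_isf(traj, q, lags):
    """traj: (n_frames, n_particles, 2) array of positions.
    lags: positive frame offsets. Returns F_s(q, t) at each lag, averaged over
    particles, time origins t' and (via the J0 form) wavevector orientations."""
    fs = []
    for lag in lags:
        dr = traj[lag:] - traj[:-lag]                 # displacements over this lag
        fs.append(j0(q * np.linalg.norm(dr, axis=-1)).mean())
    return np.array(fs)

# e.g. evaluate near the main structure-factor peak: q = 2 * np.pi / sigma_eff
```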
Measurement of pressure Owing to the elasticity of the confinement, densely populated corrals have a radius that is, on average, larger than that of the unpopulated corral. The confining boundary is ‘stretched’ by the interior population, displacing the wall particles from their optically defined energy minima at R_0 ≈ 4.32 σ_eff. For small deformations of the corral, we assume each trap creates an identical Hookean restoring force on each particle in the ring (see Methods and Supplementary Fig. S4). By measuring the expansion of the corral with respect to its unpopulated size, we directly obtain the osmotic pressure of our system. The Hookean restoring force on each trapped particle is F = −λ(R − R_0); we average over all 27 trapped particles when calculating R. That the force experienced by each trapped particle can be treated as identical is important—it indicates the system is quasi-isobaric. The pressure is calculated as P = 27λ(R − R_0)/(2πR) (Equation 1), that is, the total radial restoring force exerted by the 27 traps divided by the circumference of the corral. Figure 3 shows the dimensionless time-averaged radial pressure calculated using Equation 1 for experimental and simulated corral systems in the range 0.50 ≤ φ_eff ≤ 0.8. For state points where bulk hard discs are fluid (φ_eff ≤ 0.70), we find good agreement between pressure measurements of our confined system and bulk values (ref. 23). The symbols are coloured based on the value of ψ_6 plotted in Fig. 2, with red indicating low ψ_6 (concentric layering) and blue indicating high ψ_6 (local hexagonal ordering). Figure 3 reveals the interplay between pressure and ordering: for a given population, higher-ψ_6 samples in general exhibit lower pressures and higher area fractions. Figure 3: Pressure measurement. Dimensionless radial pressure as a function of area fraction for both experiment (filled circles) and Monte-Carlo simulation (open triangles). Symbols are grouped inside dashed lines indicating the corral population, N, they represent and are coloured based on the average value of ψ_6 of particles non-adjacent to the corral wall, with red points indicating low ψ_6 and blue points indicating high ψ_6. Black crosses joined by grey lines are data from bulk hard disc simulations (ref. 24). Inset includes low-density data in the bulk fluid regime. Fluid-hexatic phase coexistence in the bulk (ref. 20) is indicated by the turquoise region. Grey line indicates bulk pressure for the fluid (ref. 23) and the solid (ref. 24). Shaded blue regions denote area fractions for which bistable behaviour is observed. Full size image At high packings (φ ≳ 0.77), the experimental system does not reach equilibrium. As our measurements of pressure are mechanical, we can measure p for such non-equilibrium state points (Fig. 2b). However, because phase space is not fully sampled, each experiment gives a different pressure value, resulting in a range of observed pressures for a given interior population. Conversely, because our simulations reach equilibrium, a single value is found for the pressure. Our system exhibits a departure from bulk behaviour (ref. 24) for φ ≳ 0.77, a similar packing fraction to that at which the system starts to visit hexagonal configurations. We assume that such a deviation is due to confinement. Structural bistability Although the highly ordered hexagonal structures feature a lowering in pressure, which in bulk systems indicates a reduction in free energy, here we find that these coexist with higher-pressure, disordered layered fluid structures. We find transitions between both structures, accompanied by changes in area fraction and ψ_6. Examples are given in Fig.
4, which shows the time evolution of the instantaneous area fraction, pressure, ψ_6 and mean-squared particle displacement for a sample of population N = 47. The system initially undergoes a transition from high to low area fraction and from high to low ψ_6. A structure similar to the original is recovered between 140 and 170 τ_B. These transitions are indicated by the dashed lines in Fig. 4a–d and occur over time intervals of ~10 τ_B and ~30 τ_B, respectively. Figure 5a shows the particle rearrangements in the second transition interval (purple data in Fig. 4a–d at 140–170 τ_B), which result in the increase in area fraction and decrease in pressure. A cooperative rearrangement of particles, strongly localized in time, results in a structural change in the system, allowing the boundary to contract. Figure 4: Experimentally observed structural transitions. Time evolution of (a) instantaneous effective area fraction, (b) pressure, (c) ψ_6 and (d) mean-squared particle displacement over a timescale of 6 τ_B for a single sample of population N = 47. Transitions occur first from a high-ψ_6 configuration at t = 0, and later from a low-φ_eff, high-pressure configuration to one at higher φ_eff and lower pressure. Transition intervals are indicated by vertical dashed lines. Full size image Figure 5: Dynamical heterogeneity and structural bistability. (a) Displacement of particles in the transition period around 140–170 τ_B in Fig. 4. The magnitude of displacement is indicated by the length and colour of each arrow. Arrows in the grey ring correspond to particles in the optical traps forming the confining wall. (b) Distribution of ψ_6 from simulated configurations at N = 47. Full size image On the experimental timescale, such cooperative rearrangements are rare. Dynamically, the system is characterized by long periods of low particle mobility at different area fractions (or pressures), separated by short rearrangement intervals of higher particle mobility (Δr² > 0.3 σ²), during which a subset of the system undergoes cooperative, neighbour-changing motion, as shown in Fig. 4d. Such isolated events are the mechanism by which the system relaxes. Thus we find temporal dynamic heterogeneity; in other words, rearrangements correspond to active periods interspersed with inactive periods. Such temporally heterogeneous dynamics between inactive and active periods have been related to the glass transition (ref. 25). Plots such as that shown in Fig. 5a allow the visualization of the size and shape of cooperatively rearranging regions, which yields detailed information about the nature of slow dynamics in confinement. Data such as those in Fig. 4 indicate that the system undergoes transitions between two structures, one of high order and low pressure and one of high pressure and low order. To test this hypothesis, we plot the probability distribution of ψ_6 as obtained from Monte-Carlo simulation in Fig. 5b. Indeed, the behaviour is consistent with structural bistability in that two peaks, one corresponding to each structure, are found. A similar plot obtained from experimental data is shown in Supplementary Fig. S5, although our experiments do not reach full equilibrium at these high area fractions. Figure 5b provides strong evidence that the adaptive confinement induces a bistability in these assemblies of hard discs between a layered fluid (with high pressure and low order) and a hexagonal structure (with low pressure and high order).
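Instantaneous pressure traces of the kind shown in Fig. 4b follow directly from the wall-particle coordinates via Equation 1. A minimal sketch, assuming the prefactor reconstructed above (27 traps on the ring) and illustrative array names:

```python
import numpy as np

def radial_pressure(wall_xy, centre, lam, r0, n_wall=27):
    """wall_xy: (n_frames, 27, 2) positions of the optically trapped wall particles.
    Each trap exerts a Hookean restoring force F = -lam * (R - R0); summing the
    radial force over the ring and dividing by the instantaneous circumference
    gives the 2D radial pressure, frame by frame (Equation 1)."""
    r = np.linalg.norm(wall_xy - centre, axis=-1).mean(axis=-1)  # mean ring radius
    return n_wall * lam * (r - r0) / (2 * np.pi * r)             # pressure time series

# time-averaged pressure for one run:  radial_pressure(...).mean()
```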
This bistability contrasts strongly with phase coexistence in bulk systems, where coexistence of a fluid and a crystal occurs at a single pressure but at different volumes (or, equivalently, areas). Pressure and area are coupled in our system, and therefore the two coexisting structures have distinct areas and pressures. Discussion At first sight, the emergence of a hexagonal structure at high density seems reminiscent of bulk hard discs. However, our adaptive confinement introduces a potential energy to the system. Indeed, that the hexagonal structure resides at lower pressure and higher order would suggest that it should be strongly favoured. That we find the system in layered fluid configurations at all indicates that more is at play than energy. This leaves entropy, and in fact there are fewer configurations accessible to the hexagonal structure. This is because free volume is expended in the ‘voids’ close to the walls in Fig. 1d. Thus, because the layered fluid fits the cavity better than the hexagonal configuration, it is entropically favoured, in contrast to the case of bulk hard discs. Further evidence for this scenario is provided by the suppression of hexatic ordering relative to the bulk, where it is found for φ ≥ 0.716 (ref. 20). Here we find hexatic ordering only for φ_eff ≥ 0.77. This suppression results from the entropic loss associated with hexatic ordering. The presence of voids adjacent to the boundary in hexagonally ordered systems is also indicated in Supplementary Fig. S6, where the local area fraction is found to decrease towards the boundary of the corral for both concentrically layered and locally hexagonal configurations. This effect is stronger in the case of the hexagonally ordered system, where the local area fraction adjacent to the boundary is suppressed by ~3% relative to the average. In the layered fluid this suppression is only half as strong, with a local area fraction ~1.5% lower than the average. As the observed structural bistability depends upon the adaptivity of the boundary, it is feasible that controlled modification of the wall can induce transitions between layered and hexagonal structures. The corral wall is defined by its size, shape and stiffness. Altering any one of these in situ can potentially drive the confined system from one structure to another. For instance, a hexagonal configuration can be ‘melted’ by increasing the diameter of the confining ring. Similarly, hexagonal configurations can be favoured by altering the shape of the confining boundary—the extreme case of this being confinement by a hexagonal wall (Supplementary Fig. S7). Furthermore, as noted above, stiffer boundaries inhibit hexagonal ordering (Supplementary Fig. S1), which indicates that increasing the optical trap strength is capable of driving a locally hexagonal configuration into a concentrically layered configuration, and vice versa. A confined model experimental system of quasi hard discs is introduced with a ring of particles held in HOT. Monte-Carlo computer simulations show that the system is well described as hard discs confined by particles in harmonic potentials. We demonstrate that measuring the expansion of the optically defined confining boundary due to the interior particles enables direct measurement of the radial osmotic pressure of the confined system. As the confining wall adapts to the interior, many more configurations can be accessed than in conventional static confinement.
Among these accessible configurations, we find hexagonal ordering reminiscent of the bulk, which competes with concentric layering echoing the ring of trapped particles. Furthermore, for a given number of interior particles, the system is quasi-isobaric, enabling transitions between these two structures: a bistable state. Unlike the case for bulk systems, here the configurational entropy of the hexagonal structure is lower than that of the fluid, because it is incommensurate with the circular boundary. It would be interesting to carry out microscopic density functional calculations for hard discs in order to predict the structural and dynamical collective behaviour of the confined systems 26 . Our system is ideal for studying the basic properties of adaptive confinement and can readily be generalized to three dimensions (3D). This would enable the osmotic pressure measurements to be applied to other systems, including active matter such as driven colloids 27 and bacteria. By constructing a corral around a cell and varying the salt concentration, our technique might even enable the turgor pressure to be directly determined. Moreover, as ensembles of rigid non-spherical particles are used as an approximation to the cytoplasm 28 , our approach could even model some properties of cells. Finally, with a suitable choice of geometry, for example a fixed cylinder with a 2D tweezed array, realization of nanoscale Brownian pistons is now feasible, which would enable direct tests of basic thermodynamic behaviour such as compression. Methods Sample details The experimental system consists of polystyrene colloids of diameter σ =5.0 μm with a polydispersity of 2%, suspended in a water–ethanol mixture at a ratio of 3:1 by weight. The Debye length in our experimental samples is estimated by matching the Barker–Henderson effective hard sphere diameter 22 to the simulated hard disc diameter that best reproduces experimental behaviour. This results in a Debye length of κ −1 ≈25 nm, which is consistent with the experimental conditions. We assume the effective colloid charge Z eff is given by Z eff λ B /σ~6. This leads to a Barker–Henderson effective hard sphere diameter σ eff =5.08 μm. The density mismatch between the particles and the solvent is such that their gravitational length is l g / σ eff =0.015(1), which results in fast sedimentation of suspended particles and the formation of a quasi-2D monolayer adjacent to a glass coverslip substrate. This coverslip is made hydrophobic by treatment with Gelest Glassclad 18 to prevent particle adhesion. The Brownian time, τ B , is determined empirically by measuring the mean-squared displacement in a dilute system. We define the Brownian time as the average time needed to diffuse a distance of one particle radius. For our experimental conditions we measure a Brownian time of τ B ≈70.2 s (a minimal sketch of this estimate is given below). Holographic optical tweezers The HOT apparatus consists of an ytterbium fibre laser of wavelength 1,064 nm modulated by a computer-addressed liquid-crystal-on-silicon spatial light modulator (Holoeye PLUTO-NIR) capable of applying phase shifts of up to 2π to laser light reflected from each of its 1,920 × 1,080 pixels. The application of superposed phase gratings to the SLM modulates the incident beam such that it can form arbitrary arrays of optical traps 21 . The HOT apparatus is controlled using LabVIEW software adapted from that developed by the Glasgow University Optics Group 29 .
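The Brownian-time estimate mentioned in the Sample details reduces to a two-line calculation; this is a minimal sketch assuming 2D free diffusion, ⟨Δ r 2 ⟩ = 4 Dt , so that the time to diffuse one particle radius is τ B = (σ/2) 2 /(4 D ) = σ 2 /(16 D ). With the quoted σ = 5.0 μm and τ B ≈ 70.2 s, this implies D ≈ 0.02 μm 2 s −1 , a plausible value for the hindered diffusion of a 5 μm sphere close to a substrate.

```python
import numpy as np

def brownian_time(t, msd, sigma):
    """Brownian time from a dilute-limit mean-squared displacement curve.

    t     : lag times
    msd   : mean-squared displacement at those lag times (length^2)
    sigma : particle diameter (same length units)
    Assumes 2D free diffusion, <dr^2> = 4*D*t, and defines tau_B as the
    average time to diffuse one particle radius, as in the Methods.
    """
    D = np.polyfit(t, msd, 1)[0] / 4.0       # slope of the MSD / 4 gives D in 2D
    return (sigma / 2.0) ** 2 / (4.0 * D)    # equals sigma**2 / (16 * D)
```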
Optical trapping is integrated into an inverted microscope (Zeiss Axiovert 200) and facilitated by a high numerical aperture objective (Zeiss Plan-Neofluar × 100 magnification). This same objective images the sample to a charge-coupled device camera (Allied Vision Technologies Dolphin F-145B), which relays sample images to the computer. The colloidal corral As shown in Fig. 1 , 27 particles are positioned using HOT, forming the adaptive corral boundary. These traps are well approximated by parabolae with spring constant λ = 302(2). This spring constant is determined by measuring the probability distribution of radial coordinates for particles forming the boundary in the absence of a confined population. By assuming this radius is Boltzmann-distributed, the optical potential is extracted and fitted with the parabolic form characteristic of a Hookean spring (see Supplementary Fig. S2 ). As we find a good degree of uniformity in the strength of our optical traps, we consider an effective corral potential described by a single spring constant, rather than extracting individual spring constants for each of our 27 optical traps. Variations in the radial spring constant for individual optical traps are a few percent of the value of the effective spring constant. It is integral to this work that the confined particles are unaffected by the light field used to create the circular boundary. The light field due to the 27 optical traps is assessed by imaging the laser light reflected from a glass–air interface at maximum camera gain. Supplementary Fig. S8a shows a single image of the light field in which the 27 optical traps are clearly visible. If the intensity of each pixel is summed over a sequence of images, one obtains the composite image in Supplementary Fig. S8b , in which background noise is suppressed. It is clear from these high-gain images of the light field that the interior region is free from unwanted optical influence. Experimental data are acquired for up to 6 h at 0.5 frames per second. Particle trajectories are extracted 30 for corral populations N ≤49 corresponding to area fractions φ eff <0.8. Area fraction is estimated from experimental data by considering the Voronoi decomposition of the particle coordinates for each frame. The particles forming the confining boundary are neglected; only the Voronoi cells of the interior population are considered. In a given frame, the instantaneous area fraction is defined using the circular cross-sectional area of a particle (assuming particles are monodisperse) and the average Voronoi cell area (or the average area per particle). For densely populated corrals this gives a good estimate of the area fraction; however, at low densities, as the Voronoi polygons of the particles forming the confining walls penetrate further into the corral interior, this method may overestimate the area fraction. Unless a distinct transition is observed in the course of an experiment, the area fraction is taken to be the time average of the instantaneous area fraction. Data exhibiting a transition are split into pre- and post-transition sequences and thereafter treated as distinct experiments. Monte-Carlo simulation Monte-Carlo simulations reproduce the experiments. N hard discs are placed in a circular region enclosed by 27 additional discs. To reproduce the confining effect of the optical tweezers, each of these 27 discs lies in a parabolic potential energy well of stiffness λ = 302, as found by fitting experimental data.
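A minimal sketch of one sweep of the Monte-Carlo simulation just described: interior discs move by hard-core rejection alone, while the 27 wall discs additionally pass a Metropolis test on their harmonic trap energy ( kT = 1, lengths in units of σ). The trial step size and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_sweep(pos, is_wall, trap_centres, lam, sigma=1.0, step=0.05):
    """One sweep over hard discs in an adaptive corral.

    pos          : (N_tot, 2) disc positions (interior plus 27 wall discs)
    is_wall      : (N_tot,) boolean mask, True for the 27 boundary discs
    trap_centres : (N_tot, 2) trap centres (only meaningful for wall discs)
    lam          : trap stiffness in units of kT/sigma**2
    """
    for i in rng.permutation(len(pos)):
        trial = pos[i] + rng.uniform(-step, step, 2)
        d2 = np.sum((pos - trial) ** 2, axis=1)   # distances to all other discs
        d2[i] = np.inf                            # ignore self
        if np.any(d2 < sigma ** 2):               # hard-core overlap: reject
            continue
        if is_wall[i]:                            # harmonic trap energy change
            dE = 0.5 * lam * (np.sum((trial - trap_centres[i]) ** 2)
                              - np.sum((pos[i] - trap_centres[i]) ** 2))
            if rng.random() >= np.exp(-dE):       # Metropolis criterion, kT = 1
                continue
        pos[i] = trial                            # accept the move
```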
The experimental system is quasi-2D and the electrostatic charge leads to some softness in the interparticle interactions. Even without these considerations, in 2D the accuracy to which φ eff can be determined is ≳ 4% (ref. 31 ). The effect of interactions complicates the situation 32 . Here we treat R 0 as a fit parameter, with 4.30≤ R 0 ≤4.44 and find best agreement with R 0 =4.32. We perform five independent runs for each N using 10 7 Monte Carlo sweeps to equilibrate the system. Although considerable fluctuations are seen between different configurations, at a given state point no significant changes in φ , ψ 6 or pressure are observed between any of our runs, so we assume that the system is equilibrated. Supplementary Fig. S3 presents further evidence of ergodicity in simulation. Confirmation that the simulations of 2D hard discs are a good approximation to our experimental system is provided in Supplementary Fig. S9 , where we explicitly include the effect of gravity to form a quasi-monolayer in a 3D system and model electrostatic interactions via a Yukawa potential. Additional information How to cite this article: Williams, I. et al . Direct measurement of osmotic pressure via adaptive confinement of quasi hard disc colloids. Nat. Commun. 4:2555 doi: 10.1038/ncomms3555 (2013). | While the air pressure in a wheel and the blood pressure inside a human body can precisely be measured, it is still a challenge to measure the pressure inside microscopic objects such as cells in our bodies. Researchers from Universities of Bristol and Düsseldorf (Germany) have found a method to measure the pressure in small objects, which is published in the latest issue of Nature Communications. The idea is similar to using a sleeve when our blood pressure is taken, but on a scale ten thousand times smaller. Rather than squeezing an arm, a liquid of tiny particles is squeezed by other particles using the tiny forces of light known as optical tweezers. Dr Paddy Royall, Royal Society University Research Fellow in the Schools of Physics and Chemistry, said: "In the future, this method can be used to access the turgor pressure inside cells and thus to diagnose various diseases, for example certain types of cancerous cells have abnormally low pressure." | www.nature.com/ncomms/2013/131 … 5/pdf/ncomms3555.pdf |
Computer | Perovskite material with superlattice structure might surpass efficiency of a 'perfect' solar cell | Yusheng Lei et al, Perovskite superlattices with efficient carrier dynamics, Nature (2022). DOI: 10.1038/s41586-022-04961-1 Journal information: Nature | https://dx.doi.org/10.1038/s41586-022-04961-1 | https://techxplore.com/news/2022-08-perovskite-material-superlattice-surpass-efficiency.html | Abstract Compared with their three-dimensional (3D) counterparts, low-dimensional metal halide perovskites (2D and quasi-2D; B 2 A n −1 M n X 3 n +1 , such as B = R-NH 3 + , A = HC(NH 2 ) 2 + , Cs + ; M = Pb 2+ , Sn 2+ ; X = Cl − , Br − , I − ) with periodic inorganic–organic structures have shown promising stability and hysteresis-free electrical performance 1 , 2 , 3 , 4 , 5 , 6 . However, their unique multiple-quantum-well structure limits the device efficiencies because of the grain boundaries and randomly oriented quantum wells in polycrystals 7 . In single crystals, the carrier transport through the thickness direction is hindered by the layered insulating organic spacers 8 . Furthermore, the strong quantum confinement from the organic spacers limits the generation and transport of free carriers 9 , 10 . Also, lead-free metal halide perovskites have been developed but their device performance is limited by their low crystallinity and structural instability 11 . Here we report a low-dimensional metal halide perovskite BA 2 MA n −1 Sn n I 3 n +1 (BA, butylammonium; MA, methylammonium; n = 1, 3, 5) superlattice by chemical epitaxy. The inorganic slabs are aligned vertical to the substrate and interconnected in a criss-cross 2D network parallel to the substrate, leading to efficient carrier transport in three dimensions. A lattice-mismatched substrate compresses the organic spacers, which weakens the quantum confinement. The performance of a superlattice solar cell has been certified under the quasi-steady state, showing a stable 12.36% photoelectric conversion efficiency. Moreover, an intraband exciton relaxation process may have yielded an unusually high open-circuit voltage ( V OC ). Main We studied the growth process and structure of BA 2 SnI 4 ( n = 1) superlattice on a MAPb 0.5 Sn 0.5 Br 3 substrate (Supplementary Discussion 1 and Supplementary Figs. 1 – 4 ). The Sn-I slabs exhibit a favourable epitaxial relationship with the substrate, forming a thermodynamically stable, vertically aligned lattice 12 (Supplementary Fig. 1 ). Scanning electron microscopy (SEM) images show that the crystals first grow into criss-cross vertical thin plates followed by lateral merging (Fig. 1a and Supplementary Fig. 2 ). Similar growth behaviour can be observed in other low-dimensional perovskites grown on different substrates (Supplementary Figs. 3 and 5 ). Cryogenic scanning transmission electron microscopy (STEM) was used to study the structure of a single plate, which exhibits anisotropy (Fig. 1b ). The a – c plane shows a periodic distribution of inorganic Sn-I slabs and organic BA spacers along the a direction (Fig. 1b , middle and Supplementary Fig. 6 ). The b – c plane shows a continuous Sn-I slab with a coherent heteroepitaxial interface with the substrate (Fig. 1b , right). Therefore, the criss-cross vertical plates on the substrates create a 3D network of Sn-I slabs, not seen previously in any polycrystals (Supplementary Fig. 7 ) or conventionally grown single crystals. 
Furthermore, grazing-incidence wide-angle X-ray scattering verified their vertically aligned structures 13 , 14 (Supplementary Fig. 8 ). Fig. 1: Structural characterizations of the BA 2 SnI 4 superlattice. a , SEM images showing the criss-cross epitaxial BA 2 SnI 4 superlattice before and after merging into a thin film. Scale bars, 2 μm. b , Schematic (left) and atomic-resolution cryogenic STEM images (middle and right) showing the superlattice structure of a single plate. Cryogenic STEM is essential to minimize the damage of beam-sensitive materials. The epitaxial layer has a well-aligned anisotropic structure without grain boundaries or dislocations. The insets are fast Fourier transform patterns from the epitaxial layer in the a – c plane, which show a 2D diffraction pattern of the superlattice that is different from that of the substrate (middle). The inset fast Fourier transform images in the b – c plane show the structural similarity between the inorganic slab and the substrate (right). Organic atoms are usually invisible under electron diffraction. Scale bars, 6 nm. c , Photocurrent measurements with a linearly polarized excitation source showing that the response of the epitaxial layer (top) has a period half that of a conventionally grown single crystal (bottom). d , Transient photovoltage measurements showing the orientation-dependent carrier lifetime in the a – b plane. The inset optical image shows the measurement setup. The error bars are from measurements of five different devices. Scale bar, 500 μm. Full size image To further study the crystal orientation in the a – b plane, we measured the polarization-dependent photocurrent of superlattices and conventionally grown single crystals (Fig. 1c ). The results in both show a strong dependence on the polarization direction, but the response of superlattices has a 90° period, whereas that of conventionally grown single crystals has a 180° period. This is because the inorganic slabs are aligned in two perpendicular orientations in the a – b plane of superlattices, but in only one orientation of conventionally grown single crystals (Supplementary Fig. 9 ). Similar trends can also be observed in the carrier lifetime obtained from orientation-dependent transient photovoltage measurements (Fig. 1d and Supplementary Fig. 10 ). These results collectively indicate that the superlattice has interconnected Sn-I slabs, with numerous criss-cross thin plates merged in the a – b plane. Because of the interconnected Sn-I slabs, carriers in the superlattice do not need to cross any grain boundaries or organic spacers, either in plane or out of plane. Transient photocurrent measurements along the film thickness ( c direction) show a much higher carrier mobility in the superlattice than in the polycrystalline or conventionally grown single-crystal sample (Fig. 2a ). The grain boundaries in polycrystals markedly reduce carrier mobility 15 (Supplementary Fig. 11 ). The layered organic spacers make the mobility in conventionally grown single crystals the lowest (Supplementary Fig. 12 ). Power-dependent time-resolved photoluminescence measurements show that the superlattice has a longer carrier lifetime than the polycrystal (Fig. 2b ), indicating minimal restriction of the carriers. Furthermore, superlattices show better tolerance to high excitation power than polycrystals, suggesting that better crystallinity can reduce material degradation under high excitation power 16 . Fig.
2: Carrier transport properties of the BA 2 SnI 4 superlattice. a , Transient photocurrent measurements along the film thickness ( c ) direction. The superlattice shows the highest carrier mobility. The carrier mobility in the polycrystal is limited by grain boundaries and lattice misalignments between grains. The conventionally grown single crystal shows the lowest carrier mobility because of the energy barriers caused by the organic spacers along the film thickness direction. The insets show the schematic measurement setups. The error bars are from measurements of five different devices. b , Time-resolved photoluminescence measurements showing a longer carrier lifetime in the superlattice than in the polycrystal. The lifetime–power relationship in the polycrystal tends to deviate from a linear fit (dashed lines) at high excitation power owing to absorber degradation. The error bars are from measurements of five different devices. c , Temperature-dependent J – V measurements on solar cells (ITO/ICBA/perovskite/PTAA/Au; active size, 1 mm 2 ) fabricated on as-grown films. The current density values are normalized. As temperature drops, the FF of the superlattice device does not change as strongly as that of the polycrystal device, indicating a lower internal energy barrier in the superlattice. PTAA, poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine]. d , SEM images and corresponding EBIC mapping of the top surface of BA 2 SnI 4 films. The polycrystal exhibits grain-dependent current signals. The superlattice exhibits stronger current signals with a criss-cross pattern, even with a smooth film surface. Scale bars, 200 nm. e , SEM images and corresponding EBIC mapping of the cross section of BA 2 SnI 4 films. The polycrystal exhibits grain-dependent current signals. The superlattice exhibits stronger current signals with a linear pattern. Scale bars, 100 nm. f , Thickness-dependent EQE measurements. The superlattice device exhibits a higher EQE with a larger optimal absorber thickness, indicating that the carrier diffusion length in the superlattice is longer than that in the polycrystal. A longer-wavelength collection edge also indicates a smaller bandgap in the superlattice. Full size image The structural advantages of superlattices are validated with temperature-dependent current density–voltage ( J – V ) characteristics of a BA 2 SnI 4 solar cell. To investigate the internal energy barriers for carrier transport, we fabricated a device directly on the superlattice without peeling it off from the epitaxial substrate, to minimize any possible confounding factors introduced by the fabrication process 15 (Supplementary Discussion 2 and Supplementary Figs. 13 and 14 ). As the temperature gradually decreases, thermal energy becomes too small for the carriers to overcome barriers (for example, owing to ionized impurity scattering), so the fill factor (FF) decreases substantially for both superlattice and polycrystalline devices (Fig. 2c ). However, the decrease is less pronounced in superlattices, indicating lower internal energy barriers. We measured the electron-beam-induced current (EBIC) to visualize carrier transport barriers. For polycrystals, the collected currents on the thin film surface heavily depend on grain orientations, indicating disorientated multiple quantum wells (Fig. 2d , left). By contrast, superlattices yield higher and much more uniform currents owing to the well-aligned crystal structure (Fig. 2d , right).
Note that superlattices exhibit a criss-cross current pattern owing to their imperfect merging during solution growth (Supplementary Fig. 15 ). Similar observations can also be made in the sample cross sections (Fig. 2e and Supplementary Discussion 3 ). The improved carrier dynamics of superlattices allow a longer carrier diffusion length. When used as the photovoltaic absorber, polycrystals are usually highly restricted in thickness 17 ; accordingly, the external quantum efficiency (EQE) of BA 2 SnI 4 polycrystals peaks at an absorber thickness of about 400 nm (Fig. 2f , top). By contrast, the absorber thickness for superlattices can be increased to around 700 nm with enhanced light absorption and, thus, EQE (Fig. 2f , bottom). We investigated the heteroepitaxial strain in BA 2 SnI 4 superlattices quantitatively by X-ray diffraction. Compared with conventionally grown single crystals, high overall compressive strains are present in superlattices along the a and b directions, at around 8.59% and around 1.32%, respectively (Fig. 3a , top); a tensile strain of roughly 0.99% is present in the c direction owing to the Poisson effect 18 (Fig. 3a , bottom, Supplementary Discussion 4 and Supplementary Table 1 ). These strains are validated by STEM images (Supplementary Fig. 6 and Supplementary Discussion 4 ). Structural computation by density functional theory (DFT) further shows a lattice compression of Sn-I slabs from about 6.04 Å to about 5.94 Å in the a direction (Supplementary Fig. 16 ), yielding an approximately 1.66% strain, which is close to the 1.32% strain in the b direction; the width of organic spacers is compressed from about 7.00 Å to about 5.98 Å (Supplementary Figs. 16 and 17 ), corresponding to an approximately 14.6% strain. Therefore, the high compressive strain is mostly accommodated by the organic spacers. High strain reduces the stability of superlattices (Supplementary Figs. 18 and 19 ). For general heteroepitaxial BA 2 MA n −1 Sn n I 3 n +1 , as n increases, the volume ratio of the Sn-I slabs increases, the overall lattice strain decreases (Fig. 3b ) and the structure is more stable. Moreover, lower strain results in fewer structural defects and smoother surfaces (Fig. 3b , inset images). Fig. 3: Strain properties of BA 2 MA n −1 Sn n I 3 n +1 superlattices. a , X-ray diffraction measurements of the BA 2 SnI 4 superlattice and conventionally grown BA 2 SnI 4 single crystals. A compressive strain in the a – b plane and a tensile strain along the c direction are observed in the superlattice. b , DFT-computed and experimentally calculated lattice strain with different n in low-dimensional BA 2 MA n −1 Sn n I 3 n +1 perovskites. Crystals with larger n will have smaller strain. Inset SEM images show that a larger n will result in a smoother surface, which is attributed to fewer defects under smaller epitaxial strain. Scale bars, 50 μm. c , Ellipsometry measurements of the dielectric function ( ε ′ + iε ″) of the BA 2 MA 2 Sn 3 I 10 superlattice and conventionally grown BA 2 MA 2 Sn 3 I 10 single crystals. The larger ε ′ in the superlattice indicates that the compressive strain can increase the dielectric constant and the Bohr radius in the superlattice. A red shift in ε ″ shows that the compressive strain decreases the bandgap of the superlattice. d , Estimated exciton binding energies obtained from temperature-dependent photoluminescence measurements.
The smaller fitted exciton binding energy in the superlattice than in the polycrystal indicates a weaker quantum confinement effect because of the smaller width of the organic barrier. In the inset equation, I is the integrated photoluminescent intensity, I 0 is the integrated intensity at room temperature, A is an arbitrary constant, E B is the exciton binding energy, k B is the Boltzmann constant and T is the temperature. Full size image To avoid structural change and achieve reliable measurements of superlattices, we chose BA 2 MA 2 Sn 3 I 10 ( n = 3) to study their strain-controlled optoelectronic properties. We used ellipsometry to study the dielectric functions ( ε ′ + iε ″). The higher ε ′ of superlattices indicates weakened quantum confinement by compressed organic spacers (Fig. 3c ), a larger Bohr radius in the multiple quantum wells and, therefore, a higher rate of free-carrier generation 19 (Supplementary Discussion 5 ). Moreover, the shift in ε ″, which reflects the absorption wavelength 20 , suggests a smaller bandgap in superlattices compared with conventionally grown single crystals, which is also evident from the longer-wavelength collection edge of superlattices (Fig. 2f and Supplementary Fig. 20 ). Temperature-dependent photoluminescence measurements also show a much-reduced fitted exciton binding energy in superlattices compared with conventionally grown single crystals 18 , 19 (Fig. 3d ). In addition, the carrier lifetime in superlattices is slightly longer than in conventionally grown single crystals at 0° in transient photovoltage measurements (Fig. 1d ). All these characteristics can be attributed to the weakened quantum confinement in superlattices. Large heteroepitaxial strains heavily influence the stability of superlattices (Fig. 3b , Supplementary Discussion 4 and Supplementary Figs. 18 and 19 ). We chose BA 2 MA 4 Sn 5 I 16 ( n = 5) to investigate the device performance owing to its better stability. To further relieve the strain and create an even more stable structure, we investigated using Bi 3+ (103 pm in radius 21 ) to partially replace Sn 2+ (118 pm in radius 22 ). DFT calculations show that the Bi 3+ tends to concentrate at the interface between the inorganic slab and the organic spacer to relieve the compressive strain (Fig. 4a , top and Supplementary Fig. 21 ), forming an aggregated Bi 3+ atomic layer that decreases the formation energy (Supplementary Fig. 22 and Supplementary Discussion 6 ) of the superlattice and yields a more stable structure (Supplementary Fig. 23 ). Furthermore, the aggregated Bi 3+ alloying decreases the conduction band minimum (CBM) (Fig. 4a , bottom and Supplementary Figs. 24 and 25 ). The region without Bi 3+ alloying remains intact. The result is an inorganic slab with a double-band structure. Fig. 4: Photovoltaic studies of the Bi 3+ -alloyed superlattice. a , Structure of the Bi 3+ -alloyed BA 2 MA 2 Sn 3 I 10 superlattice computed by DFT. The Bi 3+ ions preferentially aggregate at the interface between the organic and inorganic slabs to relieve the lattice strain (top). The aggregated Bi 3+ alloying alters the electronic band structure, resulting in a substantially decreased CBM. Combined with the region without Bi 3+ , they form a double-band structure in the inorganic slab (bottom). b , Certified photovoltaic performance measurement based on a Bi 3+ -alloyed BA 2 MA 4 Sn 5 I 16 superlattice, showing a bandgap of 1.042 eV and a V OC of 0.967 V. c , Unusual carrier transport processes with intraband relaxation, resulting in a high V OC .
Note that both Sn-I and Bi/Sn-I regions are in direct physical contact with the ETL. HTL, hole transport layer. d , Wavelength-dependent J – V measurements of a polycrystalline solar cell with a uniform Bi 3+ distribution and, therefore, a single bandgap (left) and a superlattice solar cell (right). In the polycrystalline device, reasonably small variations in the FF indicate that the carrier transport and the collection are almost independent of wavelength; therefore, the abruptly decreased V OC at about 1,000 nm suggests that the uniform Bi 3+ distribution does not alter the band structure. In the superlattice device, when the incident wavelength is shorter than around 900 nm, neither FF nor V OC exhibits an obvious wavelength dependency. However, once the excitation wavelength is longer than about 900 nm, both FF and V OC decrease substantially. e , Extracted FF and V OC from d . Full size image We studied the photovoltaic performance of these superlattices. We chose a 10% Bi 3+ -alloyed BA 2 MA 4 Sn 5 I 16 ( n = 5) superlattice with a textured surface and fabricated a solar cell directly on the epitaxial substrate (Supplementary Figs. 26 and 27 ). Indene-C60 bisadduct (ICBA) was used as the electron transport layer (ETL) because its CBM level (Supplementary Fig. 28 ) is higher than that of the Bi/Sn-I slabs but lower than that of the Sn-I slabs (Supplementary Table 2 ). The Bi/Sn-I and the Sn-I regions are both in contact with the ETL. The as-certified superlattice solar cell exhibits a stable 12.36% photoelectric conversion efficiency under the quasi-steady state (Supplementary Fig. 29 ), the highest among lead-free low-dimensional perovskite solar cells. To further replace the lead-containing substrate, it is also feasible to use other substrates (Supplementary Figs. 3 and 5 ) or to exfoliate and transfer the superlattice from the epitaxial substrate to a general substrate (Supplementary Figs. 30 , 31 and 32 ). Moreover, the quantum efficiency of the solar cell (Fig. 4b and Supplementary Fig. 29 ) shows a carrier collection cut-off at approximately 1,190 nm, which gives a bandgap of about 1.042 eV and a V OC of at most 0.802 V according to the Shockley–Queisser limit 23 . However, the certified V OC is 0.967 V, indicating other contributing mechanisms. Figure 4c shows the schematic band diagram of the superlattice solar cell. Because the aggregated Bi 3+ alloying in superlattices could lead to a radiative band structure besides the band-tail states that commonly exist in Bi 3+ -doped polycrystals 24 , 25 , 26 , 27 , 28 , 29 (Supplementary Fig. 23 and Supplementary Discussion 6 ), an intraband relaxation mechanism may contribute to the high V OC . We performed wavelength-dependent J – V measurements to investigate the potential mechanism (Fig. 4d,e ). Under short incident wavelengths (less than about 1,000 nm), most electrons are excited into energy states higher than the CBM of both Sn-I and Bi/Sn-I regions. Those electrons from the Sn-I region naturally relax to the CBM of the Sn-I region. Furthermore, a substantial portion of the electrons from the Bi/Sn-I region can also relax to the CBM of the Sn-I region through intraband relaxation (solid blue arrows in Fig. 4c ). This transition is possible because the atomically thin Bi/Sn-I region is easy for carriers to diffuse across. Also, the built-in potential in the p-i-n structure might have facilitated this atomic-scale transition; moreover, the ETL favours electron collection from the Sn-I region (solid red arrow in Fig. 4c ).
Therefore, most of the carriers are in the Sn-I region, yielding a high V OC and a high FF (Fig. 4d,e ). Under long incident wavelengths (more than about 1,000 nm), electrons can only be excited in the Bi/Sn-I region. The relatively low-energy electrons can only relax to the CBM of the Bi/Sn-I region and then to the ETL by means of interband transition (dashed red arrows in Fig. 4c ). Therefore, most of the carriers are in the Bi/Sn-I region, contributing to a low V OC (Fig. 4d,e ). The energy barrier between the Bi/Sn-I region and the ETL causes severe charge accumulation (Supplementary Discussion 7 ), resulting in a low FF (Fig. 4d,e ). When the device is excited under mixed incident wavelengths, the high-energy electrons facilitate the quasi-Fermi-level splitting in the Sn-I region. The low-energy electrons will have a relatively small influence on the overall V OC because of the small portion of long wavelengths (between about 1,000 nm and about 1,200 nm) in the solar spectrum (roughly 9%) 30 and, thus, the small number of low-energy electrons. The overall V OC is predominantly determined by the bandgap of the Sn-I region (Supplementary Fig. 33 and Supplementary Discussion 7 ). To verify this mechanism, we collected pump–probe ultrafast transient absorption spectra to investigate the hot carrier dynamics (Supplementary Discussion 8 ). To meet the measurement requirement, a transferred device structure (ITO/superlattice/ICBA/polypropylene tape/ITO) (Supplementary Fig. 34 ) was adopted under an external electric field to mimic the built-in potential of the solar cell. We measured transient absorption spectra with and without the bias (Fig. 5a and Supplementary Figs. 35 and 36 ). The polycrystalline thin films exhibit very different spectral profiles from superlattices (Fig. 5a and Supplementary Fig. 35 ). Obvious ground state bleaching (GSB) signals in the negative intensity region could only be observed in superlattices, indicating more efficient carrier dynamics in the superlattices than in the polycrystalline thin films. Fig. 5: Dynamics analysis of hot electrons in Bi 3+ -alloyed superlattices. a , Measured transient absorption spectra for Bi 3+ -alloyed BA 2 MA 2 Sn 3 I 10 superlattice devices. The devices exhibit clear changes in GSB and ESA intensities with and without the 10-V bias, suggesting bias-dependent hot carrier dynamics in Bi 3+ -alloyed BA 2 MA 2 Sn 3 I 10 superlattice devices. b , Extracted hot carrier relaxation lifetimes from a for Bi 3+ -alloyed BA 2 MA 2 Sn 3 I 10 superlattice devices. Their lifetimes show negligible changes with and without the 10-V bias, excluding the influence of the applied bias on the hot carrier lifetimes. Full size image The lifetime of hot electrons could be obtained by extracting and fitting relaxation time profiles at selected wavelengths (Fig. 5b and Supplementary Fig. 37 ). The hot electron lifetimes of superlattices (Bi 3+ -alloyed and Bi 3+ -free) are between about 0.35 and 0.36 ps, almost twice the value for Bi 3+ -doped polycrystalline thin films (approximately 0.19 ps) (Supplementary Fig. 37 ). Accordingly, the calculated hot electron diffusion length in superlattices is around 3.9 nm, much longer than the width of the Bi/Sn-I regions (about 0.6 nm) (Supplementary Fig. 37 ), suggesting that the hot electrons can readily travel across the Bi/Sn-I regions to the Sn-I regions.
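As a back-of-the-envelope consistency check on these numbers (assuming the standard one-dimensional diffusion relation; the paper's own estimate may use a different prefactor):

L_D = \sqrt{D \tau} \;\Rightarrow\; D \approx \frac{L_D^2}{\tau} = \frac{(3.9\ \mathrm{nm})^2}{0.35\ \mathrm{ps}} \approx 4.3 \times 10^{-5}\ \mathrm{m^2\,s^{-1}} \approx 0.43\ \mathrm{cm^2\,s^{-1}} ,

an effective hot-electron diffusivity of physically reasonable magnitude, confirming that a diffusion length of ~3.9 nm comfortably exceeds the ~0.6 nm width of the Bi/Sn-I regions.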
Furthermore, transient absorption spectra show an obviously enhanced GSB intensity in Bi 3+ -alloyed superlattices when the applied bias increases from 0 V to 10 V (Fig. 5a ). By contrast, the excited state absorption (ESA) signal decreases (Fig. 5a ). However, no such phenomenon can be observed in Bi 3+ -free superlattices or Bi 3+ -doped polycrystalline thin films (Supplementary Fig. 35 ), supporting the potential intraband relaxation in Bi 3+ -alloyed superlattices: the increased GSB signal intensity indicates a reduced number of electrons at the ground state in the valence band. Because the excitation setups for the 0-V and 10-V measurements are the same, this reduction of ground-state electrons in the valence band does not result from a stronger excitation. Rather, it suggests that the number of electrons relaxing from the conduction band to the valence band after excitation is reduced. However, because the hot carrier lifetime is minimally influenced by the electric field (Fig. 5b and Supplementary Figs. 37 and 38 ), those ‘reduced’ electrons can only transport to Sn-I regions but not to the ITO or ICBA layers, because of the direction of the applied electric field and the strong interfacial barriers, respectively (Supplementary Discussion 8 ). The decreased ESA intensities, owing to a reduced number of hot electrons in the conduction band, provide further evidence for the potential intraband relaxation. However, because of the same excitation setup and similar hot electron lifetimes for the 0-V and 10-V measurements (Fig. 5b and Supplementary Figs. 37 and 38 ), the obviously reduced hot electron population is not from a weaker excitation or more rapid relaxation but from other relaxation routes. The excited hot electrons have short lifetimes and can only undergo atomic-scale diffusion to Sn-I regions but not to the ITO or ICBA layers (Supplementary Discussion 8 ). Besides the unique intraband relaxation mechanism discussed here, other carrier transport processes might also contribute to the high V OC , such as superposition principles between parallel subcells 31 , sub-band absorption 32 , multiple exciton generation in atomic-scale structures 33 and ion diffusion 34 . Further research is required to gain a complete understanding of this phenomenon. Continued improvements in the device performance are possible with optimizations of the design of the electrode patterns, the resistivity of the top electrode and the band alignment of the ETL/hole transport layer.
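The roughly 9% spectral fraction invoked in the V OC argument above can be checked numerically. A minimal sketch, assuming a two-column text export of the ASTM G-173 AM1.5G reference spectrum (the file name is hypothetical, and whether the quoted 9% refers to irradiance or photon flux is not specified here):

```python
import numpy as np

# Hypothetical file: wavelength (nm), spectral irradiance (W m^-2 nm^-1)
wl, E = np.loadtxt("am15g.txt", unpack=True)

def band_fraction(lo_nm, hi_nm):
    """Fraction of total AM1.5G irradiance between lo_nm and hi_nm."""
    sel = (wl >= lo_nm) & (wl <= hi_nm)
    return np.trapz(E[sel], wl[sel]) / np.trapz(E, wl)

print(band_fraction(1000, 1200))   # expected to be roughly 0.09 per the text
```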
Methods Materials The materials in this study were used as purchased without further purification and included lead iodide (PbI 2 , 99.99%, Tokyo Chemical Industry), lead bromide (PbBr 2 , 98%, Alfa Aesar), hydrobromic acid (HBr, 48 wt% in water, Sigma-Aldrich), methylamine (CH 3 NH 2 , 40% in methanol, Tokyo Chemical Industry), tin(II) oxide (SnO, 97%, Sigma-Aldrich), hydroiodic acid (57% in water, Sigma-Aldrich), hypophosphorous acid (H 3 PO 2 , 50 wt% in water, Sigma-Aldrich), methylammonium iodide (99.9%, GreatCell Solar), n-butylammonium iodide (99.9%, GreatCell Solar), caesium chloride (CsCl, 99.9%, Sigma-Aldrich), silver chloride (AgCl, 99%, Sigma-Aldrich), antimony(III) chloride (SbCl 3 , 99%, Sigma-Aldrich), bismuth(III) iodide (BiI 3 , 99%, Sigma-Aldrich), ICBA (LT-S9030, Luminescence Technology), poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] (PTAA, LT-N168, Luminescence Technology), chlorobenzene (C 6 H 5 Cl, TCI America), anhydrous dimethylformamide (C 3 H 7 NO, 99.8%, Sigma-Aldrich), anhydrous gamma-butyrolactone (GBL, C 4 H 6 O 2 , 99%, Sigma-Aldrich), anhydrous dimethyl sulfoxide (C 2 H 6 OS, 99.9%, Sigma-Aldrich), isopropanol (IPA, C 3 H 8 O, 99.5%, Sigma-Aldrich) and methanol (99.8%, CH 3 OH, Sigma-Aldrich). Preparation of single-crystal perovskites MAPbBr 3 : flat and smooth centimetre-sized bulk MAPbBr 3 single crystals were prepared by solution-based growth 35 . The single crystal was used as the 3D perovskite substrate to grow the low-dimensional perovskite superlattice without any further treatment. MAPbI 3 : MAPbI 3 single crystals were prepared by solution-based growth 15 . The as-obtained crystals were ultrasonically cleaned in an anhydrous IPA solvent for 5 min. Then the crystals were crushed into powders for growth precursor preparation. Synthesis of low-dimensional perovskites 7.5 mmol of SnO was added into 10 ml of steaming hydroiodic acid (57 wt%) mixed with 2.5 ml of H 3 PO 2 (50 wt%) aqueous solution until the precursor solution became transparent yellow. Then the stoichiometric n-butylammonium iodide (3 mmol)/methylammonium iodide (6 mmol) methanol solution was injected into the precursor solution under stirring. Later, the beaker was transferred into a vacuum chamber to remove the dissolved oxygen and left standing for crystallization. Crystal flakes would appear after around 2 h. Then IPA was used to wash the crystals three times. Finally, the crystals were dried in vacuum and then directly dissolved in GBL to form the growth solution (0.5 M) for low-dimensional perovskites. For the Bi 3+ -alloyed superlattice, a 10% molar ratio of BiI 3 was dissolved in the growth solution at room temperature. The solution was filtered before use to remove any undissolved components. Preparation of precursors for mixed and double perovskites The mixed perovskite MAPb 0.5 Sn 0.5 Br 3 was prepared by mixing MABr, PbBr 2 and SnBr 2 with a 2:1:1 molar ratio in dimethylformamide (1.5 M). The double perovskite Cs 2 AgSbCl 6 precursor solution was prepared by directly mixing CsCl, AgCl and SbCl 3 with a 2:1:1 molar ratio in dimethyl sulfoxide (0.4 M). The as-prepared solution was stirred at 60 °C until the solution became clear. Finally, 0.4 M MAPbI 3 single-crystal powder was added to the solution to complete the precursor preparation, achieving a suitable lattice constant with minimal lattice mismatch between the substrate and the inorganic slab of the epitaxial layer.
Device fabrication MAPbBr 3 bulk crystals were used as the 3D substrates as their synthesis is well established. To further reduce the lattice mismatch, the mixed perovskite (or double perovskite) precursor was hot-cast onto the MAPbBr 3 crystal to form a smooth epitaxial layer, which was the actual substrate surface for growing the low-dimensional perovskites. The thickness of the smooth epitaxial layer did not influence the subsequent superlattice growth or device fabrication. Polyimide films (12.7 μm thick) were prepatterned (with an opening size of 1 μm × 1 μm) to serve as the growth mask by following a reported method 15 . Then a layer of Au was deposited by sputtering to serve as the bottom electrode. Later, PTAA solution (1.5 mg ml −1 in anhydrous toluene) was directly spin-coated onto the patterned polyimide/Au films at 2,500 rpm for 30 s, followed by annealing at 80 °C for 3 min. Then the growth substrate was laminated with the polyimide/Au/PTAA mask and then spin-coated by supersaturated mixed perovskite (or double perovskite) precursor at 4,000 rpm for 30 s, followed by annealing at 100 °C for 5 min. Subsequently, low-dimensional perovskite growth solution (0.5 M in GBL) was spin-coated on the substrate at 1,500 rpm for 60 s, followed by annealing at 180 °C for 2 min to form the superlattice absorber layer. After that, ICBA (20 mg ml −1 in chlorobenzene) was spin-coated onto the epitaxial layer, followed by annealing at 100 °C for 5 min. Finally, a layer of ITO was deposited by sputtering to serve as the transparent top electrode. The polycrystalline devices were fabricated by hot casting 22 . DFT calculations First-principles DFT calculations were performed using the Vienna Ab initio Simulation Package 36 . The projector augmented-wave pseudopotential was used for describing electron–ion interactions 37 . The generalized gradient approximation parameterized by Perdew, Burke and Ernzerhof was used to treat the electron–electron exchange-correlation functional 38 . The van der Waals functional DFT-D3 was applied to properly describe the long-range dispersion interactions between the organic molecules in the hybrid materials 39 . The hybrid functionals within the Heyd–Scuseria–Ernzerhof formalism with 70% Hartree–Fock exchange were used to calculate bandgaps for the Sn-based perovskites 40 , 41 . The wave functions were expanded in a plane-wave basis set with a cut-off energy of 400 eV. The structures for conventionally grown single-crystal Ruddlesden–Popper perovskites and epitaxially grown perovskites were built on the basis of experimental results of the lattices. The atomic positions were fully optimized until all components of the residual forces were smaller than 0.03 eV Å −1 . The convergence threshold for self-consistent field iteration was set at 10 −5 eV. Γ-centred 2 × 1 × 4 and 4 × 4 × 1 k -point grids were used for the superlattice and conventionally grown single crystals, respectively (an illustrative summary of these settings is given below). Owing to the limited computational resources, we could only simulate the n = 3 structure, but this does not affect the conclusions for the device ( n = 5) because the formation mechanism of the double-bandgap structure is the same. Morphology characterization All SEM images were taken using a Zeiss Sigma 500 microscope. All optical images were taken using a Zeiss Axio Imager optical microscope. Structure characterization X-ray diffraction was measured by a Rigaku SmartLab diffractometer equipped with a Cu Kα1 radiation source ( λ = 0.15406 nm) and a Ge (220 × 2) monochromator.
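Referring back to the DFT parameters above, the stated settings map onto standard VASP input tags; collecting them in a Python dictionary is purely illustrative, the HSE screening parameter is an assumed default and any tag not stated in the text is omitted:

```python
# Hypothetical INCAR settings mirroring the stated DFT parameters (VASP)
incar = {
    "ENCUT": 400,      # plane-wave cutoff energy, eV
    "EDIFF": 1e-5,     # self-consistent-field convergence threshold, eV
    "EDIFFG": -0.03,   # relax until residual forces are below 0.03 eV/Angstrom
    "GGA": "PE",       # Perdew-Burke-Ernzerhof exchange-correlation functional
    "IVDW": 11,        # Grimme DFT-D3 dispersion correction
    "LHFCALC": True,   # hybrid functional (Heyd-Scuseria-Ernzerhof form)
    "HFSCREEN": 0.2,   # HSE range-separation parameter, 1/Angstrom (assumed)
    "AEXX": 0.70,      # 70% Hartree-Fock exchange, as stated in the text
}
# Gamma-centred k-point grids: 2 x 1 x 4 (superlattice), 4 x 4 x 1 (single crystal)
```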
The STEM images were taken using a cryo-FEI 200 kV Sphera microscope. Samples for the STEM were prepared using a frozen focused ion beam (FEI Scios DualBeam FIB/SEM). The conventionally grown single crystal was difficult to image by STEM because, without an epitaxial substrate, the sample curled quickly owing to its instability in the scanning transmission electron microscope. X-ray photoelectron spectroscopy measurements were carried out using a Kratos AXIS Supra under 10 −8 torr chamber pressure. Optical characterizations Photoluminescence and time-resolved photoluminescence measurements were performed with a confocal microscope system by focusing a monochromatic 6-ps pulsed laser with a 4× objective lens (numerical aperture 0.13). Optical functions were measured by ellipsometry (J.A. Woollam M-2000D spectroscopic ellipsometer). Ultraviolet photoelectron spectroscopy measurements were carried out using a Kratos AXIS Supra with a He I (21.22 eV) source under 10 −8 torr chamber pressure. Ultraviolet–visible spectroscopy and absorption spectra were collected using a PerkinElmer Lambda 1050 ultraviolet–visible spectroscopy system under the reflection mode. Electrical characterizations Polarized photocurrent was measured with a polarizer. Time of flight was measured by extracting the decay time of the transient photocurrent to calculate the carrier mobility (a minimal sketch of this conversion is given below). An external bias of 0.5 V was used to power the devices with a resistor connected in series. Orientation-dependent transient photovoltages were measured with an oscilloscope (Agilent MSO6104A Channel Mixed Signal) to study the carrier lifetime. A pulsed laser with a pulse width of less than 10 −10 s was used as the light source. The EBIC was collected using an FEI Scios DualBeam microscope with a Mighty EBIC 2.0 controller (Ephemeron Labs) and a Femto DLPCA-200 preamplifier. Lateral Au electrodes were deposited by electron-beam evaporation for surface measurements; a prepatterned Au-coated polyimide film was used as the bottom electrode for cross-section measurements; the top surface was deposited with a layer of Au by electron-beam evaporation to serve as the top electrode. The EBIC and SEM images of the same region of interest were collected simultaneously. The samples were several micrometres in thickness, and EBIC could penetrate up to several micrometres into the samples 42 . Transient absorption spectroscopy was performed using an ultrafast transient absorption system with a tunable pump and white-light probe to measure the differential absorption through the sample. The laser system consisted of a regeneratively amplified Ti:sapphire oscillator (Coherent Libra), which delivered 4-mJ pulse energies centred at 800 nm with a 1-kHz repetition rate. The pulse duration of the amplified pulse was approximately 50 fs. The laser output was split by an optical wedge to produce the pump and probe beams, and the pump beam wavelength was tuned by an optical parametric amplifier (Coherent OPerA). The pump beam was focused onto the sample by a spherical lens at near-normal incidence (spot size of full width at half maximum (FWHM) about 300 µm). The probe beam was focused onto a sapphire plate to generate a white-light continuum probe, which was collected and refocused onto the sample by a spherical mirror (spot size of FWHM approximately 150 µm). The transmitted white light was collected and analysed with a commercial absorption spectrometer (HELIOS, Ultrafast Systems LLC).
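A minimal sketch of the time-of-flight conversion referenced above, together with the least-squares biexponential fitting of kinetic traces described next; the mobility formula assumes a uniform internal field, the initial guesses are illustrative, and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def tof_mobility(thickness_cm, bias_v, transit_time_s):
    """Time-of-flight carrier mobility, mu = d**2 / (V * t), cm^2 V^-1 s^-1.
    d: film thickness; V: applied bias (0.5 V in the text); t: transit time
    extracted from the decay of the transient photocurrent."""
    return thickness_cm ** 2 / (bias_v * transit_time_s)

def biexp(t, a1, tau1, a2, tau2):
    """Two-component exponential decay model for a kinetic trace."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_trace(t_ps, dA, p0=(1.0, 0.3, 0.3, 3.0)):
    """Least-squares biexponential fit of one transient-absorption trace."""
    popt, _ = curve_fit(biexp, t_ps, dA, p0=p0, maxfev=10000)
    return popt   # (a1, tau1, a2, tau2); taus in ps
```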
Pulse-to-pulse fluctuations of the white-light continuum were accounted for by a simultaneous reference measurement of the continuum. The pump wavelength was maintained at 610 nm with a pulse energy of 100 nJ (approximately 80 μJ cm −2 ). The pump and probe beams were linearly cross-polarized, and any pump light scattered into the detection path was filtered by a linear polarizer. The time delay was adjusted by delaying the pump pulse with a linear translation stage (minimum step size 16 fs). The individual component kinetic traces were fitted to biexponential decays by least-squares regression. Photovoltaic characterizations J–V measurements were carried out using a Keithley 2400 source meter under simulated air mass 1.5 irradiation (100 mW cm −2 ) and a xenon-lamp-based solar simulator (Oriel LCS-100). Temperature-dependent J–V measurements were performed with the sample in a liquid-nitrogen-cooled metal tank, in which one side was glass to allow illumination. The same configuration was used for both epitaxial and polycrystalline devices. EQE data were collected by illuminating the device under monochromatic light using a tungsten source (chopped at 150 Hz) while collecting the photocurrent by a lock-in amplifier in the alternating current mode. The 2D mapping of the thickness-dependent EQE was generated using the Contour-Color Fill function. Wavelength-dependent J–V measurements were carried out by applying a series of bandpass filters (FWHM about 150 nm) under the solar simulator to measure both the polycrystalline and epitaxial devices. Data availability All data are available in the manuscript or supplementary materials. | A perovskite solar cell developed by engineers at the University of California San Diego brings researchers closer to breaking the ceiling on solar cell efficiency, suggests a study published Aug. 10 in Nature. The new solar cell is a lead-free low-dimensional perovskite material with a superlattice crystal structure, a first in the field. What's special about this material is that it exhibits efficient carrier dynamics in three dimensions, and its device orientation can be perpendicular to the electrodes. Materials in this particular class of perovskites have so far only exhibited such dynamics in two dimensions; a perpendicularly orientated solar cell has never been reported. Thanks to its specific structure, this new type of superlattice solar cell reaches an efficiency of 12.36%, which is the highest reported for lead-free low-dimensional perovskite solar cells (the previous record holder's efficiency is 8.82%). The new solar cell also has an unusual open-circuit voltage of 0.967 V, which is higher than the theoretical limit of 0.802 V. Both results have been independently certified. The open-circuit voltage is a solar cell property that contributes to its efficiency, so this new solar cell "may have the potential to break the theoretical efficiency limit of current solar cells," said study senior author Sheng Xu, a professor of nanoengineering at the UC San Diego. "This might one day allow us to achieve higher efficiency with more electricity from existing solar panels, or generate the same amount of electricity from smaller solar panels at lower costs." The researchers hypothesize that the material's improved open-circuit voltage might be attributed to a new physical mechanism that they call intraband carrier relaxation.
The material's unique superlattice structure allows different components of the solar cell to integrate in the vertical direction, which creates an atomic-scale double band structure. Under light, the excited electrons could relax from one component (smaller bandgap region) to another component (larger bandgap region) before equilibrating to alter the Fermi levels in the superlattice solar cell. This contributes to a higher open-circuit voltage. This process is verified to be related to the built-in potential in the superlattice solar cell. The researchers also acknowledge that there are other possible mechanisms occurring in the unique superlattice structure that might be contributing to its unusually high open-circuit voltage. To create the new lead-free low-dimensional perovskite solar cell, the researchers used chemical epitaxy techniques to fabricate a superlattice crystal network. The network's structure is unique in that it consists of perovskite quantum wells that are vertically aligned and crisscrossed. This crisscrossed structure makes the material's carrier dynamics (electron mobility, lifetime and conduction paths in all three dimensions) more efficient than just having multiple quantum wells. These techniques can potentially be used to create perovskite superlattices of different compositions. "This perovskite superlattice demonstrates an unprecedented carrier transport performance that many researchers in the field have dreamed about," said Yusheng Lei, the lead author of this paper, who was a Ph.D. student in Xu's lab at UC San Diego and is now a postdoctoral researcher at Stanford University. The superlattice consists of a nanoengineered phase separation between Bi3+-alloyed and intact Sn-I regions in vertically aligned multiple quantum wells. This composition creates component variations at the atomic scale, which in turn enables hot carriers to quickly cross the multiple-quantum-well heterostructural interface before they relax, a feat that is usually impossible to achieve, the researchers explained. Here, it is possible because of the short diffusion length required to cross the heterostructural interface. "This work opens up a lot of new exciting potential for the class of lead-free low-dimensional perovskite materials," said Xu. Moving forward, the team will work on optimizing and scaling up the fabrication process to make the superlattice crystals, which is currently still laborious and challenging. Xu hopes to engage partners in the solar cell industry to standardize the process. | 10.1038/s41586-022-04961-1
Biology | Early green, early brown: Climate change leads to earlier senescence in alpine plants | Patrick Möhl et al, Growth of alpine grassland will start and stop earlier under climate warming, Nature Communications (2022). DOI: 10.1038/s41467-022-35194-5 Journal information: Nature Communications | https://dx.doi.org/10.1038/s41467-022-35194-5 | https://phys.org/news/2022-12-early-green-brown-climate-earlier.html | Abstract Alpine plants have evolved a tight seasonal cycle of growth and senescence to cope with a short growing season. The potential growing season length (GSL) is increasing because of climate warming, possibly prolonging plant growth above- and belowground. We tested whether growth dynamics in typical alpine grassland are altered when the natural GSL (2–3 months) is experimentally advanced and thus prolonged by 2–4 months. Additional summer months did not extend the growing period, as canopy browning started 34–41 days after the start of the season, even when GSL was more than doubled. Less than 10% of roots were produced during the added months, suggesting that root growth was as conservative as leaf growth. A few species showed a weak second greening under prolonged GSL, but the dominant sedge did not. A longer growing season under future climate may therefore not extend growth in this widespread alpine community, but will foster species that follow a less strict phenology. Introduction In extratropical alpine environments, low temperature confines the growing season to 6–12 weeks 1 , forcing high-elevation plants to complete their annual developmental cycle within a short time. Yet, the duration of the growing season has increased considerably over the past decades due to above-average warming in mountain regions 2 , 3 , which has led to advanced snowmelt 4 , 5 . By the end of the century, snowmelt is expected to occur up to one month earlier in the Swiss Alps 5 , and autumn warming may further prolong the growing season length (GSL). Early release from snow cover commonly advances flowering phenology in many alpine species 6 , 7 , but less is known about how a longer growing season affects the temporal dynamics of growth and senescence 8 , 9 . Remote-sensing studies highlighted that the greening of alpine plants tracks snowmelt within the current interannual variation 10 , 11 . When alpine vegetation responds to advanced snowmelt by growing earlier, the onset of senescence will determine how effectively the season is used for growth and resource acquisition 12 . However, leaf browning and senescence have received less attention in ecological studies than greening and growth 13 , and it is unclear how an early season start affects the onset of senescence in alpine grasslands. Early senescence in early starters may attenuate any growth-related effects in alpine and arctic vegetation 14 , 15 , 16 . If present, species-specific differences in the capability to delay senescence under favourable conditions may shape community composition in future. Aboveground growth and tissue maintenance commonly stop early to prepare alpine plants for winter, while roots are better screened from first frost events in autumn and could therefore continue growing. Roughly two-thirds of the world's grassland biomass is belowground 17 , and that fraction approaches 80–90% in arctic and alpine regions 1 , 18 .
Despite the importance of roots and the potential divergence between root and leaf phenology 19 , 20 , there is a lack of studies that explore the temporal dynamics of root growth in alpine grassland 21 . Unlike leaves, roots are hidden from remote sensing. Hence, our understanding of belowground processes relies entirely on local observations. Mini-rhizotrons are easily installed windows to examine root growth 22 , but processing the acquired images used to be extremely labour-intensive. Recently, machine learning algorithms have been developed that automatically distinguish between roots and soil in images 23 , allowing large datasets to be analyzed. Observations with high spatial or temporal resolution are needed to understand how above- and belowground phenology is linked 24 , 25 . This is crucial to understand current states and predict changes in alpine vegetation under climate warming. Here, we assessed whether alpine grassland is capable of extending growth and maintaining green tissues when subjected to a significantly longer GSL. We experimentally advanced the growing season by exposing monoliths of typical alpine grassland ( Caricetum curvulae , Fig. 1 ) to typical summer conditions in climate chambers, two to four months before the actual growing season started. We combined repeated censuses of above- and belowground growth parameters throughout the prolonged season and quantified leaf growth in additional field microsites with varying snowmelt timing. We hypothesize that (1) the start and rate of growth track the provided temperature conditions. We assume that (2) the onset of aboveground senescence depends on season start and plant species. Further, (3) we expect root growth to continue as long as soil temperatures are high enough. By combining new methods to analyze root phenology with robust aboveground measurements, our study offers insights into the controls of seasonal growth in alpine plant species. Fig. 1: Overview of the experimental setup. A Scheme of a monolith with natural vegetation and its original soil, equipped with a transparent rhizotron tube to scan root growth. Roots grow along the tube surface (see insert below). B The dominant species Carex curvula . Photo: C. Körner. C Elongation and browning of a single Carex leaf in the course of a growing season. D Monoliths exposed to premature (+4 m, +2 m) summer conditions in climate chambers. E Monoliths at the alpine site during actual summer (July); note the advanced browning compared to the surrounding vegetation. Full size image Results Aboveground growth We experimentally initiated the growing season in climate chambers, 70 and 134 days (termed ‘+2 m’ and ‘+4 m’, respectively) before the in-situ growing season started (Fig. 1 , Table 1 ). Plants experienced similar environmental conditions in the climate chambers as in the field during summer (Fig. 2A ), albeit with fixed diurnal conditions (see Methods section). Mean soil temperature during the first 50 days of the season amounted to 10.2 ± 0.1 °C in +4 m, 11.0 ± 0.1 °C in +2 m, and 10.7 ± 0.1 °C and 11.1 ± 0.1 °C in field plots of 2020 and 2021. Snowmelt in the field plots occurred around 3–4 weeks later in 2021 than in 2020, a year with an earlier season start than usual. In both monolith groups and the field plots, leaf elongation of the dominant sedge Carex curvula All. s.str. ( Carex hereafter) started right after the release from winter dormancy with exposure to temperatures >5 °C (Fig. 2B ).
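Given the automated root/soil segmentation mentioned above, the root-area metric reported below (mm 2 of root per cm 2 of image, as in Fig. 2D) follows directly from a binary mask; a minimal sketch, with the mask assumed to come from any thresholded segmentation output and all names illustrative:

```python
import numpy as np

def root_area_per_image_area(mask, px_size_mm):
    """Root area per unit image area (mm^2 cm^-2) from a mini-rhizotron scan.

    mask       : 2D boolean array, True where a pixel is classified as root
    px_size_mm : side length of one (square) pixel in mm
    """
    root_mm2 = mask.sum() * px_size_mm ** 2            # segmented root area
    image_cm2 = mask.size * px_size_mm ** 2 / 100.0    # 1 cm^2 = 100 mm^2
    return root_mm2 / image_cm2
```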
It peaked after 44 d in field plots (mean of 2020/2021) and continued 9.3 ± 2.3 d longer in +4 m and +2 m ( t 22 = 4.1, P < 0.001), a brief extension only, given the substantial increase in GSL (Fig. 3 ). Peak leaf length averaged 9.4 ± 0.4 cm and was not affected by GSL ( F 3 = 0.1, P = 0.93). Similar to leaf length, canopy greenness (assessed from photographs) increased right after the start of the season and peaked after 39 d in +4 m and field plots (no difference), but already after 34 d in +2 m (−4.5 ± 1.3 d, t 18 = 3.4, P = 0.002, Fig. 2C ). Hence, peak canopy greenness was not delayed by an earlier exposure to summer conditions. Table 1 Characteristics of each experimental group (+4 m, +2 m, field plots) and microsites Fig. 2: Impact of growing season length (GSL) on the timing of growth and senescence. Soil temperature and growth parameters with different growing season length, experimentally advanced in climate chambers (+4 m, +2 m, in 2021) and compared to field plots (2020, 2021). Day of the year is specified for the first day of each month below the x axis of A . GSL is indicated for each group at the top of A (dotted line during snowmelt). All growth data were scaled to 0–100% to ease comparison. A Daily mean soil temperature at 3–4 cm depth, close to the plants’ meristems. B Green leaf length of Carex curvula . C Canopy greenness of the whole plant community (2021). Dashed, vertical lines show the mean date for the peak. D Seasonal gain in root area per unit image area (mm 2 cm −2 , scaled to percent). Points indicate raw data and lines are GAM smoothers in B – D (lines: mean, error band: 95% confidence interval). Fig. 3: Timepoints related to growth and senescence for different growing season lengths (GSL). Peak green leaf length and senescence down to 50% browning for the dominant species Carex curvula , peak canopy greenness of the entire community and its decline to 50% and the onset of growth, highest growth rate, 50% and 80% seasonal growth for roots. GSL amounted to 238 d (+4 m), 174 d (+2 m), 109 d (field 2020) and 103 d (field 2021). Grey points show data for each monolith and field plot (8 monoliths for +4 m and +2 m and five field plots), colored points refer to mean ± SE (SE smaller than points are not visible). Leaves of Carex brown from the tip towards the base (Fig. 1C ), such that the remaining (decreasing) green leaf length reflects the progression of senescence. The time between peak leaf length of Carex and 50% leaf browning was 45 d in field plots and 11.7 ± 3.0 d longer in monoliths ( t 22 = 3.9, P < 0.001), with no difference between +4 m and +2 m. However, this difference was largely due to field plots in 2021, when browning took only 37 d compared to 52 d in 2020 ( t 22 = 3.2, P < 0.01, Fig. 3 ). Canopy greenness faded from 100% to 50% within 33 d, independent of GSL ( F 2 = 1.5, P = 0.24, Fig. 3 ). But unlike the monotonic leaf browning of Carex , the decline in canopy greenness of the entire community was partly reversible, and greenness temporarily increased again by 11% in +2 m and 36% in +4 m later in the season (Fig. 2C ). Although greenness peaked early and remained low during the rest of the season, it accumulated (integrated as area under the curve) to 49 ± 8.3% higher total greenness in monoliths than in field plots ( t 18 = 5.2, P < 0.001; no difference between +4 m and +2 m). Root growth We observed root growth as increases in root area using mini-rhizotron tubes (Fig.
1A ) and found that root growth started ca. 11 days after the onset of growing conditions in climate chambers (Fig. 3 ). Field plots of 2020 showed a similar delay as monoliths (8 days), but roots started 5.4 d earlier in the field in 2021 compared to monoliths ( t 21 = 2.4, P = 0.037). The majority of roots were produced within ca. two months after the start of the season: 80% of root growth was reached after 56 d in the field and after 73 d in +2 m and +4 m (Fig. 2D , Fig. 3 ). After that, root growth continued at a low rate, while +4 m even started to lose ca. 20% of its root area in the second half of the season (Fig. 2D ). Thus, the experimentally added 134 d did not translate into sustained root growth in +4 m, and only 10% of root growth resulted from the additional 70 d in +2 m. Maximum increment rates were reached after 30–41 d, coinciding with peak canopy greenness (Fig. 2 , Fig. 3 ). The total seasonal gain in root area was similar in all groups in 2021 (14–17 mm 2 cm −2 ), but significantly higher in the 2020 field plots (28 mm 2 cm −2 ; t 21 = 4.7, P < 0.001). This is presumably related to the time since tube installation (more unrooted space), as rooting had not yet reached steady state. Overall, root diameters did not exceed 2.1 mm and averaged 0.21 mm. Green cover and species-specific vigour index Total green plant cover decreased from ~65% during mid-season to <15% at the end of the season (Table 2 , Supplementary Table 1 ). While green cover of all species was lower at the end of the season, some species lost more greenness than others. Carex was the dominant species during mid-season (28–37%), but made up only 1.4–12.7% of total green cover at the end of the season. Leaves of Ligusticum entirely disappeared within ca. 3 months, reducing green cover to zero. Green cover of Anthoxanthum , Leontodon, and Potentilla decreased to a similar degree as total green plant cover, leaving their relative contribution unchanged. In contrast, Helictotrichon and Soldanella constituted a 7% bigger fraction of the remaining green cover at the end of the season than during the mid-season (Table 2 ). Photosynthetic vigour index values (see Methods, Eq. 1 ) declined by 38–100% towards the end of the season in all species, except for the grass Helictotrichon (−24 ± 13%, t 9 = 1.8, P = 0.16) and the forb Soldanella (−8 ± 15%, t 14 = 0.5, P = 0.62; Fig. 4 , Supplementary Table 2 ). Table 2 Total green cover mid-season and at the end of the season and the contribution of the most abundant species (mean ± SE) Fig. 4: Maintenance of photosynthetically active tissue in the seven most abundant species over the season. Species-specific photosynthetic vigour index (mean ± SE) was calculated from number, size, green area, and chlorophyll content of leaves, in monoliths (+2 m, +4 m) and field plots. Data are scaled to percent of the maximum per species and group. Values were assessed for the same 1–3 individuals per experimental unit (8 monoliths for +4 m, +2 m, and 5 field plots) across the season. Arrows on the right side highlight the difference between the maximum and the last value of the season within the corresponding group. Asterisks indicate P < 0.05 (two-sided t tests, detailed statistics in Supplementary Table 2 ). Full species names are in Table 2 . Illustrations provided by Oliver Tackenberg. Temperature effects in the field Due to low snow load and heavy storms in winter, snowmelt occurred exceptionally early at wind-exposed microsites in 2020.
This led to substantial differences in snowmelt date between the 24 microsites (40 × 40 cm), where we monitored leaf elongation and browning in Carex (Table 1 ). Across microsites, leaf elongation until peak leaf length took longer under earlier snowmelt ( F 2.2 = 236.8, P < 0.001, Fig. 5A , Supplementary Table 3 ). As a consequence, the variation in snowmelt timing was considerably larger (103 days) than the resulting variation in the date of peak leaf length, which encompassed 31 days only. Soil temperature close to the plants’ meristems explained 92% of this variation in the leaf elongation period ( F 2.7 = 79.1, P < 0.001, Fig. 5B ), with faster elongation rates under warmer conditions ( F 3.4 = 17.1, P < 0.001, Fig. 5C ). Nevertheless, peak leaf length (and the onset of browning) was reached 0.21 ± 0.04 days earlier per day of earlier snowmelt ( F 1 = 36.5, P < 0.001, R 2 = 0.61). Leaf browning to 50% of maximum green length took 26 days and was independent of the date of peak leaf length ( F 1 = 3.2, P = 0.90) and soil temperature ( F 1.6 = 1.0, P = 0.40, Fig. 5C , Supplementary Figure 1 ). In contrast to the experimental groups, maximum green leaf length varied across microsites but was not affected by snowmelt date or soil temperature (Supplementary Table 3 ). Fig. 5: Duration and rates of growth and senescence in microsites (2020). Leaf elongation (green) and browning (orange) duration of Carex curvula related to A the onset of the respective period ( n = 24 microsites for elongation and 20 for browning) and B to mean soil/meristem temperature ( n = 23 for elongation and 20 for browning). C Daily rates of elongation and browning (negative) in relation to soil temperature ( n = 43 measurement intervals for elongation and 22 for browning). D Exemplary data from one microsite illustrate how values in A – C were derived: elongation and browning period to 50% for A and B ; rates ( r 1–3 ) for C , calculated for individual measurement intervals (mean ± SE, n = 5 leaves). Temperature was averaged over the corresponding periods. Smoothed curves (lines: mean, error band: 95% confidence interval) and variance explained (%) of smoothers are indicated only when smoothing terms were significant ( F tests, P < 0.05, detailed statistics in Supplementary Table 3 ). DOY = day of year. Discussion We advanced the start of the alpine growing season and thus pushed its total length to extremes: our experiment more than doubled the available time for seasonal plant development and revealed an overarching autonomous control over growth and senescence. Whether the season was prolonged by two or four months, typical alpine summer conditions always initiated plant growth without major delay. However, early onset of growth was accompanied by early onset of senescence, halting above- and belowground plant growth even under ongoing, favourable summer conditions. Therefore, our findings challenge the widely assumed rise in future productivity as the thermal growing season lengthens due to climate warming. A close correlation between snowmelt and the onset of leaf greening and elongation has previously been observed in alpine 26 , 27 , 28 and arctic vegetation 29 , 30 . While climatic conditions for arctic and alpine plants differ in important aspects such as solar angle, photoperiod, precipitation and frost regime, they also share important similarities such as the short GSL 31 .
The tight link between the start of growing conditions and actual growth substantiates that seasonally snow-covered plants leave endodormancy far ahead of actual snowmelt. Nevertheless, it was speculated that an unusually short photoperiod may prevent growth in early spring 31 . But in contrast to flowering 6 , 7 , 32 , there is little evidence that vegetative growth of alpine plants is delayed by photoperiod in spring. We observed normal growth rates with a day-length of 14.5 h (1–1.5 months ahead of the natural season start) and previously even initiated typical spring growth using an 11.5-h photoperiod for the same vegetation type (unpublished data). A study across ca. 25 alpine sites and 17 years found no indication that photoperiod influenced leaf elongation after snowmelt 28 . Besides its signalling effect, a short photoperiod also encompasses lower levels of photon fluxes, possibly limiting carbon uptake. However, perennial alpine plants have large belowground reserves 33 and are not carbon-limited 34 , even under shade 35 . Following snowmelt, temperature directly influenced the rates of leaf expansion and growth and thus, affected the time needed to reach peak leaf lengths (or maximum canopy greenness) and to enter leaf senescence. A correlation between leaf growth and temperature is well established from physiological studies in various plant species (e.g., 36 , 37 ), including alpine ones 38 , 39 . Low ambient temperatures are typical when snow melts earlier in the year, prolonging the required time to complete leaf elongation. Consequently, a one-day advance in snowmelt was associated with only 0.2 days earlier peak leaf length in our microsite survey. This is similar to observations from an interannual remote sensing study in the Swiss Alps, where peak NDVI of alpine grassland shifted by 0.5 days per day of earlier snowmelt 10 . In our experiment, leaf elongation did not take substantially longer in monoliths than in field plots, despite the extremely advanced season start, most likely due to similar temperature after snowmelt. Hence, warmer spring temperatures under earlier snowmelt will enhance elongation rates until peak leaf lengths and advance the onset of senescence. Given that senescence started after a similar timeframe in field plots and monoliths, the latter experienced a comparatively long period with already senescing leaves. Moreover, the speed of leaf browning in Carex was 25% slower in monoliths compared to field plots. As leaf browning was equally slow between the two monolith groups, we do not anticipate that this difference between monoliths and field plots resulted from earlier snowmelt. Perhaps the maintained photoperiod or more stable temperature conditions could cause slower leaf browning. Temperature was not related to the speed of browning in our microsite survey, but a meta-analysis across 18 alpine and arctic sites of the International Tundra Experiment found that warming of 0.5–2.3 K significantly delayed leaf senescence by ca. 1 day 9 —a minor delay in relation to the projected advance in snowmelt 5 . It seems that numerous alpine plants evolved conservative controls over senescence to guarantee completion of the seasonal development cycle within the short growing season 14 , 40 , 41 .
To some degree, this is reflected in the annual biomass production: there is accumulating evidence that peak photosynthetic biomass (proxies like peak standing biomass, canopy height, or NDVI) of alpine grassland is independent of GSL and conserved across seasons 10 , 27 , 42 , 43 , 44 . We found that Carex reached the same maximum leaf lengths in all three experimental groups (+2 m, +4 m, and field plots). Apparently, seasonal biomass gain is shaped by factors other than GSL, such as temperature, water, and nutrient availability 1 , 45 . Similar to leaves, root growth was initiated by the onset of growing conditions but postponed by several days. We assume that roots depend on aboveground signals to initiate growth, most likely mediated by hormones, such as auxin produced in young leaves 46 . A delay between the onset of above- and belowground growth was also observed in different arctic plant communities, where leaves always started growing prior to roots 19 , 29 , 47 . Delayed root growth in arctic regions could be a consequence of more prevalent soil frost that takes longer to melt—especially under lower solar angles. At least in alpine species, roots grew substantially less below 3–5 °C and ceased to grow in the range of 0.8–1.4 °C 39 , 48 . In contrast to our hypothesis, root growth was not stimulated by extended summer conditions. After the initial growing phase of ca. 3 months, we found either no root growth or growth at only a minute rate. Thus, both above- and belowground phenology were mostly completed after the duration that corresponds to a natural growing season. It seems that root growth stops once aboveground demands for nutrients and water decline. Alternatively, root growth may be internally controlled, following similar phenological controls as observed in leaves 49 . Compared to leaves, root senescence is difficult to document and requires chemical or molecular tools 50 , 51 . Color changes such as the browning of leaves are not a specific characteristic of senescing roots. Also, the visual distinction between dead and living roots is error-prone. Therefore, only roots that started to structurally disintegrate were considered dead, which was true for 0.3% of root area in the manually annotated mini-rhizotron images (see Methods). Such a low number of dead roots two years after the installation of the rhizotron tubes matches the commonly low root turnover rates of several years in alpine grassland 1 , 45 . Even fine roots may reach a substantial age of up to 15 years, as determined by mean residence time of carbon 52 , although carbon in roots may be older than the roots themselves 53 , 54 . While all species responded opportunistically to a variable start of the season, most species were senescent during the long favourable second half of the season. The grass Helictotrichon and the snowbed plant Soldanella maintained a high photosynthetic vigour index and made up a bigger fraction of the remaining green cover at end- compared to mid-season, indicating that these species could benefit from a longer season in terms of assimilation. In contrast, senescence of the dominant Carex progressed fast and deterministically. In the long run, species with such a conservative phenology may become outcompeted when a longer GSL ‘opens’ a window for additional growth during late season 16 . Yet, a 32-year monitoring study of the same grassland type reported only very small changes in species composition over time, despite climate warming and a probable increase in GSL 55 .
The authors attributed this manifest stability of species composition to a lack of unoccupied sites in this densely rooted, late-successional grassland. Moreover, clonal proliferation is the rule in alpine grasslands and alpine species can be extremely persistent. In fact, individual clones of Carex curvula were found to live up to 5000 years 56 . Thus, species composition may remain stable for the coming decades or even centuries. Our results provide experimental evidence that early snowmelt due to climate warming will trigger early senescence in this alpine vegetation type, both above- and belowground. Therefore, growth and carbon uptake do not scale with growing season length but strongly depend on internal controls that reflect an evolutionary adjustment to a short growing season. It came as a surprise that a 2–4 month earlier start resulted in a long period of senescent and brown vegetation during the second half of the growing season. This may lead to mismatches with soil microbial activities and therefore, with the nutrient cycle. Such a conservative control over seasonal development will constrain adjustments to the current pace of environmental changes, and in the longer term, promote species with a more flexible timing of growth and senescence. Methods Vegetation The study was conducted on a Caricetum curvulae Br.-Bl., which is the most common alpine grassland community on acidic soils in the Alps 57 . This grassland is widespread in European alpine environments 58 and shares traits with alpine sedge mats around the world (e.g., Kobresia grassland on the Tibetan Plateau), having a similar growth form and short stature, and persisting predominantly clonally. The sedge Carex curvula (Fig. 1B ) is the dominant species, contributing around one third to total annual biomass production 35 , 59 . Grasses like Helictotrichon versicolor Vill. and forbs such as Potentilla aurea L. and Leontodon helveticus Mérat were also very abundant (Table 2 ). Leaves of Carex occur in tillers of 2–5 leaves that originate from belowground meristems. Every year, 1–2 (rarely 3) new leaves are formed that re-sprout in the following 2–3 years and then die off 59 . Growth and leaf elongation start rapidly after snowmelt (usually late June to early July) and reach a maximum before leaf senescence materializes as progressive browning from the leaf tip towards the base (Fig. 1C ). By the end of the season, the length of the green leaf part is reduced to 0.5–1.5 cm. Setup of the climate chamber experiment In July 2019, we excavated 16 circular patches of homogeneous vegetation (28 cm diameter, Fig. 1A ) to a soil depth of ca. 22 cm, referred to as monoliths. They were collected in the vicinity of the ALPFOR research station at 2440 m a.s.l. in the Swiss Alps (46.577°N, 8.421°E) and fitted into buckets with a perforated bottom to allow water to seep through (Fig. 1A ). Soil and root systems of the monoliths were not further disrupted during that process. A transparent, acrylic rhizotron tube (inner diameter: 5.0 cm; outer: 5.6 cm) was installed in every monolith, protruding from the soil by ~15 cm (wrapped in a layer of black and white tape to block light and reduce heat absorption) and tilted at an angle of 35–45° to the surface (Fig. 1A ). The lower opening of the tubes (in soil) was sealed with a rubber plug and the upper opening (outside of the soil) with a removable plastic cap. Polyethylene foam insulated the inside of the tubes.
During three summers, 2019–2021, the monoliths remained in sand beds in the natural, alpine environment next to the weather station of ALPFOR (Fig. 1E ). During alpine winter, monoliths were stored accessibly in a cold building at 1600 m elevation, where they were buffered from temperature fluctuations and screened from frost (Supplementary Figure 2 ). Monoliths were covered with cotton blankets and wooden boards to insulate plants, simulate snow pressure, and ensure complete darkness. This allowed a seamless transition to climate chambers before the start of the experiment, without exposing monoliths to freezing temperatures or sunlight. Monoliths had mean soil temperatures of 4.5 °C in the 2019/2020 winter and 3.5 °C in 2020/2021 (Oct–Feb, 3–4 cm soil depth, 3 HOBO TidBits, Onset Computer Corp., USA). In-situ, snow-covered soils rarely freeze due to the insulation by the snow pack and usually reach temperatures of around 0 °C. We do not expect that the slightly warmer soil affected temporal dynamics of plant growth, as roots and aboveground tissues remained visually dormant prior to the experiment. During a pilot study in April 2020, we exposed the monoliths to earlier summer conditions in climate chambers, but roots around the rhizotron tubes were not yet sufficiently established to permit root monitoring. Therefore, we postponed the experiment to 2021. Plants were moved to the climate chambers in February 2021, blankets still in place, and stored in the dark at 0 °C until the experiment started. Treatments The 16 monoliths were equally distributed between two walk-in climate chambers (195 × 130 × 200 cm, L × W × H), in which temperature, light, humidity, and air circulation were controlled (Fig. 1D , phytotron facility 60 , University of Basel). Light was provided by 18 LED modules per chamber, comprising four separately dimmable light channels (blue [B], green, red [R], infrared [IR]; prototypes by DHL-Light, Hannover, GER). We took care to reach B:R ratios of natural sunlight on a bright day (ca. 0.8 61 ) and set an R:FR ratio of ca. 1.4, which is above the range that characterizes vegetation shade. For summer conditions, photoperiod was set to 14.5 h, corresponding to early May in the central Alps (1–1.5 months prior to natural snowmelt), of which 12 h were at maximum light intensity (photon flux density of ca. 1000 μmol m −2 s −1 , Supplementary Figure 3 ). We set temperatures between 5 °C (night) and 14 °C (day) and logged soil temperature at 3–4 cm depth hourly throughout the experiment in six buckets per chamber (iButton DS1922L, Maxim Integrated Products Inc., USA). Monoliths in the first chamber (termed ‘+4 m’) were exposed to alpine summer conditions on 18 February 2021, ~4 months before the in-situ start of the growing season. The second chamber remained dark at 0 °C until 23 April 2021, when the same summer settings were applied (‘+2 m’ group). Monoliths were watered twice a week with 0.8 L of deionized water per monolith. On 5 July 2021, all monoliths were transported to the alpine research site, experiencing natural growth conditions for the rest of the season. As a comparison, we studied five (untreated) plots of an already existing field experiment during two seasons (years 2020 and 2021), located at the same elevation 3 km away from the origin of the monoliths 7 . Each of these plots contained two rhizotron tubes within close proximity (30–40 cm apart; installed in July 2019).
These in-situ plots became snow-free mid-June to early July and underwent natural growing seasons. As in +4 m and +2 m, soil temperature at 3–4 cm depth was logged once per hour in each field plot (HOBO TidBit, Onset Computer Corp., USA). Aboveground plant traits For Carex , aboveground growth and senescence were assessed by measuring green leaf length from the soil surface to the narrow zone of incipient browning (similar to 27 ). Each time, 5–10 leaves were randomly selected among the longest leaves. In +4 m, +2 m, and field plots, we measured 6–10 leaves, and in field microsites 5 leaves. To monitor the aboveground development of the entire community, we photographed the vegetation every 3–6 weeks in 2021 (DSLR D800, Nikon Corporation, JPN). From these images, we calculated canopy greenness to track temporal variation in plant phenology 62 : canopy greenness = G/(R + G + B), where R, G, and B represent the red, green, and blue channel, respectively. For leaf lengths and canopy greenness, the period of growth was defined as the time from the onset of summer conditions until the peak (100%) was reached. Senescence was defined as the period from the peak to 50% of leaf browning. We obtained a proxy for the photosynthetically active leaf area of seven species (Table 2 ). Three individuals (in the case of graminoids: tillers) per monolith and plot were marked at the start of the growing season in 2021. Every 2–5 weeks, we assessed the number of intact leaves and the length of the longest leaf for each individual. Also, we estimated the fraction of brown leaf area compared to the total leaf area and measured leaf chlorophyll content by fluorescence ratio (emission ratio of intensity at 735 nm/700 nm) in the biggest, healthy-looking leaf (CCM-300, Opti-Sciences, Inc., USA). From these data, we calculated the following photosynthetic vigour index: $$\text{photosynthetic vigour index} = \text{max leaf length} \times \left(1 + \sqrt{\text{number of leaves}}\right) \times \left(100\% - \text{brown leaf fraction}\right) \times \text{chlorophyll content}$$ (1) We used the square root of number of leaves to reflect the decrease in leaf size in each additional leaf beside the biggest leaf. To assess species-specific contributions to canopy greenness, green cover (0–100%) was estimated for each species two times: once during the season—after 11 weeks in +4 m and +2 m and after 7 weeks in field plots (in the field by eye)—and once at the end of the season (19 October 2021; from images). Root growth We used two identical root scanners to produce high-resolution images (Fig. 1A , 1200 DPI) of roots growing along the surface of the tubes (CI-602, CID BioScience, USA). The scanner is inserted into the rhizotron tube to produce a 360°-image (21.6 × 18.6 cm, W × H) that is focused on the outer surface of the transparent tube (Fig. 1A ). Each monolith and field plot was scanned throughout the growing season, twice a week during the first month and then at 7–21-day intervals. The average soil area and depth covered by the scans amounted to 330 cm 2 and 18 cm per tube, respectively. Root images were processed using Python 3 (v. 3.6.9). Vertical striping artifacts, frequent with such scanners, were removed 63 and the aboveground part of the images (sun-block tape) was replaced by black. Brightness and contrast were normalized for each image before all images per tube were aligned (planar shifts determined by phase correlation).
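Both image-derived quantities defined in this section reduce to a few lines of arithmetic. The sketch below is a minimal Python illustration, not the authors' published analysis code: the function and variable names are invented here, the RGB image is assumed to be supplied as a NumPy array, and the example values are made up.

```python
import numpy as np

def canopy_greenness(rgb):
    """Mean greenness index G / (R + G + B) over an RGB image
    (H x W x 3 array); all-black pixels become NaN and are ignored."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    with np.errstate(invalid="ignore", divide="ignore"):
        greenness = g / (r + g + b)
    return float(np.nanmean(greenness))

def vigour_index(max_leaf_length, n_leaves, brown_fraction, chlorophyll):
    """Photosynthetic vigour index of Eq. (1); brown_fraction is
    passed as a fraction (0-1) rather than a percentage."""
    return (max_leaf_length * (1 + np.sqrt(n_leaves))
            * (1 - brown_fraction) * chlorophyll)

# Illustrative values only: a 9.4-cm leaf, 3 intact leaves,
# 20% brown leaf area, chlorophyll reading of 300 (instrument units)
print(vigour_index(9.4, 3, 0.20, 300))
```

Because each series is later scaled to percent of its own maximum per species and group, the absolute units of the chlorophyll reading cancel out; only relative changes matter.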
In total, we acquired ~700 scans and each was split into 16 sub-images measuring 2550 × 2196 pixels. Two sub-images per monolith/plot (one of each tube in field plots) were randomly chosen for manual root annotation using the rhizoTrak 64 plugin (v. 1.3) for Fiji 65 . Of these 42 annotated images, half were used for training and half for validation of a convolutional neural network 66 . The training dataset was augmented with annotated images from another experiment at the site of the field plots (50 additional images, same size). Validation was performed on images from this study only. After 60 training epochs (i.e., training cycles through the entire dataset), 84% of all pixels predicted as root actually belonged to roots and 82% of the actual root pixels were identified as such (i.e., a precision of 84% and a recall of 82%). Subsequently, all original (full-sized) images were automatically segmented. Mean root area per image area (mm 2 cm −2 ) was determined using RhizoVision 67 (v. 2.0.3). Predicted root area correlated well with the actual root area in the manually annotated images ( R 2 = 0.99, Supplementary Figure 4 ). Dead-looking roots were found in 15 annotated images (0.3% of the total root area). Root data from one monolith were excluded because roots at the tube surface were scarce for unknown reasons. Microsites in the field We chose 24 microsites (40 × 40 cm) covering different snowmelt dates and tracked leaf elongation and browning of Carex . Microsites were situated within an area of ~3 km 2 around the research station (2283–2595 m a.s.l.) and were visited at irregular intervals during the growing season 2020. When microsites were measured twice within the same week (interval < 7 days), data were pooled and assigned to the mean date to reduce noise in the data. Each microsite was measured 5–10 times across the growing season (for an example, see Fig. 5D ). As we suspected temperature to be a major driver of plant growth, and to determine the exact snowmelt date, temperature sensors (iButton DS1922L) were installed 3 cm below the soil surface (close to Carex ’s meristems) in each microsite in September 2019, logging temperature every two hours until the end of the growing season 2020. Data analysis Data analyses were performed using the statistical programming language R 68 (v. 4.0.5). To ease comparability between temporal sequences of response variables, Carex leaf length, canopy greenness, root area and photosynthetic vigour index were scaled to percent of the maximum (0–100%) for each group and species. Further, root area was set to zero at the start of the season. We fitted generalized additive models (GAM, mgcv-package 69 ) with a thin-plate smoothing spline in the form ‘response variable ~ s(day of year)’ for each experimental unit. Number of knots (k) depended on sample size but was restricted to a maximum of eight, and the estimated degrees of freedom varied between 3.1 and 6.9. Goodness of fit of smoothed terms was high in all cases (mean R 2 > 0.88 for each response variable). The timepoints presented in Fig. 3 (e.g., 80% quantile of root growth) were interpolated using these GAMs, except for the day of 50% browning in green leaf length and greenness, which was linearly interpolated between the closest measurements. Integrated area under the smoothed curve was approximated on a daily interval for greenness. The start of root growth was defined as the first date of a moving window, spanning three adjacent measurement dates, whose linear regression slope exceeded 0.5% d −1 .
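The onset criterion for root growth described above translates directly into code. The authors performed their analysis in R; the following is a hedged Python equivalent with illustrative data, not the published script.

```python
import numpy as np

def root_growth_onset(days, root_pct, threshold=0.5):
    """Return the first day of a window of three adjacent measurement
    dates whose linear-regression slope (percent of seasonal root area
    gain per day) exceeds the threshold (0.5% per day in the paper);
    None if the threshold is never exceeded."""
    days = np.asarray(days, dtype=float)
    root_pct = np.asarray(root_pct, dtype=float)
    for i in range(len(days) - 2):
        slope = np.polyfit(days[i:i + 3], root_pct[i:i + 3], 1)[0]
        if slope > threshold:
            return days[i]
    return None

# Made-up example: (day of year, % of seasonal root area gain)
print(root_growth_onset([49, 53, 60, 67, 74], [0, 0.5, 4, 12, 25]))  # -> 53.0
```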
Means and standard errors (SE) were calculated for each group ( n = 7–8 in +4 m and +2 m, n = 5 in field plots). For visual simplicity, one GAM was fitted per group in Fig. 2 . For microsites, green leaf length of Carex was fixed at 0.5 cm at season start, which is about the amount of remaining green leaf previously observed after winter. Elongation and browning rates in microsites were calculated between consecutive measurements from season start to two weeks before the peak and from two weeks after the peak until one week following 50% browning, excluding the peak with intrinsically low rates. This yielded 43 elongation and 22 browning rates with intervals between measurements of 7–89 days. Corresponding mean soil temperature and growing degree hours (GDH) > 5 °C at 3 cm soil depth were calculated for each interval per microsite. Four microsites were not measured after 50% browning and were excluded from the analysis of browning periods. Also, one elongation period could not be related to temperature due to T-sensor failure. The start of the growing season was defined as the day when snow disappeared, indicated by soil temperatures >3 °C and diurnal temperature fluctuations. Significant snowfall on 25 September in 2020 and a cold spell after 15 October in 2021 marked the meteorological end of the growing seasons for all plots. Differences between treatments were calculated by fitting linear regressions and subsequently calculating post-hoc contrasts using the R-package ‘emmeans’ 70 . Model assumptions regarding residual distribution were verified visually. In the case of photosynthetic vigour index, maximum and last values were compared by fitting mixed effect models to account for repeated measures (package nlme). P values as well as F or t values with degrees of freedom based on the overall model are reported in text and in Supplementary Tables. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Data generated in this study and annotated images used to train the neural network have been deposited in the figshare repository under accession code 71 . Code availability R-codes are published with the data. | Global warming is leading to longer growing seasons worldwide, with many plants growing earlier in spring and continuing longer in autumn thanks to warmer temperatures. Now, however, plant ecologists at the University of Basel have been able to show that this is not the case for the most common type of alpine grassland in the European Alps, where an earlier start leads to earlier aging and leaves the grassland brown for months. Spring 2022 was extremely warm, giving many plants an early start to the growing season. And the Swiss Alps were no exception, with the snow cover melting early and the underlying vegetation being quickly roused into growth. Researchers at the Department of Environmental Sciences at the University of Basel have investigated how such an early start affects the plants' further development. For their study, they removed intact blocks of alpine grassland and placed them in walk-in climate chambers at Basel's Botanical Institute. Here, they left the vegetation to overwinter artificially in cold darkness, and then switched some of the blocks to summer conditions in February. A second group was left in the cold dark until April, before summer was introduced here as well. 
The researchers compared the growth and aging of these plants with their neighbors growing naturally at an elevation of 2,500 meters, which did not emerge from the snow until late June. A predetermined program The study, published in Nature Communications, shows how the majority of these alpine plants stopped growing and began the aging process after around five to seven weeks, regardless of when they had been roused. "We were amazed at how stubbornly the dominant plant species, an alpine sedge, started aging and turned brown after just a few weeks," says Dr. Erika Hiltbrunner, a scientist in Professor Ansgar Kahmen's research group at the University of Basel and head of the Alpine Research Station ALPFOR on the Furka Pass. Once the snow had melted at the end of June, the blocks were returned to their alpine location. "By the time the natural vegetation was in full growth, the plants with the earliest start to the season had already turned brown," adds doctoral student Patrick Möhl. A period of growth and aging with a predetermined length is advantageous in an alpine environment with a very short growing season. This autonomous control mechanism prevents the plants from remaining active any longer than this, even if the weather is exceptionally favorable. Winter, with its freezing temperatures and snowfall, can set in at any time from August onward. In addition to leaf growth and the "greening" of the vegetation, the researchers also studied root growth. They regularly inserted a digital camera into clear tubes below the ground to scan the root system with high precision. A new machine-learning algorithm detects the roots in the images and traces the otherwise hidden root growth in high resolution. The analysis revealed that the growth dynamics of the roots followed those of the leaves: root growth diminished after the initial two months, despite ongoing warm root temperatures. Brown alpine grasslands in summer A few plant species remained active for longer under favorable conditions, meaning that their internal clock is less strictly fixed to a certain length of the growing season. Such species could potentially become more common in the future, and displace today's dominant species. However, changes in the species composition of closed, alpine grasslands are likely to take decades or longer. Alpine grassland species reproduce primarily by vegetative means (clonally) and produce genetically identical relatives, which slows down the process of adapting to new environmental conditions through genetic change. In addition, the alpine sedge (Carex curvula) forms an extremely dense root system that leaves little space for shifts in species composition. As long as the existing vegetation is not displaced by more flexible species, alpine grasslands will therefore appear increasingly brown even in summer. | 10.1038/s41467-022-35194-5
Nano | Robust approach for preparing polymer-coated quantum dots | Jańczewski, D., Tomczak, N., Han, M.-Y. & Vancso, G. J. Synthesis of functionalized amphiphilic polymers for coating quantum dots. Nature Protocols 6, 1546–1553 (2011). | http://www.nature.com/nprot/journal/v6/n10/full/nprot.2011.381.html | https://phys.org/news/2012-03-robust-approach-polymer-coated-quantum-dots.html | Abstract Quantum dots (QDs) need to be attached to other chemical species if they are to be used as biomarkers, therapeutic agents or sensors. These materials also need to disperse well in water and have well-defined functional groups on their surfaces. QDs are most often synthesized in the presence of ligands such as trioctylphosphine oxide, which render the nanoparticle surfaces hydrophobic. We present a complete protocol for the synthesis and water solubilization of hydrophobic CdSe/ZnS QDs using designer amphiphilic polymeric coatings. The method is based on functionalization of an anhydride polymer backbone with nucleophilic agents. Small functional groups, bulky cyclic compounds and polymeric chains can be integrated into the coating prior to solubilization. We describe the preparation of acetylene- and azide-functionalized QDs for 'click' chemistry. The method is universal and applicable to any type of nanoparticle stabilized with hydrophobic ligands able to interact with the alkyl chains in the coating in water. Introduction We provide a protocol for the synthesis of functionalized water-dispersible semiconductor nanocrystals (quantum dots, QDs), using an amphiphilic copolymer. The copolymer is obtained from the reaction of poly(isobutylene- alt -maleic anhydride) with n -octylamine. The amphiphilic character of the copolymer stems from the hydrophobic alkyl chains grafted to the polymer backbone in combination with the hydrophilic carboxylic groups 1 . The advent of QDs substantially altered the molecular landscape of bioimaging labels and contributed significantly to advances in biology 2 , 3 , optoelectronics and sensing 4 . Applications for QDs in biology stem from their superior optical properties when compared with organic chromophores 5 . Variation of the synthetic protocol allows one to obtain different batches of QDs made of a single semiconductor material that emit anywhere from the blue to the infrared by tuning simple process parameters such as temperature or reaction time. This is in sharp contrast with traditional chromophores for which the emission properties of a dye are related directly to their molecular structure, and for which unique synthetic protocols are needed for each novel dye family. The narrow emission lines of QDs surpass those of organic chromophores, which usually have long tails toward the lower-energy part of the electromagnetic spectrum. The absorption spectrum of the QDs is broad and the absorption increases toward higher energies. Therefore, large Stokes shifts are possible, and multiplexed detection using QDs is relatively easy. In contrast, multicolor imaging with organic chromophores requires careful selection of different dyes, light sources and optical filters to minimize the overlap between the absorption and emission spectra of different chromophores. Finally, unlike organic chromophores, QDs have low rates of photobleaching, and the emission from QDs can be observed for minutes or hours under constant illumination 6 . 
High saturation levels also allow the use of high excitation power, a prized property for in vivo medical imaging through thick layers of tissues or skin 7 . New generations of QD probes for biological applications require the integration of many different functional groups at the nanoparticles' surfaces. For example, targeting tumors in vivo requires biocompatibility, stability under in vivo conditions, improved circulation times and the presence of specific molecules that bind to overexpressed biomarkers 8 . Preferably, diagnostic or therapeutic agents should be incorporated into every QD probe. A stable aqueous dispersion of QDs in water is the primary requirement for the application of QD in the medical and life sciences. Although many methods exist for the synthesis of QDs, high-quality QDs are often obtained through an organometallic route based on the pyrolysis of precursor compounds in the presence of a coordinating solvent. QD solubilization in water can be performed by ligand exchange reactions after 9 , 10 , 11 or during the nanocrystal synthesis 12 . For example, thiols are known to bind to the surface of QDs, and thus ligand exchange with bifunctional thiols having a hydrophilic head group is routinely performed 13 . Stringent stability of the ligand shell is required, which is important to prevent the possible release of toxic species, such as Cd 2+ , into the solution 14 . Unstable ligand shells also lead to aggregation and precipitation of the QDs, which makes the interpretation of fluorescence images of labeled cells or tissues difficult and ambiguous. Currently, there are several proven and well-researched protocols available for the preparation of water-soluble QDs based on dihydrolipoic acid 15 or PEG-based bidentate ligands for improved stability in biological media 16 . Other methods for QD solubilization without using ligand exchange have been explored, including the formation of a stable silica shell 17 , 18 and multidentate polymeric 19 or dendrimeric coatings 20 , 21 . In particular, promising solubilization methods are based on the encapsulation of hydrophobic QDs by amphiphilic molecules without perturbing the original ligands. The encapsulation is driven by hydrophobic interactions between these ligands, such as octyl chains of trioctylphosphine oxide (TOPO), and the hydrophobic parts of the amphiphile. Encapsulation of QDs using hydrophobic-hydrophobic interaction has been demonstrated with small molecules, such as amphiphilic sugar clusters 22 or phospholipids 23 , 24 , for which there are published encapsulation protocols for use in cellular and in vivo imaging 25 . Coating of QDs with amphiphilic polymers via hydrophobic interactions is a relatively easy and robust method for rendering the QDs water soluble 8 , 26 , 27 , 28 , 29 , 30 , 31 . In some cases, shell cross-linking with diamines is performed to increase QD stability in aqueous buffers 26 . The hydrophobic shell around the QDs formed by binding of the hydrophobic parts of the polymer to the hydrophobic QD ligands resist hydrolysis and enzymatic degradation, as shown by many research groups during in vivo imaging of cells and animal models 8 , 32 , 33 , 34 . Octylamide-modified poly(acrylic acid)-coated QDs were also used for labeling of subcellular components 8 , angiography 33 , in vivo imaging in mice 32 and cancer targeting and imaging 30 . Specific functionalization of the polymeric shell is often performed after transferring the QDs into water. 
The available protocols include, for example, coupling amine-containing biomolecules to COOH groups on the surface of the QDs using a water-soluble activator 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride (EDC) 8 . Although this method is very popular, it is expensive and suffers from problems related to the excess of EDC required for good yields and EDC-induced precipitation of QDs 35 . These drawbacks stimulated the exploration of new activating agents based on noncharged and nonpolar carbodiimides 36 . It should be noted that the colloidal stability of the bioconjugated QDs might change after functionalization because the physicochemical properties of the surface are altered. An alternative approach is to integrate the required functionalities directly into the multifunctional amphiphilic polymeric coating before solubilization 37 , 38 , 39 , 40 . In this report, we present a simple protocol for the synthesis of hydrophobically capped CdSe/ZnS nanocrystals and their transfer to water using designer amphiphilic polymers 41 . The major advantages of the presented method include the wide availability of the robust, commercially available and cheap polymeric precursors, as well as the simple functionalization of the polymer backbone based on anhydride ring opening with any nucleophilic agent. We also show that the desired functionality can be integrated into the polymeric shell at the time of the coating synthesis. Therefore, there is no need for functionalization reactions after solubilization in water. In addition, no cross-linking is needed to achieve highly stable QD dispersions. The robustness of the amphiphilic coating is demonstrated by hydrophilization of the QDs with amphiphilic polymers bearing attached polymeric chains such as PEG 1 , PNIPAM 42 and highly hydrophobic functional ligands 43 , 44 , 45 . The latter feature is unique to this method, as it allows one to obtain hydrophilic nanoparticles functionalized with inherently hydrophobic molecules that are not soluble in water, therefore precluding their attachment after the solubilization. The presented method can also be applied to other types of hydrophobic nanoparticles. Experimental design CdSe/ZnS QDs were prepared by pyrolysis of organometallic compounds in the presence of coordinating solvents (TOPO and n -hexadecylamine (HDA)). For the QD core synthesis, cadmium stearate and pure selenium powder in trioctylphosphine (TOP) were used as starting materials. The ZnS shell is synthesized immediately after the end of the synthesis of the core using diethyl zinc in TOP as the zinc source, and sulfur in TOP as the sulfur source. This procedure results in hydrophobic QDs covered with TOPO and hexadecylamine ligands. Double purification by centrifugation of the precipitated solution results in clear and transparent QD solutions. An additional centrifugation should be performed before transferring the QDs to water. The amphiphilic polymer was synthesized by the reaction of the poly(isobutylene- alt -maleic anhydride) backbone ( M w = 6,000 g mol −1 ) with n -octylamine. The number of hydrophobic n -octylamide groups can be tuned to achieve a desired ratio between the hydrophobic and hydrophilic units 1 . A typical amphiphilic polymer described here consists of 40% of repeat units bearing n -octylamide groups (index m in Fig. 1 ). Virtually any functional molecule having a nucleophilic anchor for reaction with the anhydride, e.g., amine −NH 2 or hydroxyl −OH, can be introduced at this stage of the protocol. 
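Before the detailed procedure, the quoted ~40% octylamide content can be checked against the amounts used in Steps 14–18 below. This back-of-the-envelope script is our own illustration: the molar masses and the density of n-octylamine are standard literature values, not figures taken from the protocol.

```python
# Stoichiometry check for the ~40% n-octylamide grafting ratio
# (2 g of polymer, 0.8 ml of n-octylamine; see Steps 14-18).
M_REPEAT = 56.11 + 98.06    # g/mol, isobutylene + maleic anhydride repeat unit
M_OCTYLAMINE = 129.25       # g/mol, standard value
RHO_OCTYLAMINE = 0.782      # g/ml, standard value

n_repeat = 2.0 / M_REPEAT                      # mol of anhydride repeat units
n_amine = 0.8 * RHO_OCTYLAMINE / M_OCTYLAMINE  # mol of n-octylamine

print(f"repeat units: {n_repeat * 1e3:.1f} mmol")   # ~13.0 mmol
print(f"n-octylamine: {n_amine * 1e3:.1f} mmol "    # ~4.8 mmol
      f"({100 * n_amine / n_repeat:.0f} mol% of repeat units)")
```

The amine amounts to roughly 37 mol% of the repeat units, consistent (within the precision of the volumes given) with the ~40% octylamide content stated above.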
Attachment of large hydrophobic domains is ultimately limited, as the size of the attached molecules may perturb proper folding of the amphiphilic coating around the nanoparticle. Figure 1: Synthesis of the amphiphilic polymers. The functional groups were attached by the nucleophilic ring opening of the anhydride in the presence of a base catalyst (DIPEA) under mild conditions. The functional groups (R) were attached to the amphiphilic polymer and incorporated in the QD coating. An underline indicates the attachment site. The respective polymers were described in the following publications: 2a , 2b in ref. 1 ; 2c , 2d , 2e in ref. 45 ; 2f in ref. 46 ; 2g , 2h in ref. 47 ; 2i , 2j in ref. 43 ; and 2k in ref. 42 . The requirements for the hydrophilic groups are not as stringent. For example, long hydrophilic polymeric chains of M w = 25,000 g mol −1 attached to an amphiphilic backbone were also successfully used to transfer the QDs into water 42 . The water transfer of the QDs was performed by mixing a water solution of the amphiphilic polymer and a tetrahydrofuran (THF) suspension of the purified nanocrystals. Wrapping of the QDs with the polymeric micelles was induced by evaporation of THF and later by slowly evaporating water at temperatures below 10 °C ( Fig. 2 ). Formation of QD/polymer assemblies can be clearly observed by monitoring the turbidity of the solution, as the aqueous phase becomes clear, transparent and luminescent under UV excitation. The resulting assemblies have a narrow size distribution, and the luminescent properties of the original nanocrystals are largely preserved ( Figs. 3 and 4 ). Figure 2: Scheme of the phase-transfer procedure. The hydrophobic parts of the amphiphile interact with the TOPO coating on the QD surface. The hydrophilic parts, in turn, are directed toward water and induce colloidal stability. Figure 3 Emission spectra of the initial QDs in chloroform and in water after the phase transfer using polymer 2i . Figure 4 Transmission electron microscope image of the QDs transferred into water and encapsulated by the amphiphilic polymeric micelle. Materials REAGENTS Cadmium oxide (CdO; Aldrich, cat. no. 202894) Selenium (Aldrich, cat. no. 229865) Sulfur (Aldrich, cat. no. 344621) Diethyl zinc (Et 2 Zn) 1M/heptane (Aldrich, cat. no. 406023) n -Hexadecylamine (HDA; Aldrich, cat. no. H7408) Trioctylphosphine (TOP; Aldrich, cat. no. 117854) Trioctylphosphine oxide (TOPO; Aldrich, cat. no. 223301) Stearic acid (Aldrich, cat. no. S4751) Chloroform (CHCl 3 ; Fluka, cat. no. 25693) Methanol (CH 3 OH; Aldrich, cat. no. 646377) Poly(isobutylene- alt -maleic anhydride) ( M w = 6,000, Aldrich, cat. no. 531278) THF (TEDIA, cat. no. TS 2123-001) n -Octylamine (Aldrich, cat. no. O5802) Diisopropyl ethyl amine (DIPEA; Aldrich, cat. no. 38320) 2-Aminoethyl methacrylate hydrochloride (Aldrich, cat. no. 516155) Allylamine (Aldrich, cat. no. 145831) Propargylamine (Aldrich, cat. no. P50900) 11-Azido-3,6,9-trioxaundecan-1-amine (Aldrich, cat. no. 17758) Millipore purified water (e.g., Milli-Q) Sodium hydroxide (NaOH) EQUIPMENT A glove box with O 2 and H 2 O levels maintained below 1 p.p.m. Metal-heating bath created by melting Woods metal alloy (Aldrich, cat. no. 244104) in a stainless steel pot. Caution Woods metal alloy contains lead and cadmium. Appropriate safety procedures should be followed.
Schlenk line Rotary evaporator equipped with a diaphragm pump and a condenser capable of working at −10 °C Centrifuge Freeze dryer Dialysis membrane with a molar mass cutoff of 6,000–8,000 Da (Fisher, cat. no. 21-152-4) Disposable syringes Millex PES membrane filter (Millipore) REAGENT SETUP S/TOP stock solution In a glove box, prepare a 1 M stock solution of S in TOP (S/TOP) by dissolving 0.321 g of S in 10 ml of TOP, and a 1 M stock solution of Se in TOP (Se/TOP) by dissolving 0.790 g of Se in 10 ml of TOP. TOP solutions of reagents can be stored for 12 months at room temperature (glove box) with no degradation. EQUIPMENT SETUP QD synthesis setup This consists of a three-neck 100-ml round-bottomed flask equipped with an air condenser, inert gas/vacuum adaptor, septum and a temperature probe. Polymer synthesis setup This consists of a three-neck 1,000-ml round-bottomed flask equipped with a water condenser, inert gas/vacuum adaptor, septum and a temperature probe. Procedure Reagent preparation and purification: synthesis of CdSe/ZnS core shell QDs Timing 5 h 1 At room temperature (25 °C), charge a 100-ml flask fitted with a thermocouple temperature sensor, air condenser, a vacuum adaptor and septum with CdO (0.105 g) and stearic acid (1 g; Supplementary Figs. 1 and 2 ). Dry the reagents under vacuum (0.01 mbar) for 15 min. 2 Switch to a N 2 atmosphere and increase the temperature to 240 °C by immersing the flask in a preheated Woods metal bath with an immersed thermocouple for bath temperature readings. Maintain the reagents at this temperature until all dark brown CdO is dissolved and a transparent colorless Cd stearate is obtained. Troubleshooting 3 Cool the reaction mixture down to room temperature by removing the metal bath, and subsequently add TOPO (12 g) and HDA (7 g). 4 Degas the reaction mixture at room temperature under vacuum (0.01 mbar) for 1 h. 5 Switch to an N 2 atmosphere and raise the temperature to 195 °C. Using a Woods metal bath helps maintain a stable temperature in the flask. 6 In a glove box, dilute 0.8 ml of the 1 M Et 2 Zn solution with 0.8 ml TOP and transfer the solution to a 5-ml disposable syringe. Subsequently, fill a 1-ml disposable syringe with 0.8 ml of the 1 M S stock solution and a 2-ml disposable syringe with 0.8 ml of the 1 M Se stock solution. 7 Inject all of the Se/TOP solution into the reaction flask in a single fast stroke and wait for 60 s. At this stage, tuning the temperature (180–230 °C) and reaction time (5–60 s) allows one to obtain QDs with luminescence emission from 520 to 640 nm. 8 Inject 0.2 ml of S/TOP, wait for 5 s and subsequently inject 0.4 ml of Et 2 Zn/TOP; then wait for 5 s and repeat the two steps three times (or until the solutions are depleted). 9 Cool the solution down to room temperature, divide it into four portions of equal volume and transfer them into four 50-ml centrifuge tubes. 10 Add 15 ml of CHCl 3 to each of the tubes and dissolve all contents by warming up to ∼ 40 °C using a water bath or a hair dryer. 11 Add 15 ml of methanol to each tube and centrifuge the content at 9,000 g for 30 min at room temperature (25 °C). Troubleshooting 12 Discard the supernatant and repeat Steps 10 and 11 using the same amounts of fresh solvents. 13 Dissolve the CdSe/ZnS nanoparticles in 20 ml of fresh CHCl 3 . Pause point A solution of QDs in CHCl 3 in a flask flushed with N 2 can be stored in the refrigerator (dark, 4 °C) for 12 months without marked loss of properties.
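As a quick sanity check on the Reagent Setup above, the quoted masses of sulfur and selenium do correspond to 1 M stock solutions if the final volume is approximated by the 10 ml of TOP. A short illustrative calculation using standard atomic masses (not part of the original protocol):

```python
# Molarity check for the chalcogen/TOP stock solutions (Reagent Setup).
ATOMIC_MASS = {"S": 32.06, "Se": 78.97}  # g/mol, standard values

for element, mass_g in [("S", 0.321), ("Se", 0.790)]:
    molarity = (mass_g / ATOMIC_MASS[element]) / 0.010  # 10 ml of TOP
    print(f"{element}/TOP: {molarity:.2f} M")  # both print 1.00 M
```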
Reagent preparation and purification: synthesis of amphiphilic polymer for coating of QDs Timing 16 h 14 Add 2 g of poly(isobutylene- alt -maleic anhydride; M w = 6,000 g mol −1 ) to a 1,000-ml three-neck flask fitted with a thermocouple temperature sensor, air condenser, vacuum adaptor and septum. 15 Degas the polymer under vacuum for 1 h and switch to an N 2 atmosphere. 16 Through the septum, add 500 ml of dry THF, 1.7 ml of DIPEA and 0.8 ml of n -octylamine. 17 Increase the temperature to 60 °C using an oil heating bath and stir vigorously for 1 h. 18 This step can be performed using option A or option B. Option A is specific to the example synthesis (polymer 2i , bearing acetylene functional groups). Option B is a general procedure. A Specific synthesis i Lower the solution temperature to 30 °C and add 0.23 ml of propargylamine. B General synthesis i Lower the solution temperature to 30 °C and add any functional RH component (as listed in Fig. 1 ). The typical amount of the functional unit added at this step is not higher than 25 mol% with respect to the repeating unit of the polymeric anhydride used in Step 14. For details regarding polymers with functional groups, please see corresponding references listed in Figure 1 . 19 Continue to stir at 30 °C for the next 12 h. 20 Evaporate THF and DIPEA using a rotavap for ∼ 1 h. Do not exceed 30 °C in the heating bath. The following stage of polymer purification was carried out without storing the material. Reagent preparation and purification: purification of polymer for coating of QDs Timing 1 week 21 Dissolve the solid residue in 40 ml of water and add 13 ml of 1 M NaOH. 22 Evaporate the sample until dry using a rotavap. Do not exceed 30 °C in the heating bath. 23 Dissolve the residue in 40 ml of water and transfer the solution into dialysis tubes (6,000–8,000 Da). 24 Immerse the dialysis tubes in 2 liters of deionized water and 1 ml of 1 M NaOH and dialyze for 12 h. 25 Dialyze three times against a diluted NaOH solution and three additional times against clean water, replacing the solution each time. 26 Freeze-dry the solution from the dialysis tubes in a freeze dryer. Pause point The polymeric backbone and n -octylamide groups are highly stable. The overall polymer stability is limited only by the functional groups introduced. The polymer described in Steps 18–26, featuring acetylene functional groups, can be stored as a dry crystalline powder in a closed vial flushed with nitrogen at 4 °C for 12 months without visible changes in the NMR spectrum. Suspension of hydrophobic QDs in water with an amphiphilic functional polymer Timing 6 h 27 Prepare an aqueous solution of the polymer obtained in Step 26 by dissolving 20 mg of dry polymer in 10 ml of water. 28 In a centrifuge tube, place 1 ml of the solution obtained in Step 13 ( ∼ 10 mg of QDs) and add 1 ml of CH 3 OH. 29 Centrifuge the content at 9,000 g for 60 min at room temperature (25 °C). 30 Discard the supernatant and dissolve the remaining solid in 20 ml of THF. 31 Transfer the solution to a 100-ml round-bottomed flask and add the solution prepared in Step 27. 32 Use a rotary evaporator to evaporate the THF at room temperature. Do not immerse the flask in the water bath. 33 After the evaporation of ∼ 80% of the initial THF amount, the remaining solution becomes turbid. Stop the evaporation and add 30 ml of water. Troubleshooting 34 Continue with a slow evaporation of the water from the flask until the solution becomes clear and transparent (usually 2–3 h).
Do not immerse the flask in the water bath; allow the flask to cool down to ∼5 °C (vacuum ∼0.5 mbar). Troubleshooting Critical Step During this operation, the residues of THF are evaporated and the amphiphilic polymer is tightly wrapped around the hydrophobic QD. A good indicator of progress is the solution turbidity, which disappears when larger aggregates are well suspended. 35 Filter the solution obtained in Step 34 through a 0.22-μm Millex PES membrane filter and subsequently through a 0.10-μm PVDF hydrophilic filter. Pause point In this form, the solution can be stored at 4 °C for 6 months without losing colloidal stability or fluorescent properties. 36 (Optional) The solution can be concentrated by removing water with a rotavap. Concentrating this solution to a volume of 1 ml (∼20 mg ml−1) will not affect the stability. Solution purification, removal of excess polymer (optional) Timing 2 h 37 Place 1 ml of the solution obtained in Step 35 or 36 in a 1.5-ml centrifuge tube. Subsequently, add 20 μl of 1 M NaOH. 38 Centrifuge the contents at 25,000 g for 120 min at room temperature. 39 Discard the supernatant and resuspend the solid residue in pure water by shaking. Pause point The solution in this form can be stored at 4 °C for 1–2 months without losing colloidal stability or fluorescent properties. Troubleshooting Troubleshooting advice can be found in Table 1. Table 1 Troubleshooting table. Full size table Timing Steps 1–13, Reagent preparation and purification—synthesis of CdSe/ZnS core shell QDs: 5 h Steps 14–20, Reagent preparation and purification—synthesis of amphiphilic polymer for coating of QDs: 16 h Steps 21–26, Reagent preparation and purification—purification of polymer for coating of QDs: 1 week Steps 27–36, Suspension of hydrophobic QDs in water with an amphiphilic functional polymer: 6 h Steps 37–39, (Optional) Solution purification, removal of excess polymer: 2 h Anticipated results This protocol describes the synthesis of hydrophobic CdSe/ZnS QDs and their solubilization in water using an amphiphilic polymeric coating bearing acetylene functional groups. Acetylene groups are among the basic reactants in click chemistry. We also expanded the protocol from this particular case to a general fabrication method for functional QD/polymer assemblies bearing many other chemical functional groups, as shown in Fig. 1 and its legend. The procedure listed in Steps 1–13 results in bright QDs emitting at 560 nm and having a narrow emission spectrum with a full-width at half-maximum of 26 nm. Changing the temperature and/or time of the synthesis, as described in this protocol, allows one to obtain nanocrystals with emission maxima ranging from 520 to 640 nm. The QD/polymer assemblies resulting from this protocol largely maintain the optical properties of the hydrophobic nanocrystals upon transfer into water. Transmission electron microscopy provides evidence that no aggregation is present. Following the particular conditions described in this protocol (560 nm QDs and an acetylene-functionalized polymer) results in almost quantitative transfer of hydrophobic QDs into water. The effective procedures described herein allow one to introduce a wide range of functional groups at the stage of the polymer synthesis. For example, we introduced a class of polymerizable groups at the surface of the QDs, which are of interest in the field of materials science 4 .
In principle, any water-stable chemical entity with nucleophilic character can be introduced by its reaction with the polymeric anhydride. The important advantages of the presented protocol are: the ability to introduce hydrophobic functional groups onto the surface of water-soluble QDs; the lack of a cross-linking step at the end of the procedure; easy control over the number of functional units introduced into the polymeric coating; control over the number of hydrophobic n-octyl chains, and hence over the hydrophilic/hydrophobic balance of the polymeric coating; and the lack of a carbodiimide (dicyclohexylcarbodiimide (DCC) or EDC) coupling step. The polymer-coated QDs display good colloidal stability in water, which is sufficient for many proposed applications, such as cell imaging. We carried out successful cell imaging of C-6 mammalian cancer cells (Fig. 5; ref. 1). Figure 5: Fixed and live samples of C-6 mammalian cancer cells imaged with red light–emitting QDs coated with polymer 2a; the cell nucleus was stained blue with 4′,6-diamidino-2-phenylindole. (a,b) The fixed cells are shown in a and the live cells are shown in b. Cells were incubated with QDs for 1 h and then washed to remove the excess of free nanocrystals. Images revealed that QD–polymer assemblies were internalized by endocytosis. Adapted from reference 1 with permission from Elsevier. Full size image
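Step 18B above caps the functional amine at 25 mol% relative to the anhydride repeat unit. A minimal sketch of that loading calculation follows, assuming the nominal repeat-unit mass of poly(isobutylene-alt-maleic anhydride) (≈154 g/mol) and literature values for the molar mass and density of propargylamine; these numbers are illustrative assumptions, not quantities prescribed by the protocol.

# Estimate the volume of neat functional amine for a target loading
# (mol% relative to the anhydride repeat unit of Step 14).
M_REPEAT = 154.2        # g/mol, isobutylene-alt-maleic anhydride repeat unit (assumed)
POLYMER_MASS = 2.0      # g, amount of polymer used in Step 14

def amine_volume_ml(target_molpct, amine_molar_mass, amine_density):
    """Volume (ml) of neat amine giving the target mol% vs. repeat units."""
    repeat_mol = POLYMER_MASS / M_REPEAT
    amine_mol = repeat_mol * target_molpct / 100.0
    return amine_mol * amine_molar_mass / amine_density

# Propargylamine (M = 55.08 g/mol, density ~0.86 g/ml) at the 25 mol% ceiling:
print(f"{amine_volume_ml(25.0, 55.08, 0.86):.2f} ml")   # ~0.21 ml

With these assumptions, the 25 mol% ceiling corresponds to roughly 0.21 ml of propargylamine for the 2 g polymer batch, consistent in magnitude with the 0.23 ml used in the specific synthesis of Step 18A.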
The new protocol describes the procedure in detail and aims to provide the benefits of the research team’s experience in QD synthesis to others whose interests might be focused more on applications rather than the development of synthetic methods. The synthesis of the polymer coating allows the incorporation of a wide variety of functional groups. “In the future we hope to work towards image guided therapy,” says Han. “QDs could be prepared that not only produce an image of cancer cells, but also release drugs at such a target.” | Nature Protocols |
Biology | Arctic shrub expansion limited by seed dispersal and wildfire | Yanlan Liu et al, Dispersal and fire limit Arctic shrub expansion, Nature Communications (2022). DOI: 10.1038/s41467-022-31597-6 Journal information: Nature Communications | https://dx.doi.org/10.1038/s41467-022-31597-6 | https://phys.org/news/2022-07-arctic-shrub-expansion-limited-seed.html | Abstract Arctic shrub expansion alters carbon budgets, albedo, and warming rates in high latitudes but remains challenging to predict due to unclear underlying controls. Observational studies and models typically use relationships between observed shrub presence and current environmental suitability (bioclimate and topography) to predict shrub expansion, while omitting shrub demographic processes and non-stationary response to changing climate. Here, we use high-resolution satellite imagery across Alaska and western Canada to show that observed shrub expansion has not been controlled by environmental suitability during 1984–2014, but can only be explained by considering seed dispersal and fire. These findings provide the impetus for better observations of recruitment and for incorporating currently underrepresented processes of seed dispersal and fire in land models to project shrub expansion and climate feedbacks. Integrating these dynamic processes with projected fire extent and climate, we estimate shrubs will expand into 25% of the non-shrub tundra by 2100, in contrast to 39% predicted based on increasing environmental suitability alone. Thus, using environmental suitability alone likely overestimates and misrepresents shrub expansion pattern and its associated carbon sink. Introduction The Arctic has warmed more than twice as fast as the global average and is projected to continue outpacing lower latitudes over the 21st century 1 . Rapid climate warming in recent decades and associated feedbacks have led to shifts in Arctic vegetation composition and abundance 2 , 3 , 4 . In particular, increased tundra shrub cover has been widely observed through field surveys 5 , aerial photographs 6 , 7 , and satellite remote sensing 8 , 9 . Pervasive shrub expansion can heat the atmosphere through decreased albedo and increased greenhouse warming induced by atmospheric water vapor, resulting from increased evapotranspiration and regional ocean feedbacks 10 , 11 , 12 . Locally, shrubs can warm the soil in the winter due to the insulation of accumulated snow, which deepens the active layer and accelerates soil carbon loss compared to non-shrub tundra 13 , 14 . Moreover, the distribution of shrubs also affects nutrient cycling, animal populations 15 , and wildfire risk and associated carbon emissions 16 , 17 . Understanding controls of shrub expansion patterns is therefore crucial to predicting climate feedbacks and ecological consequences of the rapidly changing Arctic. The area where temperature limits the growth of Arctic vegetation has been declining over the past decades 18 . Increasing temperature has been identified as a major control of shrub expansion 5 , 19 , 20 . However, the influence of temperature can be attenuated or reversed by soil moisture limitation, snow distribution, and topography 4 , 21 , 22 , 23 , 24 , 25 . The majority of observational-based studies focus on environmental factors and attribute the heterogeneity of shrub expansion to spatial variation of environment-based suitability, i.e., the likelihood of shrub presence given environmental conditions 26 . 
Based on space-for-time substitutions, some of those studies used derived spatial environment-vegetation relationships to assess future shrub expansion 5 , 20 , 21 , assuming stationary relationships between species and the environment. Although this approach has been found effective in predicting species distributions when ecosystems are in dynamic equilibrium, e.g., under a relatively stable climate or over a sufficiently long time scale 27 , it ignores transient responses and non-stationary ecological processes, thus causing errors in projected ecosystem change 28 , 29 , 30 . As the Arctic tundra deviates from the historical quasi-equilibrium due to climate change, evaluating the dynamic roles of plant migration and disturbance becomes especially relevant. With changes in growing conditions under a warmer climate, the successful establishment of new shrub patches depends on seed dispersal. Seeds can be dispersed through many biotic and abiotic vectors, such as gravity, animals, wind, ocean currents, and drifting sea ice, which result in dispersal ranges from meters to hundreds of kilometers 31 , 32 . Seed dispersal has been investigated in previous studies to estimate species range shifts 32 , 33 , 34 , and was found important in explaining shifts in vegetation composition at sites in alpine 35 , mediterranean 36 , 37 , and tropical biomes 38 , 39 . Nonetheless, the impact of seed dispersal on vegetation patterns is also compounded by suitable environmental niches and thus is not always the limiting factor 40 , 41 , 42 . In the Arctic, long-distance seed dispersal is a critical mechanism affecting species distribution. Genetic analysis has revealed repeated long-distance seed dispersal to a remote archipelago from multiple source regions since the last glacial retreat, while the resulting species distribution is predominantly shaped by temperature that limits environmental suitability for establishment 41 . In contrast to a relatively stable climate over the past several millennia, the fast-changing climate over recent decades might lead to shifts in the relative dominance of environmental suitability and seed dispersal in shaping Arctic shrub expansion. The dynamic trajectory of ecosystems may also be affected by disturbance. Although historically rare in Arctic ecosystems, wildfire is expected to become more intense and frequent as the climate warms 16 , 43 . In the short term, wildfires may cause seed and seedling mortality, which could limit post-fire recruitment. On the other hand, wildfires can alter post-fire vegetation trajectories by heating the soil during the fire, cause long-term soil warming by removing surface litter, and improve seedbed nutrient availability, thus facilitating germination and seedling establishment 44 , 45 , 46 , 47 , 48 . For example, modeling and site-based field studies have reported both enhanced expansion and diminished recovery of shrubs from four years to two decades after wildfires at several sites in Alaska 45 , 47 , 49 , 50 , 51 . How wildfires affect shrub expansion across large gradients of environmental suitability and seed dispersal has barely been evaluated using observations. We focused on shrub expansion from 1984 to 2014 across the northwestern region of North America covering Alaska and western Canada, i.e., the NASA Arctic-Boreal Vulnerability Experiment (ABoVE) core domain. 
Shrub expansion was detected using an annual dominant land-cover product derived from Landsat surface reflectance and trained over field photography and very high-resolution imagery 3 . Areas classified as shrubland include prostrate dwarf-shrub tundra and erect-shrub tundra 52 , dominated by species of birch (Betula spp.), alder (Alnus spp.), willow (Salix spp.), and other dwarf evergreen and semi-deciduous shrubs. Field surveys have detected expansion of these shrub communities 53 , 54 , 55 . Here, shrub expansion is defined as shrub dominance in tundra originally dominated by non-woody species at a 30 m scale. We collected topographic and regionally downscaled bioclimatic variables across the domain to identify the variables most informative for observed shrub expansion (“Methods”). Based on these selected topographic and bioclimatic conditions, averaged over three decades prior to 1984, we estimated environmental suitability for shrubs in 1984 using a random forest model. The same model was then used to calculate the environmental suitability for shrubs in 2014 using 1985–2014 average bioclimatic conditions. We analyzed whether changes in environmental suitability could explain shrub expansion from 1984 to 2014. Seed-arrival probability, a measure of spatial proximity to existing shrub patches, was calculated through convolution of seed-dispersal kernels over 1984 shrub cover images. Given the variety of dispersal mechanisms, we considered both short- and long-distance dispersal kernels, and optimized the range and shape parameters to fit observed shrub expansion. The year and location of fires were obtained from a Landsat-derived annual burn scar product. We investigated individual and compound impacts of environmental suitability, seed dispersal, and fire occurrence on observed 1984–2014 shrub expansion. Based on the resulting sensitivities and the projection of bioclimatic conditions and fire from climate models, we estimated shrub expansion in 2040, 2070, and 2100, and explored the relative importance of environmental suitability change, fire, and seed dispersal on projected shrub expansion. We found observed shrub expansion did not follow the pattern of environmental suitability but can only be explained by considering seed dispersal and fire. Shrub expansion under the projected climate is likely overestimated if neglecting the limitation of seed dispersal and fire. Results Environmental suitability of shrubs We first estimated environmental suitability for shrubs as of 1984. Among the 27 variables, the seven most informative variables, as identified based on variance inflation factors (Supplementary Table 1 and Supplementary Fig. 1 ), were three bioclimatic variables (degree-days above 5 °C, annual heat-moisture index, and precipitation as snow) and four topographic variables (elevation, slope, aspect, and topographic wetness index). Based on the random forest model, degree-days above 5 °C were the most important variable for environmental suitability in 1984, followed by elevation and precipitation as snow (Fig. 1a ). In terms of the direction and shape of response, higher environmental suitability was associated with higher degree-days above 5 °C, lower elevation, and higher precipitation as snow (Supplementary Fig. 2 ), although environmental suitability responds to these variables nonmonotonically across different combinations of climate and topographic conditions (Supplementary Fig. 3 ). 
The nonmonotonic responses could be partially attributable to the coexistence of multiple shrub species that have different optimal environmental conditions, and regional collinearity among bioclimatic and topographic conditions that may not be completely disentangled using a data-driven approach. Fig. 1: Environmental suitability did not explain shrub expansion between 1984 and 2014. Environmental suitability of shrubs in ( a ) 1984 and ( c ) 2014 estimated using topographic and bioclimatic conditions, including annual degree-days above 5 °C (DD5), annual degree precipitation as snow (PAS), annual heat-moisture index (AHM), elevation, slope, aspect, and topographic wetness index (TWI). The inset of ( a ) shows the relative importance of these factors on environmental suitability. b Fraction of shrub cover in 1984. d Fraction of new shrub area at a 4 km scale in 2014, i.e., non-shrub tundra in 1984 that became dominated by shrubs by 2014. The insets of ( b , d ) show the corresponding relationships with environmental suitability, where brighter colors represent higher dot density. The gray areas are dominated by land-cover types other than shrubs and non-woody plants and are excluded from the analyses. Full size image Environmental suitability in 1984 was higher in southwestern Alaska, eastern Seward Peninsula, and northern Northwest Territories of Canada, but lower on the northern edge of the North Slope of Alaska and northern Canada and mountainous regions such as the Brooks Range and the Mackenzie Mountains (Fig. 1a , reference locations noted in Supplementary Fig. 1c ). This pattern of environmental suitability was largely consistent with observed shrub distribution in 1984 (Pearson’s r = 0.92, Fig. 1b ), suggesting that environmental suitability alone explains shrub distribution under quasi-equilibrium conditions. Due to climate warming since 1984, the domain became more suitable in 2014 on average, especially in southwestern Alaska, eastern Seward Peninsula, the North Slope, and the southeast of the domain (Fig. 1c ). The area with high environmental suitability (>0.4) increased from 13.4% to 28.3% of the region. However, these highly suitable regions experienced limited shrub expansion (Fig. 1d ). Instead, hot spots of shrub expansion were found in the west of the North Slope and northwestern Canada. Across the entire domain, environmental suitability was much less related to new (i.e., expanded) shrub area in 2014 ( r = 0.35) than to existing shrub cover in 1984 ( r = 0.92). Accounting for different initial land-cover types of the non-woody tundra and change in environmental suitability also barely contributed to explaining the pattern of shrub expansion (Supplementary Fig. 4 ). These results suggest environmental suitability was not the major limiting factor of shrub expansion between 1984 and 2014. Impacts of seed dispersal and fire The best-fitting long-distance dispersal was represented using a fat-tail kernel ( c = 0.5 in Eq. (1 ) in “Methods”) with a range parameter of 39 km (Supplementary Fig. 5 ); and the short-distance dispersal was best represented using an exponential power kernel ( c = 1.5 in Eq. (1 ) in “Methods”), which is between an exponential kernel and a Gaussian kernel and has a range parameter of 600 m. Notably, in regions not disturbed by fire, the area fraction of shrub expansion was the most sensitive to short-distance dispersal, 9.5 times more sensitive than to environmental suitability based on the regression coefficients (Fig. 2a ). 
The weak negative sensitivity to long-distance dispersal likely arose from the trade-off between the sensitivities to short- and long-distance dispersal, which might not be precisely separated based on the data due to their spatial correlation ( r = 0.71). However, both long- and short-distance dispersal became significantly more important in facilitating shrub expansion after fire, compared to areas without fire (Fig. 2a ). The median of the sensitivities to long- and short-distance dispersal increased from −0.010 to 0.029 and from 0.062 to 0.083, respectively, for areas that experienced fire; whereas the sensitivity to environmental suitability reduced from 0.006 to −0.007. Across the entire domain, fire disturbance enhanced the likelihood of shrub expansion, especially in highly suitable areas with high seed-arrival probability (Fig. 2b ). Accounting for seed-arrival probability and environmental suitability improved the estimation accuracy of shrub expansion from r = 0.35 (Fig. 1d ) to r = 0.61 (areas with fire, 7.8 km 2 ) and r = 0.71 (areas without fire, 149.1 km 2 ) (Fig. 2b ). We note that the shrub expansion pattern can also be influenced by other factors unaccounted for, leading to a spatial correlation pattern unexplained by the considered covariates 56 . Nonetheless, additionally accounting for spatial correlation of shrub expansion patterns using a spatial regression (Eq. (4 ) in “Methods”) only slightly improved estimation accuracy but did not fundamentally alter the estimated sensitivities (Supplementary Fig. 6 ). These findings show that, over recent decades, dispersal has been a stronger limiting factor than environmental suitability on shrub expansion. Non-woody tundra locations becoming more suitable is not sufficient for shrub expansion to occur. By contrast, fire disturbance and proximity to existing shrub patches make shrub expansion more likely in the newly suitable areas. Fig. 2: Dispersal and fire explain shrub expansion during 1984–2014. a Sensitivity of new shrub area to environmental suitability and probabilities of seed arrival via short- and long-distance dispersal at locations with (red bars) and without (blue bars) fire. Colored bars denote the average and vertical black lines denote range of the 95% confidence interval of the regression coefficients across optimal dispersal kernel parameters ( n = 533). b Observed and estimated new shrub area, i.e., areal fraction at a 4 km scale, in 2014 at locations with (red) and without (blue) fire. The lines and shaded bands represent the medians and ranges of observed new shrub area (gray dots) for each bin of estimates with a width of 2%. Full size image Predicted shrub expansion Across the domain, 6.8% of non-shrub tundra in 1984 had become dominated by shrubs by 2014. Using our established relationships (Fig. 2 ), we estimated that the shrubified area fraction would increase progressively to 25.1% ± 3.0% by 2100 (Fig. 3a and Supplementary Fig. 7 ) corresponding to 253,651 ± 30,317 km 2 more shrub cover than in 2014, with the uncertainty originating from uncertainty in the empirically derived sensitivities (Fig. 2a ). The results suggest substantial shrub expansion in southwestern Alaska, southern and eastern Seward Peninsula, south and north of the Brooks Range, and northern Northwest Territories of Canada. The Victoria Island, western Nunavut, the Brooks Range, and the Mackenzie Mountains will likely experience limited shrub expansion. 
Note that the projected shrub expansion estimated here originates from the combined impacts of environmental suitability under projected climate change, seed dispersal, and projected burn area. The resulting pattern (Fig. 3a ) does not account for shrub loss due to competition, pests, and herbivores 3 , 25 , 57 , 58 , which are beyond the scope of our study. By contrast, without considering the impact of dispersal and fire, the relationship between shrub presence and increased environmental suitability alone (Fig. 1a, b ) predicts a higher fraction (38.9%) of non-shrub tundra in 1984 will become shrublands by 2100. Notably, the shrub expansion pattern predicted using environmental suitability alone shows substantial increase of shrub cover in the North Slope and northern Canada (Fig. 3b ), which is significantly different from the expansion if dispersal and fire limitations are considered (Fig. 3a ). Thus, relying on environmental suitability alone likely results in predictions that overestimate shrub expansion and misrepresent the spatial patterns. As a result, observational studies and models that project shrub expansion without considering the biological and physical constraints of dispersal and fire likely overestimate the 21st-century carbon sink in the Arctic tundra due to shrub responses to warming. Fig. 3: A quarter of non-woody tundra in 1984 will be colonized by shrubs by 2100 based on the climate scenario RCP8.5. Spatial pattern of new shrub area in 2100 predicted using ( a ) environmental suitability, dispersal, and fire, and ( b ) environmental suitability alone. The inset of ( a ) shows the domain average of new shrub area predicted with (green bars) and without (blue bar) considering dispersal and fire. The vertical black lines span the upper and lower boundaries due to the uncertainty of the estimated sensitivities ( n = 533, see “Methods” for details), i.e., vertical black lines in Fig. 2a . Using environmental suitability alone overestimates shrub expansion and misrepresents the spatial pattern. Full size image Relative impact of environmental suitability, fire, and dispersal We investigated the spatial patterns of projected changes in environmental suitability, burn area, and seed-arrival probability in 2100, and used synthetic scenarios, i.e., turning off one factor at a time, to disentangle their individual impacts on projected shrub expansion shown in Fig. 3 (see “Methods”). Compared to 1984, environmental suitability in 2100 increased in most of the region due to climate warming, although it decreased in some low elevation areas experiencing reduced snow inputs (Supplementary Fig. 8a ). Environmental suitability change had a spatially heterogeneous impact on shrub expansion, i.e., increasing in most areas but decreasing in parts of southwestern Alaska, the North slope, and northern Canada. Given the low sensitivity of shrub expansion to environmental suitability (Fig. 2a ) and the spatial compensation, the net result of environmental suitability changes was small averaged across the region (~1%) by 2100. Fires burned 3.2% of the area during 1984–2014, which was projected to increase to 7% during 2070–2100 based on the CMIP6 ensemble average. Burn area in CMIP6 models was mostly concentrated in the southeast (Supplementary Fig. 8b ), which had a limited impact on shrub expansion due to low initial shrub cover and thus seed-arrival probability in that region (Supplementary Fig. 8c ). 
In most of the tundra in Alaska and northern Canada, the burn area was projected to be less than 3% until the end of the 21st century. As a result, although areas disturbed by fire were found more likely to experience shrub expansion (Fig. 2 ), projected fire only contributed one percentage point out of the 25% shrub expansion by 2100, equivalent to the impact of projected environmental suitability change (Fig. 4b ). The spatial pattern of seed-arrival probability mostly followed existing shrub cover, i.e., high in the majority of Alaska and middle of the Northwest Territories in Canada, and low in coastal regions of Alaska, and southeast and north of the Northwest Territories (Supplementary Fig. 8c ). Dispersal largely explained shrub expansion in these regions (Fig. 4c ). Notably, although the Brooks Range had moderate seed-arrival probability (Supplementary Fig. 8c ), shrubs were found unlikely to expand into this region (Fig. 3 ) due to the limitation of low environmental suitability (Fig. 1c and Supplementary Fig. 8 ), highlighting the compound impact of environmental suitability and seed dispersal. Across the domain, seed dispersal explained 14% out of the 25% shrubified tundra from 1984 to 2100 (Fig. 4c ). Given the dominant control of seed dispersal on the spatial pattern of shrub expansion, omitting dispersal likely leads to mis-represented shrub cover change. Fig. 4: Shrub expansion over the 21st century was primarily attributed to seed dispersal. Shrub expansion driven by ( a ) environmental suitability change from 2014 to 2100, ( b ) projected fire, and ( c ) short- and long-distance seed dispersal. The insets show the domain average of shrub expansion from 2014 to 2100; the gray sections show shrub expansion in a scenario where the corresponding factor was turned off and the green sections represent its contribution. Full size image Discussion Climate warming has made the Arctic tundra substantially more suitable for shrubs over recent decades. However, we demonstrate that more suitable areas do not necessarily experience more extensive shrub expansion, which, instead, is found in areas close to existing shrub patches and/or disturbed by fire. In contrast to previous findings that suggest a stronger limitation of environmental suitability than seed dispersal over the past millennia 41 , the results here indicate dispersal processes limit shrub expansion over recent decades. Our findings provide observational evidence for the importance of seed dispersal in Arctic shrub expansion under rapid warming as the ecosystem deviates from its historical equilibrium. The fact that shrubs did not expand into all suitable areas implies shrub establishment might not have kept up with the pace of recent climate change. Although a high rate of environmental suitability change under the RCP8.5 scenario was used for prediction, in a contrasting scenario where environmental suitability is kept the same as in 2014 through 2100, shrub cover is still predicted to substantially increase across the domain (gray bars in Fig. 4a ). Therefore, shrubs will likely continue to expand across the Arctic tundra due to lagged response, even under a net-zero emission scenario, where global warming will be limited to 1.5 °C by 2050 and stabilized by 2100 59 . Complex ecosystem processes introduce uncertainties in predicted environmental suitability, identified shrub expansion, and the relationship to seed dispersal and fire disturbance. 
Uncertainties related to future environmental suitability can be influenced by future bioclimatic conditions exceeding the historical ranges used to establish their relationships with environmental suitability (Supplementary Fig. 9 ). For example, nutrient availability could increase much faster with temperature in a warmer climate due to an exponential increase of N mineralization rate and deepening active layer 45 . Thus, the data-driven environmental suitability model trained using historical data could underestimate future environmental suitability. The nonlinear impacts of bioclimatic conditions on environmental suitability (Supplementary Figs. 2 and 3 ) should also be interpreted as specific to the domain configuration and are subject to uncertainty as the climate shifts beyond the historical regime. Likewise, seed production and dispersal could also deviate from historical regimes due to biotic and abiotic interactions 60 , 61 . For example, a recent study suggested declined population of animals as dispersal vectors likely further limits long-distance dispersal of plants under future climate 62 , thus leading to underestimated dispersal limitation relying on empirical relationships. However, mechanistic models could contribute to addressing these uncertainties. Shrub expansion was identified based on remotely sensed shrub dominance at a 30 m scale and over 30 years, which is subject to land-cover classification errors especially with coexistence of multiple growth forms 3 . Notably, shrub expansion detected at a 30 m resolution may not precisely distinguish the underlying causes of seed dispersal from increased coverage of preexisting shrubs due to enhanced growth or new establishment from very local dispersal (within the 30 m pixel) 3 , 9 . However, because shrub growth and local seed production are expected to be controlled by environmental suitability, the low impact of environmental suitability supports seed dispersal being the dominant cause of shrub expansion across the domain. Moreover, as dispersal is estimated based on spatial proximity, our results highlight the importance of spatially connected processes. Although seed dispersal is the originating mechanism and has been recognized as a dominant spatial process controlling vegetation range shifts 35 , 36 , 37 , 38 , 39 , the impact of spatial proximity identified here might also be partially attributed to other spatially connected factors, such as active-layer depth, soil thermal-hydro conditions, surface litter, nutrient availability, and herbivore activities 63 , 64 , 65 , 66 . These factors may contribute to the spatial connectivity of shrub expansion via rates of seed germination and seedling establishment. However, these factors are unlikely to be the dominant explanation for the identified impact of spatial proximity, as they are partially related to environmental suitability via climate and topographic conditions, and they tend to exhibit smaller spatial ranges than those identified for long-distance dispersal (~40 km). Furthermore, because the remotely sensed land cover that we used cannot distinguish different shrub species while dispersal influences the expansion of each single species, the results based on the aggregation of all shrub species likely overestimate spatial proximity, thus providing conservative estimates of dispersal limitation. Field surveys and measurements are required to investigate the confounding roles of these spatial processes. 
Although fires can either enhance or inhibit plant regeneration depending on local soil and climate conditions 25 , our results suggest fire enhances shrub expansion where it does occur, consistent with paleoecological studies 44 and model simulations 45 , 67 across a large scale. The strong compounding effect of fire and seed dispersal on shrub expansion (Fig. 2 ) highlights that fire promotes shrub expansion especially at locations close to preexisting shrub patches, where seeds are more likely to arrive and establish after fire. Because the impact of dispersal can be attenuated by competition with preexisting species such as long-lived perennials in the tundra, the enhanced impact of dispersal by fire could be partially attributable to lowered competition through removal of preexisting species. Because fire is projected to be rare in the Arctic tundra based on climate models, we find it only marginally contributes to shrub expansion by 2100. However, a recent study suggests lightning in Arctic tundra, the dominant source of burning 68 , will significantly increase to a rate similar to that in boreal forests 16 . Lightning-driven fire increases could trigger positive vegetation-fire feedbacks, leading to twofold more burn area by 2100 than the ensemble average of CMIP6 models (Supplementary Fig. 10 ) 16 , 69 . Therefore, fire likely exerts greater impacts on shrub expansion compared to the estimates here when considering these positive feedbacks, though further investigation is required to constrain the large uncertainty (Supplementary Fig. 10 ). As post-fire regeneration strongly controls how much fire-induced carbon loss is attenuated 17 , future work on the strength and spatial heterogeneity of the feedback between fire and shrub expansion will contribute to a better assessment of the carbon budget in Arctic tundra. Our results highlight that predicting shrub expansion cannot be based on climate alone. Models that do not account for fire disturbance and seed dispersal may misrepresent future shrub cover. In Earth system models, seed production and dispersal have been recognized as the most under-developed vegetation demographic processes 70 . Representing seed dispersal, especially over long distances, requires seed transport across spatially discretized grids, which does not exist in most land models. Improved representation of seed dispersal therefore could contribute to better prediction of vegetation shifts. In addition to the factors investigated here, shrub expansion is also modulated by species competition for water, nutrients, and light 71 , 72 , 73 . Recent observational evidence suggests climate change can result in different competitive abilities across species due to divergent shifts of plant functional traits in Arctic tundra 74 , 75 , 76 , highlighting the potential of employing dynamic vegetation models that explicitly represent competition. These findings motivate improving process-based representations of seed dispersal, fire disturbance, and species competition in dynamic vegetation models as a fundamental component to better prediction of Arctic shrub change and corresponding climate feedbacks. Methods Datasets Shrub expansion was identified based on the Landsat-derived product of annual dominant land cover across ABoVE core domain from 1984 to 2014 77 . The dataset provides annual dominant plant functional type at a 30 m resolution derived from Landsat surface reflectance, very high-resolution imagery, and field photography across the ABoVE domain. 
We focused on pixels dominated by shrubs and non-woody species, i.e., excluding boreal forests. Pixels consistently classified as shrublands during 1984–1986 and 2012–2014 were considered as shrub cover in 1984 and 2014, minimizing the uncertainty of noise in the annual time series of land-cover types. New shrub area was identified as pixels that had been dominated by non-woody species in 1984 and became dominated by shrubs in 2014. We used climate and topographic conditions to estimate environmental suitability for shrublands. The climate conditions listed in Supplementary Table 1 came from ClimateNA 78 , a product locally downscaled for North America at a 4 km resolution. The historical data (1955–2014) was downscaled from the gridded Climatic Research Unit Time-series data version 4.02 (CRU TS4.02), and the projected data (2014–2100) was downscaled from CMIP5 under the RCP8.5 scenario, which is broadly consistent with recent trends of global carbon emissions 79 . The elevation data were obtained from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model Version 3 with a 30 m resolution 80 . Slope, aspect, and the topographic wetness index were derived from elevation using a terrain analysis software RichDEM 81 . Fire occurrence during 1985–2009 was identified using the annual product of differenced Normalized Burned Ratio (dNBR) at a 30 m resolution 82 , where the perimeters came from the Alaskan Interagency Coordination Center and the Natural Resources Canada fire occurrence datasets. Only fires that occurred at least 5 years prior to 2014 were considered to allow vegetation recovery. Burn area during 2015–2100 was obtained from CMIP6 projections under a SSP585 scenario. Datasets at coarse resolutions (climate and projected burn area) were resampled to a 30 m resolution using the nearest neighbor method. Estimation of environmental suitability We considered 23 bioclimatic variables (Supplementary Table 1 ) and 4 topographic conditions, i.e., elevation, slope, aspect, and topographic wetness index. To reduce the risk of overfitting, we identified the most informative variables based on the variance inflation factor, which measures the multicollinearity among the explanatory variables. Starting from all 27 variables, we excluded the variable with the highest variance inflation factor, i.e., the variable that can be best represented by a linear combination of other variables, one at a time, until the variance inflation factors of all variables are below the commonly used threshold of five 83 . This procedure ensured that the identified variables are most statistically informative in representing the bioclimatic and topographic conditions across the domain. Based on the identified variables, we applied ten species distribution models to estimate whether a pixel was shrubland. The models include generalized linear model, generalized additive model, boosted regression trees, classification tree analysis, artificial neural network, surface range envelope, flexible discriminant analysis, multiple adaptive regression splines, random forest, and maximum entropy, all applied using the biomod2 84 software in R 85 . Due to the large computation load, we trained each model using 5% of the pixels randomly selected within the target area, including both shrub and non-shrub pixels. The model accuracies were evaluated using all pixels across the entire domain. 
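The stepwise variance-inflation-factor screening described above can be sketched in a few lines. The sketch below assumes the 27 candidate variables are held in a pandas DataFrame X and uses statsmodels; the published implementation may differ in detail.

# Iterative VIF elimination: drop the predictor with the highest variance
# inflation factor until all VIFs fall below the threshold of five.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def prune_by_vif(X: pd.DataFrame, threshold: float = 5.0) -> pd.DataFrame:
    X = X.copy()
    while True:
        exog = np.column_stack([np.ones(len(X)), X.values])   # add intercept
        vifs = [variance_inflation_factor(exog, i + 1)        # skip the intercept
                for i in range(X.shape[1])]
        worst = int(np.argmax(vifs))
        if vifs[worst] < threshold:
            return X                                          # all VIFs below threshold
        X = X.drop(columns=[X.columns[worst]])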
The random forest model had the highest accuracy based on the true skill statistic and the area under the receiver operating characteristic curve. Therefore, the environmental suitability, i.e., the probability of a given 30 m pixel being shrubland given its bioclimatic and topographic conditions, was calculated using only the random forest model. We assumed a relatively stable climate prior to 1984. Thus, the average bioclimatic conditions during 1955–1984 were used to train the random forest model and assess environmental suitability in 1984. Environmental suitability in 2014, 2040, 2070, and 2100 was estimated by replacing the bioclimatic conditions with the averages over the previous 30 years, respectively. We evaluated the relative importance of each variable in explaining environmental suitability. We also analyzed the response curve of environmental suitability to the variation of each variable, and the response surfaces to the covariation of the three most important variables, while setting other variables to the domain average. Seed-arrival probability The impact of seed dispersal was quantified using the probability of seed arrival at a given location, calculated using kernel convolution over the spatial pattern of shrublands. The following exponential power kernel was used to describe the relationship between seed-arrival probability and distance to parent shrub patches. $$k(x_i)=\frac{b}{2\pi a^{2}\,\Gamma(2/b)}\exp\left(-\left(\frac{x_i}{a}\right)^{b}\right)$$ (1) where \(x_i\) is the distance to the i-th shrubland pixel within a maximum range, which is taken as the distance where the kernel function first falls below 10−9; a and b are the range and shape parameters, respectively. A large a represents high seed-arrival probability from distant parent shrub patches and vice versa. A large b denotes a fast decay of seed-arrival probability with distance and vice versa. The exponential power kernel is a generalized form of the Gaussian (b = 2), exponential (b = 1), and fat-tailed (b = 0.5) kernels, and has been widely used in the literature 86 . The seed-arrival probability of a given location \(\mathbf{s}\) is calculated as follows: $$p(\mathbf{s})=\frac{1}{P_{\max}}\sum_{i=1}^{N}k(x_i)\,\Delta x$$ (2) where N is the total number of shrub pixels within the maximum range; \(\Delta x=30\) m is the width of a pixel; and \(P_{\max}\) is the normalization factor such that \(p(\mathbf{s})=1\) when the location is completely surrounded by shrublands within the maximum range. The seed-arrival probability \(p(\mathbf{s})\) measures the spatial proximity to existing shrublands. Assuming the same seed production for all shrublands, the seed-arrival probability \(p(\mathbf{s})\) is also proportional to the expectation of the arriving seed amount. Based on the shrub cover in 1984, we calculated the seed-arrival probability during 1984–2014 using the above-described algorithm implemented with the multidimensional image processing module scipy.ndimage 87 in Python. As seeds can arrive via multiple dispersal vectors, the seed-arrival probability results from the integral of multiple dispersal kernels with distinct ranges and shapes 32 . To parsimoniously account for various dispersal vectors, we considered the integral of a short-distance dispersal kernel and a long-distance dispersal kernel.
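A minimal Python sketch of Eqs. (1) and (2) is given below, using the scipy.ndimage convolution cited above. Passing the truncation radius r_max explicitly and using direct convolution are simplifications assumed here for clarity; the paper instead truncates where the kernel first falls below 10−9, and an FFT-based convolution (e.g., scipy.signal.fftconvolve) is preferable for the large long-distance kernels.

# Seed-arrival probability: convolve a binary shrub map with the
# normalized exponential power kernel of Eq. (1).
import numpy as np
from scipy.ndimage import convolve
from scipy.special import gamma

def exp_power_kernel(a, b, r_max, dx=30.0):
    """Discretized kernel k(x) of Eq. (1), truncated at radius r_max (m)."""
    n = int(np.ceil(r_max / dx))
    yy, xx = np.mgrid[-n:n + 1, -n:n + 1] * dx
    r = np.hypot(xx, yy)
    k = b / (2.0 * np.pi * a**2 * gamma(2.0 / b)) * np.exp(-(r / a) ** b)
    k[r > r_max] = 0.0
    return k

def seed_arrival(shrub_map, a, b, r_max, dx=30.0):
    """p(s) of Eq. (2); the Delta-x factor cancels after dividing by
    P_max = k.sum() * dx, so p = 1 for a pixel fully surrounded by shrubs."""
    k = exp_power_kernel(a, b, r_max, dx)
    return convolve(shrub_map.astype(float), k, mode="constant") / k.sum()

# Example with the best-fitting short-distance parameters reported in the Results:
# p_short = seed_arrival(shrub_1984, a=600.0, b=1.5, r_max=3000.0)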
For short-distance dispersal, we evaluated all combinations of 100 m \(\le a \le\) 1000 m with an interval of 100 m and \(0.5 \le b \le 2.5\) with an interval of 0.5. For long-distance dispersal, we evaluated all combinations of 1 km \(< a \le\) 60 km with an interval of 2 km and \(0.5 \le b \le 2.5\) with an interval of 0.5. Using a larger interval of 2 km improves optimization efficiency for the long-distance dispersal kernel. We identified the parameters that resulted in the best 5% accuracy in estimating shrub expansion during 1984–2014. The uncertainty of the estimated shrub expansion sensitivity using different kernel parameters across the best 5% was quantified. The relative weights (sensitivities) of the two kernels were identified as those that best explain shrub expansion patterns, using Eqs. (3) and (4). The spatial pattern of the combined kernel density (Supplementary Fig. 7) shows an estimate proportional to seed-arrival probability resulting from both short- and long-distance dispersal. Sensitivity of observed shrub expansion to control factors Observed shrub expansion was quantified as the fraction of 30-m pixels that were not identified as shrublands in 1984, i.e., non-shrub tundra, but became shrublands in 2014 within each 4 km by 4 km grid cell. Environmental suitability and seed-arrival probabilities through short- and long-distance dispersal were aggregated by averaging to a 4-km scale and used to explain the spatial pattern of shrub expansion using the following multivariate linear regression. $$y(\mathbf{s})=a_0+a_1\,\mathrm{ES}(\mathbf{s})+a_2\,\mathrm{LD}(\mathbf{s})+a_3\,\mathrm{SD}(\mathbf{s})+\delta(\mathbf{s})$$ (3) where \(y(\mathbf{s})\) is the new shrub area; \(\mathrm{ES}(\mathbf{s}), \mathrm{LD}(\mathbf{s}), \mathrm{SD}(\mathbf{s})\) are the z-scores of environmental suitability, long-distance dispersal, and short-distance dispersal at location \(\mathbf{s}\), respectively; and \(\delta(\mathbf{s})\) is the noise. The sensitivities \(a_1, a_2, a_3\) were estimated separately for grid cells with and without fire occurrence. The 95% confidence intervals of the sensitivities were estimated. In addition to the considered explanatory variables, shrub expansion may also be influenced by other unconsidered confounding factors that lead to a spatial correlation pattern independent of that induced by dispersal. To account for such spatial correlation, we also conducted a spatial regression, i.e., $$y(\mathbf{s})=b_0+b_1\,\mathrm{ES}(\mathbf{s})+b_2\,\mathrm{LD}(\mathbf{s})+b_3\,\mathrm{SD}(\mathbf{s})+w(\mathbf{s})+\sigma(\mathbf{s})$$ (4) where \(w(\mathbf{s})\) represents a spatial correlation structure of shrub expansion following a stochastic Gaussian process, which has zero mean and is independent of the considered explanatory variables; \(\sigma(\mathbf{s})\) is an uncorrelated error. The sensitivities to environmental suitability and dispersal, \(b_1, b_2, b_3\), were jointly estimated with \(w(\mathbf{s})\) and \(\sigma(\mathbf{s})\) using the spBayes software 88 in R 89 , separately for grid cells with and without fire occurrence.
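A minimal sketch of the non-spatial model in Eq. (3) follows, assuming 1-D arrays of the 4-km new shrub fraction and covariates plus a boolean fire mask; these array names and the use of statsmodels are illustrative assumptions. The spatial model of Eq. (4) additionally includes the Gaussian-process term \(w(\mathbf{s})\), whose covariance model and priors are described next.

# Fit Eq. (3) separately for grid cells with and without fire; returns the
# sensitivities a1-a3 and their 95% confidence intervals.
import numpy as np
import statsmodels.api as sm

def zscore(v):
    return (v - v.mean()) / v.std()

def fit_sensitivities(y, es, ld, sd, fire_mask):
    results = {}
    for label, m in (("fire", fire_mask), ("no_fire", ~fire_mask)):
        X = sm.add_constant(np.column_stack(
            [zscore(es[m]), zscore(ld[m]), zscore(sd[m])]))
        fit = sm.OLS(y[m], X).fit()
        results[label] = {"coef": fit.params[1:],           # a1, a2, a3
                          "ci95": fit.conf_int(alpha=0.05)[1:]}
    return results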
An exponential covariance model and the following prior distributions were used based on the empirical variogram 90 : \(\phi \sim\) Uniform (30 m, 2000 m), \(\sigma^2 \sim\) Inverse Gamma (2, 0.05), and \(\tau^2 \sim\) Inverse Gamma (2, 0.05), where \(\phi, \sigma^2, \tau^2\) are the range parameter, the partial sill (covariance), and the nugget effect, respectively. Due to the large computing load, the spatial regression was trained using a randomly selected 5% of the 4 km pixels and tested on another randomly selected independent set of the same size. The correlation r was estimated for the test set. Sensitivities were calculated as the mean of 5000 posterior samples retained after convergence (the first 1000 samples were discarded). Prediction of shrub expansion by 2100 Based on the estimated empirical relationships, we predicted shrub expansion by 2040, 2070, and 2100. For each 30-year period, environmental suitability was estimated using topographic conditions and the averages of projected bioclimatic conditions. Seed-arrival probability was estimated using shrub cover at the start of the period. Fire was assigned to each 30 m pixel with a probability given by the cumulative burn-area fraction, ensuring that the aggregation from the 30 m scale is consistent with the climate model projection at a coarser scale. Each non-shrub pixel at the start of the 30-year period was changed to shrubland at the end with a probability calculated using the estimated sensitivities to its environmental suitability, seed-arrival probability, and fire occurrence. We further quantified the uncertainty of projected shrub expansion due to the uncertainty in the estimated sensitivities. Instead of a computationally expensive bootstrapping approach, we used the lower and upper boundaries of the 95% confidence intervals for all the regression coefficients in each 30-year period, which provided an overestimate of the uncertainty range of projected shrub expansion. To disentangle the impacts of environmental suitability change, seed dispersal, and fire on the projected shrub expansion, we used synthetic scenarios in which each of the three factors was turned off, i.e., environmental suitability kept the same as in 2014, zero seed-arrival probability, and no fire occurrence, respectively. The difference between the synthetic scenarios and the actual projection illustrated the contribution of the corresponding factor to the projected shrub expansion. To diagnose potential bias and spatial patterns of predicted shrub expansion using the environmental suitability-based approach of previous studies, the shrub expansion by 2100 predicted here was also compared to the prediction without considering dispersal and fire, i.e., by applying 2100 environmental suitability to the relationship established between shrub presence and environmental suitability alone in 1984. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All datasets used in this study are publicly available. The annual land-cover product is available at . The historical and projected climate conditions and the application to downscale them (ClimateNA) are available at . The ASTER DEM is available at . The dNBR product is available at . Processed data used to produce the main figures are available at 91 . Code availability The source code used to calculate environmental suitability and seed dispersal is publicly available at 92 .
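The stochastic 30-year update described above can be sketched as follows; the array names, the linear probability model built from the Eq. (3) coefficients, and the clipping of probabilities to [0, 1] are illustrative assumptions rather than details taken from the paper.

# One 30-year transition at 30 m resolution: draw fire from the projected
# cumulative burn fraction, then convert non-shrub pixels stochastically.
import numpy as np

def step_30yr(shrub, es_z, ld_z, sd_z, burn_frac, coef, rng):
    """coef[k] = (a0, a1, a2, a3) of Eq. (3) for k in {"fire", "no_fire"}."""
    fire = rng.random(shrub.shape) < burn_frac
    p = np.empty(shrub.shape)
    for key, mask in (("fire", fire), ("no_fire", ~fire)):
        a0, a1, a2, a3 = coef[key]
        p[mask] = a0 + a1 * es_z[mask] + a2 * ld_z[mask] + a3 * sd_z[mask]
    p = np.clip(p, 0.0, 1.0)                             # keep probabilities valid
    converts = (~shrub) & (rng.random(shrub.shape) < p)  # only non-shrub pixels convert
    return shrub | converts

# rng = np.random.default_rng(0); apply for 2014->2040->2070->2100, recomputing
# suitability and seed-arrival probability at the start of each period.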
| Scientists investigating the growth of arctic vegetation have found that seed dispersal and fire will slow its land expansion in the long term, despite more favorable conditions from a warming planet. Previous estimates predicted that arctic shrubs—stubby, dense bushes that cover much of the tundra region—would eventually conquer about 39% of the non-shrub area in the Arctic. But a new analysis suggests the flora will only be able to expand into 25% of the area's tundra by the year 2100, said Yanlan Liu, lead author of the study and an assistant professor of earth sciences at the Ohio State University. The study, published in the journal Nature Communications, examined patterns of shrub growth in the past and then applied an array of environmental variables, including precipitation, elevation and days when the temperature was above 5 degrees Celsius, to determine what may happen in the future. Researchers concluded that while shrub expansion has been positively correlated to environmental suitability—the probability of a species' survival in a certain location—for the past few decades, that isn't the case anymore. Instead, changes in shrub expansion can now mostly be attributed to both fire and seed dispersal, or when these shrub seeds are carried to non-shrub areas by gravity, animals, wind, or even ocean currents and ice floes. "Dispersal and fire are much more important than other environmental conditions, in that even if a given place is warm enough, shrubs won't necessarily grow in that region," Liu said. "Instead, it's limited by whether seeds can arrive at that location, or whether the seed bed or the soil condition is nutritious enough to sustain shrub growth." Liu added that while fire may cause seed and seedling death in the short term, wildfires also improve soil nutrient availability and facilitate germination. But since the Arctic region is warming up twice as fast as the global average, scientists have been able to detect a marked change in vegetation distribution over the course of several decades. Whereas previous studies typically relied on observational studies and field models to assess this growth, Liu's team used high-resolution satellite imagery from NASA's Arctic-Boreal Vulnerability Experiment (ABoVE) data products to observe shrub expansion across Alaska and much of western Canada between 1984 and 2014. Based on this data, they estimated shrub expansion in the year 2040, 2070 and 2100. The study concluded that the observed pattern of shrub spread would not follow warming patterns in the future, and that previous models had, in fact, overestimated future shrub growth. This discrepancy could only be explained by considering factors like seed dispersal and fire, Liu said. It's especially important to keep track and predict arctic shrub expansion because its growth alters the global energy and carbon budget, or how much solar energy and carbon dioxide gets absorbed in Earth's surface, she added. But Liu said her research opens up brand new avenues to explore how plant life proliferates in other locations around the world. "This study is the very first step to look at large-scale shrub expansion in Arctic-Boreal regions at a high resolution," said Liu. "Our data-driven analysis explains what happened, but the next step is to explain why." Going forward, Liu plans to use dynamic vegetation models to simulate how vegetation distribution and structure will look in the future, and then combine the observations to make those projections more accurate. 
| 10.1038/s41467-022-31597-6 |
Chemistry | More exact, ethical method to tell the sex of baby chickens | Roberta Galli et al, In ovo sexing of chicken eggs by fluorescence spectroscopy, Analytical and Bioanalytical Chemistry (2016). DOI: 10.1007/s00216-016-0116-6 Journal information: Analytical and Bioanalytical Chemistry | http://dx.doi.org/10.1007/s00216-016-0116-6 | https://phys.org/news/2016-12-exact-ethical-method-sex-baby.html | Abstract Culling of day-old male chicks in the production of laying hen strains involves several million animals every year worldwide and is ethically controversial. In an attempt to provide an alternative, optical spectroscopy was investigated to determine nondestructively in ovo the sex of early embryos of the domestic chicken. The extraembryonic blood circulation system was accessed by producing a window in the egg shell and the flowing blood was illuminated with a near-infrared laser. The strong fluorescence and the weak Raman signals were acquired and spectroscopically analyzed between 800 and 1000 nm. The increase of fluorescence intensity between 3.5 and 11.5 days of incubation was found to be in agreement with the erythropoietic stages, thus enabling identification of hemoglobin as the fluorescence source. Sex-related differences in the fluorescence spectrum were found at day 3.5, and principal component (PC) analysis showed that the blood of males was characterized by a specific fluorescence band located at ∼910 nm. Supervised classification of the PC scores enabled the determination of the sex of 380 eggs at day 3.5 of incubation with a correct rate of up to 93% by combining the information derived from both fluorescence and Raman scattering. The fluorescence of blood obtained in ovo by illumination of embryonic vessels with an IR laser displays spectral differences that can be employed for sexing of eggs at an early stage of incubation, before the onset of embryo sensitivity and without hindering development into a healthy chick. Introduction The possibility of determining the sex of birds “in ovo” is attracting increasing attention as a potential method to overcome the culling of day-old male chicks in the poultry industry. Laying hen strains of modern breeding differ from broiler strains, so that male birds of the laying strain are not profitable for meat production. Therefore, day-old cockerels are culled directly in the hatchery. This practice involves a very large number of animals: approximately 370 million in North America and 420 million in Europe every year [ 1 ]. In Germany alone, 40 million chicks are killed every year, in accordance with animal welfare legislation, by asphyxiation with carbon dioxide or by grinding [ 2 ]. Killing of day-old chicks is considered problematic, and ethical concerns have triggered increasing research aimed at providing alternatives [ 3 ]. The optical spectrum contains information about the biochemical composition and/or the structure of a biological sample and can provide information about sex as well. Vibrational spectroscopic techniques have been applied to the sexing of birds, both of hatched birds and of incubated and unincubated eggs. UV resonance Raman spectroscopy [ 4 ] and Fourier-transform infrared absorption spectroscopy [ 5 , 6 ] were used to retrieve the sex of hatched animals based on DNA differences by analyzing cell material extracted from the feather pulp.
The use of Fourier-transform infrared spectroscopy was also reported earlier by our group for sex recognition based on the spectrum of germinal cells obtained from unincubated eggs [7]. Optical methods for in ovo sexing have the advantage of being applicable in situ without taking samples, and can provide real-time sexing by eliminating the need to wait for the results of chemical or genetic analyses performed on previously extracted samples. Therefore, optical methods have clear advantages toward industrial exploitation when compared to other non-destructive approaches proposed for in ovo sexing that are based on hormone measurement [8–10] and DNA analysis [11–13], which all require the extraction of egg material or fluids. Totally non-invasive methods, like the selection of unincubated chicken eggs based on morphometric parameters, can only slightly bias the ratio between sexes [14]. There is also evidence that egg odor encodes sex information; however, this has not been exploited for sexing [15]. We recently reported near-infrared Raman spectroscopy for in ovo sexing of incubated chicken eggs, showing that it provides correct sexing rates of up to 90% without hindering embryo development [16]. We demonstrated that, already during the fourth day of incubation (i.e., 84 ± 4 h), Raman measurements can be performed directly on the blood that flows in the extraembryonic vessels of the vitelline circulation, avoiding any damage to the embryo. The Raman spectra delivered the biochemical fingerprint of the embryonic blood, from which the sex information was obtained using supervised classification algorithms. In that study, we observed that the fluorescence background intensity was significantly different between sexes, but this information was not exploited because of the very high overlap between sexes. In Raman spectroscopy, the presence of an intense fluorescence background is normally considered a pitfall rather than a source of information and is thus suppressed or removed during data processing [17]. Fluorescence spectra of biological tissue are broad compared with Raman and FT-IR spectra, and they do not carry such detailed biochemical information. However, spectral fluorescence-based methods have continuously evolved and improved, enabling new cellular features to be addressed [18]. Here, we show that spectral analysis of the near-infrared fluorescence signal of the blood flowing in the extraembryonic vessels can indeed provide sex information for eggs of the domestic chicken ( Gallus gallus f. dom.). For this purpose, we spectroscopically analyzed the backscattered radiation of blood illuminated with a 785 nm laser, repeating the measurements at different time points of the incubation and comparing them with the time course of erythropoiesis to identify the source of the fluorescence signal. Afterwards, we exploited the fluorescence spectral profile for sex determination and compared the results with the combination of fluorescence and Raman scattering. Methods Egg handling Fertilized eggs of a white layer strain (LSL—Lohmann Selected Leghorn) were used in all experiments. They were obtained from Lohmann Tierzucht GmbH (Cuxhaven, Germany). The freshly laid eggs were inspected for any shell damage and then stored at approximately 14 °C. Immediately before starting the incubation, egg shells were windowed using a 30 W CO 2 laser (Firestar v30, Synrad, Mukilteo, Washington, USA) equipped with a scanning head (FH Flyer, Synrad, Mukilteo, Washington, USA).
The shells were laser-scribed at the pointed end along a circular path of 12 mm diameter, without removing the shell window. The incubation was performed in an automatic egg incubator (Favorit Olymp 192, Heka-Brutgeräte, Rietberg, Germany) at 37.8 °C and a humidity of 53%, in vertical position with the pointed end downward. A ±45° tilting at an interval of 3 h was applied during the incubation until day 3.5 (i.e., 84 h). At this time point, the eggs were subjected to the measurement. The shell window was gently removed using a scalpel and the spectrum was acquired. After measurement, embryonic tissue samples were isolated for subsequent molecular sexing, or the shell windows were closed using a biocompatible adhesive tape (Leukosilk, BNS Medical GmbH, Hamburg, Germany) and the eggs further incubated to perform time-course experiments. Measurements were repeated at days 4.5, 5.5, 6.5, 7.5, 9.5, and 11.5. At each time point, the tape was removed before the measurement and reapplied afterwards. The incubation between the measurements was performed with the egg in an upright position to keep the embryo and the main blood vessels optically accessible. Before each measurement, the eggs were inspected and the spectrum was acquired only on blood vessels of vital embryos. Otherwise, samples of embryonic tissue were isolated and kept frozen at −80 °C for subsequent sex determination. After the last measurement at day 11.5, all embryos were isolated and kept frozen for sex determination as well. Molecular sexing Reference sexing was obtained with genetic analysis of embryonic tissue based on the polymerase chain reaction (PCR). Alkaline extraction of DNA was performed as described elsewhere [19]. The samples were treated in NaOH for 20 min at 75 °C and subsequently neutralized with Tris–HCl (pH = 7.5). Afterwards, the samples were centrifuged for 10 min at 14,000 rpm and the supernatants transferred to reaction tubes. The DNA content of the supernatant was measured using a Genesys 10 Bio UV–vis spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). The amplification of the CHD-1 gene was performed in a T Gradient PCR cycler (Biometra GmbH, Göttingen, Germany). The primers and the temperature profile are described elsewhere [20, 21]. Finally, the amplified PCR products were separated by ethidium bromide agarose gel electrophoresis and visualized by UV light. Spectroscopy Spectroscopy was performed with a RamanRxn spectrometer (Kaiser Optical Systems Inc., Ann Arbor, USA). The excitation was obtained with a diode laser emitting at a wavelength of 785 nm (Invictus 785-nm NIR, Kaiser Optical Systems Inc., Ann Arbor, USA). A fiber optic probe (MR-Probe-785, Kaiser Optical Systems Inc., Ann Arbor, USA) was used in the experiments. The excitation fiber had a core diameter of 62.5 μm and the collection fiber of 100 μm. The fiber probe was coupled to a self-built microscopy system that enabled coaxial vision. The microscopy system was composed of commercial elements and is pictured in Fig. 1. Fig. 1 Schema of the microscopy system, with inset showing the egg shell window and the main vitelline vessels suited for in ovo spectroscopic measurement. A Keplerian telescopic system was used as a beam expander to collimate the laser beam and to match its diameter to the objective pupil. The beam expander was connected to a 45° filter cube that contained a short-pass dichroic mirror with its edge at 670 nm (FF670-SDi01, Semrock Inc., Rochester, New York, USA).
The mirror reflected the laser excitation at 90° into the microscope objective, a ×20/0.4NA Plan Apo NIR (Mitutoyo Corp., Kanagawa, Japan), which was also mounted on the filter cube. The laser spot in the focus had a diameter of ∼55 μm and the measured laser power was 160 mW. The light was collected in reflection mode by the objective. The collected near-infrared light with wavelength above 670 nm was reflected by the dichroic mirror back to the fiber probe and propagated to the spectrograph. The collected visible light was transmitted through the dichroic mirror and used to obtain the image of the sample. For this purpose, a CCD camera with a 50 mm focal length lens and a NIR hot mirror were mounted on the aperture of the filter cube facing the objective. In order to maximize the visibility of perfused blood vessels, side illumination with green LEDs was employed. A motorized x-y-z micrometer stage was used to move the egg and bring a blood vessel into the laser spot. Blood vessels with a diameter larger than 100 μm were manually chosen for the measurement. An autofocus system based on blood flow detection in the camera images was used to set the laser focus inside the blood vessel. A tracking system remained active during the whole acquisition to compensate for any movement of the blood vessel. The autofocus and tracking software modules are described elsewhere [22]. An enclosure of the system provided rejection of ambient light and protection from reflected or scattered laser light during acquisition. Total acquisition time was set to 40 s (20 accumulations of 2 s). The acquired spectral range was from 794 to 1054 nm (i.e., from 150 to 3250 rel. cm −1 ) and the spectral resolution was 0.3 nm (i.e., approximately 4 cm −1 ). Data analyses Spectroscopic data were analyzed using the MATLAB package (MathWorks Inc., Natick, USA) and statistics were calculated with Prism 6.0 (Graph Pad Software Inc., La Jolla, CA, USA). The intensity of the fluorescence was calculated as the area under the spectra in the range 820–1000 nm. Principal component analysis was performed on the raw spectra using the MATLAB function princomp . Classification was performed using a support vector machine. The MATLAB function svmtrain was employed to train the classifier, using a quadratic kernel and a least-squares method to find the separating hyperplane. Afterwards, the function svmclassify was employed to retrieve the classification. Results and discussion Spectral analysis of blood fluorescence In the chicken egg, blood islands appear toward the end of the first day of incubation, and already at day 2 of incubation the blood circulation starts, driven by the primitive heart. Between days 3 and 4, the vascularized area of the yolk sac is roughly circular and reaches a diameter between 3 and 4 cm at day 4. The major blood vessels are the paired lateral vitelline arteries and veins, and the unpaired anterior and posterior vitelline veins. The branching pattern of arteries and veins is dichotomous, creating a treelike topology. Around day 8, the respiratory function is transferred to the blood vessels of the newly formed chorioallantoic membrane [23]. When the egg is brought into a vertical position at day 3.5 of incubation, the vascularized area moves to the top and, after opening of the shell window, it remains at the surface, so that the vessels can be optically sampled through the shell window (Fig. 1 , inset).
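To make the data-analysis pipeline above concrete, here is a minimal MATLAB sketch of its three steps (integrated fluorescence, PCA, quadratic-kernel SVM) on placeholder data. All variable names and the synthetic inputs are assumptions, not the authors' code; princomp, svmtrain and svmclassify are used because the paper names them, but they belong to MATLAB releases of that era and were later superseded by pca, fitcsvm and predict.

```matlab
% Sketch of the described pipeline on synthetic placeholder data.
wl = linspace(794, 1054, 867);                 % wavelength axis in nm (~0.3 nm steps)
spectra = rand(380, numel(wl));                % placeholder raw spectra (eggs x wavelengths)
sexLabels = [repmat({'M'}, 199, 1); repmat({'F'}, 181, 1)];  % PCR reference sexing

% 1) Integrated fluorescence intensity over 820-1000 nm
inBand = wl >= 820 & wl <= 1000;
fluoIntensity = trapz(wl(inBand), spectra(:, inBand), 2);

% 2) PCA on the raw spectra; the scores are the projections onto the PCs
[coefs, scores] = princomp(spectra);

% 3) SVM with quadratic kernel and least-squares search for the hyperplane
model = svmtrain(scores(:, 1:8), sexLabels, ...
                 'kernel_function', 'quadratic', 'method', 'LS');
predicted = svmclassify(model, scores(:, 1:8));
correctRate = mean(strcmp(predicted, sexLabels));   % reclassification rate
```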
The position of measurement was always chosen on a large vessel (diameter larger than 100 μm), preferably in the main lateral or anterior/posterior veins. The backscattered signal generated upon irradiation of extraembryonic blood vessels with a laser beam at 785 nm was spectroscopically analyzed daily on a set of 27 eggs from 3.5 to 7.5 days of incubation, and then at 9.5 and 11.5 days. Six embryos died during the time course of the experiments because of egg incubation in the upright position without the required periodic tilting, repeated egg removal from the incubator, or opening of the shell window in order to perform the measurements. The mean spectra calculated for each day of measurement are shown in Fig. 2a . A weak Raman signal is superimposed on the strong fluorescence. The fluorescence intensity was calculated as the area under the curves and is shown in Fig. 2b . The signal above 1000 nm (2740 rel. cm −1 ) was not included in the calculation as it is dominated by the Raman bands of CH and OH. The fluorescence increased with the incubation time until day 7.5 and tended to decrease afterwards. Fig. 2 a Mean blood spectra acquired in ovo from day 3.5 until day 11.5 of incubation. b Time course of total blood fluorescence acquired in ovo in the range 820–1000 nm (mean ± SD); the number of eggs included in the statistics is indicated. c Spectra of plasma and erythrocytes (red blood cells—RBC) separated by centrifugation of embryonic chicken blood extracted at day 11.5 of incubation; the RBC spectrum was acquired using a twentieth of the laser power to avoid thermal damage. Different erythropoietic phases take place starting at day 2. The primitive erythrocytes undergo six rounds of mitosis until day 5. In the middle of day 5, the definitive erythrocytes start entering the circulation and gradually replace the primitive line by day 7. By day 8–9, the erythrocytes have completed their maturation [24]. The time course of fluorescence is in agreement with the time course of hematocrit observed during erythropoiesis [25], and mimics the increase of hemoglobin measured in embryonic and mature erythrocytes [26]. A linear increase of hemoglobin content was reported until day 5, followed by a plateau between days 5 and 6, corresponding to the transition from the proliferative to the post-mitotic phase of primitive erythrocytes. This plateau can also be seen in the time course of fluorescence. Afterwards, the hemoglobin content increases again until day 9 and slightly decreases afterwards, when the erythrocyte maturation is concluded. Fluorescence intensity displays a similar trend at days 9.5 and 11.5 as well. The comparison between the time courses of fluorescence intensity and hemoglobin content indicates that hemoglobin is the main source of the observed NIR fluorescence. Heme in red blood cells is a molecule that belongs to the family of porphyrin complexes. Porphyrins normally display strong reddish-orange fluorescence. However, heme constitutes an exception, as the fluorescence of the porphyrin ring is quenched by the coordinated iron [27]. Steady-state fluorescence of heme in the UV range is nevertheless observed and attributed to fluctuations in the protein structure that partly neutralize the quenching. The source of this steady-state fluorescence is identified in the tryptophan residues [28].
Moreover, it was already shown in vitro that excitation at 785 nm of methemoglobin over the concentration range spanning the normal capillary blood range produces a fluorescence that increases linearly with increasing hemoglobin volume percent [29], although fluorescence detected in vivo was attributed to both plasma and hemoglobin [30]. In our experiments, the fluorescence was observed only during irradiation of blood (i.e., when acquiring the signal from perfused blood vessels and from the heart chambers), while other embryonic tissue, egg components (egg white and yolk), as well as extraembryonic tissue (vitelline and chorioallantoic membranes) did not display any fluorescence, but only Raman scattering. Moreover, blood taken from embryos at day 11.5 of incubation was centrifuged to separate erythrocytes from plasma and then the two components were subjected to spectroscopy. While the erythrocytes displayed a strong fluorescence signal, the plasma displayed Raman scattering only (Fig. 2c ). It is also worth noting that bleaching of fluorescence during irradiation of perfused blood vessels is prevented by the blood flow itself. The blood flow velocity in the center of large vitelline vessels is around 1 mm/s already at early development stages [31], and therefore the exposure time of blood cells traveling through the ∼55 μm laser spot is as short as 0.05 s. The embryos were sexed by PCR and the time course of the fluorescence for both sexes was analyzed on the subset of eggs that survived until the end of the experiments. The mean spectra for both sexes are shown in Fig. 3a from day 3.5 until day 11.5 of incubation. Male and female egg fluorescence exhibited the largest difference and lowest variability at day 3.5 (Fig. 3b ). Starting at day 4.5, the intensity difference progressively declined. Male blood was characterized by more intense fluorescence until day 7.5. While the fluorescence intensity of female blood steadily increased with the incubation time, the fluorescence intensity of male blood did not increase from day 3.5 to day 4.5. At this point of the incubation, the main changes in the fluorescence of male blood affected the spectral shape. After day 9.5 of incubation, the mean fluorescence intensities of both sexes became similar. The higher fluorescence of male blood at early incubation stages is likely related to faster erythropoiesis in male embryos. It is known that in the first phases of incubation (∼30 h), male embryos display faster development [32]. Moreover, it was reported that at later incubation stages (after day 13), male embryos possess a higher hematocrit [33]. Our data showed that developmental differences between sexes exist throughout the whole first third of incubation. Fig. 3 a Mean blood spectra of male (blue) and female (red) embryos ( n = 7 for each sex) acquired in ovo from day 3.5 until day 11.5 of incubation. b Intensity difference (male − female), mean and SD. The spectral differences between male and female blood fluorescence detected at day 3.5 may be used for in ovo sexing. We have also shown previously that at this time point it is possible to measure in ovo without impairing embryo development [16]. In ovo sexing by means of fluorescence In order to verify whether the differences between the fluorescence spectra of male and female blood at day 3.5 of incubation enable reliable sexing of eggs, the measurements were performed on 380 fertilized eggs incubated until 84 ± 4 h.
Reference PCR showed that 199 eggs contained a male and 181 eggs a female embryo. Figure 4a shows the mean spectra and the corresponding standard deviations. The spectral differences of the fluorescence between the sexes were confirmed, with the average fluorescence intensity being much stronger for males. However, there is overlap between the two groups. In Fig. 4b , the signal intensity calculated as the area under the spectra in the range 820–1000 nm is shown for all measured eggs, together with the mean value and the standard deviation. It becomes evident from these data that the lower signal intensities are rather close for both sexes, while higher intensities are characteristic of males, with only one exception in the female group. This indicates that, although it carries sex information, the fluorescence intensity alone is not sufficient for sex recognition with a high correct rate. Moreover, the intensity of males is affected by a larger variability compared to that of females, as indicated by the larger standard deviation (SD). Fig. 4 a Mean spectra of blood acquired from female and male eggs in the range 820–1000 nm; the overlapping range of SD is indicated in gray. b Total area intensity calculated in the same range as shown in ( a ) with mean value and SD. Principal component analysis was applied in order to gain insight into the different spectral contributions that are related to sex. Mathematically, principal component analysis enables extraction of the most important information from the observation data table by performing a linear decomposition and computing new variables called principal components (PCs). The values of these new variables are called scores, and can be interpreted as the projections of the observations onto the PCs [34]. This approach allows discrimination among spectral groups using scatter plots of scores, and the loading vectors convey the spectral variations that differentiate the data [35]. Score intensities for PCs #1 to #8 are shown in Fig. 5a , and the corresponding loading vectors in Fig. 5b . The mean scores of PCs #1, #2, #3, and #7 are significantly different between the sexes (two-tailed t test with Welch correction, p < 0.001). The first PC score is higher for males and accounts for the overall higher fluorescence intensity. The loading vector of PC #1 represents the mean spectrum and includes both fluorescence and Raman scattering signals. The higher components describe the "spectral" variance of the data. PC #2 accounts for a variance as high as 97.5%. The score is higher for males and the loading vector represents a fluorescence band which is typical of male blood. PC #3 accounts for a variance of 1.3%; the score is higher for females and accounts for differences in the Raman signal. The Raman bands of the loading vector indicate the presence of proteins (at 1003, 1085, 1304, 1445, and 1665 rel. cm −1 [36]) and of nucleic acids (at 780 and 826 rel. cm −1 [36]), while bands of hemoglobin were not observed. Therefore, this component might be interpreted as representing blood cells other than erythrocytes. The overall spectrum appears very similar to the one reported for blood immune cells [37]. Although the immune system of the embryo is not yet developed, the early presence of at least macrophages is known [38]. Moreover, hematopoietic stem cells and progenitor cells of myeloid and lymphoid lineages were found in the embryonic blood during the fourth day of incubation [39].
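Continuing from the sketch above, the per-component sex comparison reported for the PC scores can be written as follows; the 'Vartype','unequal' option of ttest2 (available in newer MATLAB releases) applies the Welch correction named in the figure legend.

```matlab
% Welch-corrected two-sample t-tests on the first eight PC scores
isMale = strcmp(sexLabels, 'M');
pvals = zeros(1, 8);
for k = 1:8
    [~, pvals(k)] = ttest2(scores(isMale, k), scores(~isMale, k), ...
                           'Vartype', 'unequal');   % Welch correction
end
sexRelatedPCs = find(pvals < 0.001);   % the paper reports PCs #1, #2, #3 and #7
```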
Therefore, the interpretation of this component remains tentative. PCs #4 and #5 account for variances of 0.6 and 0.2%, respectively. Both loading vectors of PC #4 and PC #5 are dominated by Raman bands of unsaturated lipids at 1300, 1440, 1656, and 1748 rel. cm −1 [36]. In particular, the loading vector of PC #5 was found to be fully consistent with the Raman spectrum of yolk, which has been shown earlier [16]. This component likely accounts for sampling of egg material outside the blood vessel and is explained by the large laser spot and the non-confocal microscope configuration used in the experiments. PCs #6 and #7 account for variances close to 0.1% and were interpreted as variations of the protein profile: the loading vector of PC #6 is characterized by bands typical of proteins with cyclic rings at 1003, 1210, 1545, and 1606 rel. cm −1 , and that of PC #7 by Raman bands of amide vibrations at 1225, 1563, and 1637 rel. cm −1 [36]. Finally, the loading vector of PC #8, which accounts for a variance lower than 0.1%, contains vibration bands of protein cyclic rings at 1210 and 1545 rel. cm −1 [36] and thus represents changes in the protein profile, too. Fig. 5 a First eight PC scores; two-tailed t test with Welch correction, *** p < 0.001. b First eight PC loading vectors; the positions of the Raman bands discussed in the text are indicated in rel. cm −1 . In order to test a classification for in ovo sexing based on PC scores, the dataset was randomly split into two groups, which were used as training set ( n = 190) and test set ( n = 190). The training set was used to create the classification model, while the test set was used to determine model performance. A support vector machine (SVM) was used as the solution to the two-class (that is to say, two-sex) problem. SVM creates a boundary in the form of a hyperplane between two groups that maximizes the margin between the most similar samples in each group [40]. Here, the SVM was applied to the PC scores to find the quadratic hyperplane that separates the sexes of the training set. By using the scores of the first two PCs only—i.e., the fluorescence information alone (Fig. 6a )—the training set was reclassified with a correct rate of 81% (females 76/91, males 78/99), and the test set was classified with a correct rate of 85% (females 75/90, males 86/100). By including higher PCs, the correct rate of classification increased with the dimensionality (Fig. 6b ), and the best performance was attained by using the first eight PCs. In this case, the training set was reclassified with a correct rate of 93% (females 85/91, males 91/99), and the test set was classified with a correct rate of 91% (females 81/90, males 91/100). By further increasing the dimensionality, the algorithm became overfitted: the classification rate of the training set further increased, while that of the test set decreased. Fig. 6 a Scatter plot of PC #2 vs. PC #1 scores and the separating quadratic function found with the support vector machine. b Classification rate as a function of the dimensionality. The accuracy is limited by the variability of the data, which is likely related to the intrinsic variability of blood composition. Both the intensity and the spectral shape of the fluorescence depended on the developmental stage of the embryo. The measurements spanned ∼8 h, and it is expected that developmental variations may occur between embryos even though all eggs are placed in the incubator at the same time.
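The split-and-sweep procedure can be sketched as below, again with variables carried over from the earlier sketches; the random 190/190 partition matches the paper, while the sweep limit of 12 PCs is an illustrative assumption.

```matlab
% Random 190/190 split, then classification rate vs. number of PCs retained
n = numel(sexLabels);                           % 380 eggs
idx = randperm(n);
trainIdx = idx(1:n/2);  testIdx = idx(n/2+1:end);
rates = zeros(1, 12);
for d = 1:12
    mdl = svmtrain(scores(trainIdx, 1:d), sexLabels(trainIdx), ...
                   'kernel_function', 'quadratic', 'method', 'LS');
    pred = svmclassify(mdl, scores(testIdx, 1:d));
    rates(d) = mean(strcmp(pred, sexLabels(testIdx)));
end
[bestRate, bestD] = max(rates);   % the paper finds the optimum at eight PCs
```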
These differences exist within the same batch of eggs, depending on differences in the viability and vigor of the embryos, the size of individual eggs, or local temperature gradients inside the incubation chamber [41]. Experimental effects related to different sampling volumes due to blood vessel dimensions or variation of the focal position did not affect the classification. This is shown by the lack of any correlation between the score of PC #5 (representative of yolk) and the misclassified spectra. The mean score intensity calculated for wrongly classified spectra is −0.002 ± 0.062 (mean ± SD, n = 31) vs. 0.002 ± 0.050 calculated for the correctly classified ones (mean ± SD, n = 349). The mean score intensities are not statistically different (two-tailed t test, p = 0.7). Compared to our previous research, the present study provides significant improvements. We first proposed infrared absorption spectroscopy of the feather pulp to distinguish the sex of hatched birds [5, 6]. Those studies were performed on a small number of animals (turkeys and pigeons), as pilot studies aimed at demonstrating that optical spectroscopic methods can retrieve sex-related biochemical differences in birds. Furthermore, we investigated infrared spectroscopy of the germinal disk extracted from unincubated chicken eggs [7]. Sexing was achieved with high accuracy, but the approach is not transferable to practice because windowing of unincubated eggs causes a high occurrence of embryo mortality and morbidity [42, 43]. By analyzing the Raman spectra of embryonic blood, we achieved sex determination with an accuracy between 88% and 90% [16]. Moreover, the use of NIR laser excitation avoided phototoxic effects on embryos, so that healthy chicks hatched from the measured eggs. Here, we retained the experimental configuration with NIR excitation in order to rule out negative effects on embryos, but all spectral features of the backscattered spectra were used for sexing, thus also exploiting the sex information conveyed by fluorescence and increasing the correct classification rate to well over 90%. The exploitation of the multiple types of information contained in the backscattered light might open new possibilities for improving in ovo sexing accuracy. Fluorescence intensity, fluorescence spectral shape, and Raman scattering encode sex-related information associated with different blood components. Several spectral preprocessing approaches could be applied in order to "amplify" a specific feature, and a multi-parameter classification strategy could then be developed to better accommodate the intrinsic data variability, thus further increasing the sexing accuracy. Moreover, the exploitation of fluorescence may help to overcome some disadvantages associated with Raman spectroscopy of biological samples, as Raman scattering is a low-intensity process which requires highly efficient laser sources, low-noise detectors, effective Rayleigh filters, and high-throughput optics [19]. As the fluorescence intensity is some orders of magnitude larger than the Raman scattering signal, it might enable the experimental setup to be largely simplified and facilitate the future industrial deployment of optical techniques for in ovo sexing. Conclusion The results show that sex information can be extracted from the intensity and spectral shape of the near-infrared fluorescence of embryonic blood. The source of the fluorescence was identified as hemoglobin, and the spectral features of the fluorescence depend on the hematopoietic stage.
The intrinsic variability of developmental stages affected the accuracy of sexing based exclusively on fluorescence. As the Raman scattering was acquired simultaneously with the fluorescence signal, it could be employed to further increase the correct sexing rate to well over 90%, a significant improvement compared to sex determination based on Raman spectra only. The ideal criteria for sex determination have long been defined and include the absence of negative effects on embryo development and practicability on a large scale [44]. In ovo sexing based on spectral analysis of the backscattered radiation satisfies these requirements, as it is non-invasive, does not require extraction of egg material, and does not use consumables. Moreover, the method is applicable during the fourth day of incubation, before the onset of embryo sensitivity at day 7 [45], and is thus in agreement with animal welfare. The exploitation of fluorescence offers the potential to develop industrial systems for egg sexing that are not based on expensive spectrometers, but instead make use of a few light detectors with suitable bandpass filters to measure the signal intensity in selected spectral ranges. Afterwards, the sex information may be retrieved with simple calculations, such as intensity ratios, which make the method robust toward variations of laser power and detection efficiency. The availability of a cheaper and simpler alternative to spectroscopy might contribute to a broader diffusion of optical sexing in hatchery practice. On an international scale, the development of a practicable technique for in ovo sex determination has the potential to contribute to the prevention of the annual culling of 7 billion male layer hybrids, whose female siblings produce the current global demand of about 68.3 million tons of eggs per year. | Thanks to an imaging technique called optical spectroscopy, it is possible for hatcheries to accurately determine the sex of a chick within four days of an egg being laid. This non-destructive method picks up on differences in the fluids contained in an egg from which a cockerel will develop, compared to one from which a hen will hatch. Having such a reasonably cheap method by which to sex eggs can lead to more ethical practices in the poultry industry. It could prevent the annual culling of seven billion day-old cockerels worldwide that have little economic value, but whose female siblings help produce the current global demand of about 68.3 million tons of eggs per year. This is according to Roberta Galli of TU Dresden (Germany) and Gerald Steiner of TU Dresden and Vilnius University (Lithuania), lead authors of an article in Springer's journal Analytical and Bioanalytical Chemistry. The meat of modern laying hen strains differs from that of broiler strains in that it is not as edible. Because their meat is therefore of little economic value, many producers choose to cull day-old cockerel chicks that will not add to egg production. In North America and Europe alone, approximately 790 million chicks are therefore culled annually. The killing of day-old chicks by asphyxiation or by grinding is a problematic ethical issue that has triggered increasing research aimed at providing suitable alternatives. The current study is an extension of previous work by the German research team of which Galli and Steiner are part, which showed that imaging techniques can be used to sex incubated chicken eggs.
This can be done by noting sex-specific biochemical differences in the embryonic blood contained within an egg shell. In this study, a laser emitting at a wavelength of 785 nanometres was used to investigate 27 eggs up to 11 days after they were laid. The researchers were already able to note sex-related differences in the near-infrared fluorescence spectrum within 3½ days of incubation. Further analysis showed that the blood of male eggs is characterised by a specific fluorescence band located at ~910 nanometres. Galli and Steiner's team tested whether these fluorescence characteristics, together with changes in the wavelength of light, could be used to classify whether a hen or a cockerel will develop from an egg. When tested on 380 eggs, the method accurately did so in 93 percent of cases. "In ovo sexing based on spectral analysis is non-invasive, does not require extraction of egg material and does not use consumables. Moreover, the method is applicable during the fourth day of incubation, before onset of embryo sensitivity at day seven, and is therefore in agreement with animal welfare," notes Galli. Steiner says that there is potential to use such fluorescence techniques to develop industrial systems for egg sexing that are not based on expensive spectrometers. It can be done using a few light detectors with suitable bandpass filters to measure the signal intensity in selected spectral ranges. | 10.1007/s00216-016-0116-6 |
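The detector-based system suggested in the article above reduces, in essence, to a two-band intensity ratio. A minimal MATLAB sketch follows, reusing the spectra and wavelength axis from the earlier sketches; the band limits are illustrative assumptions centered on the ~910 nm male-specific fluorescence band, and the threshold would in practice be calibrated on PCR-sexed eggs.

```matlab
% Two-band intensity ratio emulating bandpass-filtered photodetectors
bandA = wl >= 890 & wl <= 930;    % assumed male-specific fluorescence band
bandB = wl >= 820 & wl <= 860;    % assumed reference band
ratio = trapz(wl(bandA), spectra(:, bandA), 2) ./ ...
        trapz(wl(bandB), spectra(:, bandB), 2);
threshold = median(ratio);        % placeholder; calibrate on a sexed training set
predictedMale = ratio > threshold;   % the ratio cancels common laser-power drift
```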
Biology | Spontaneous intake of essential oils: Long-lasting benefits for chicks | Aline Foury et al. Spontaneous intake of essential oils after a negative postnatal experience has long-term effects on blood transcriptome in chickens, Scientific Reports (2020). DOI: 10.1038/s41598-020-77732-5 Laurence A. Guilloteau et al. Spontaneous Intake and Long-Term Effects of Essential Oils After a Negative Postnatal Experience in Chicks, Frontiers in Veterinary Science (2019). DOI: 10.3389/fvets.2019.00072 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-020-77732-5 | https://phys.org/news/2020-12-spontaneous-intake-essential-oils-long-lasting.html | Abstract Chicks subjected to early stressful factors can develop long-lasting effects on their performance, welfare and health. Free access to essential oils (EO) in poultry farming could mitigate these effects and potentially reduce the use of antimicrobial drugs. This study on chickens analyzed the long-lasting effects of post-hatch adverse conditions (Delayed group) and the impact of EO intake on blood physiological parameters and the blood transcriptome. Half of the Control and Delayed groups had free access to EO, while the other half had only water, for the first 13 days post-hatching. Blood analyses of metabolites, inflammation and oxidative stress biomarkers, and mRNA expression showed sex differences. Long-lasting effects of the postnatal experience and of EO intake persisted in the blood transcriptome at D34. The early adverse conditions modified 68 genes in males and 83 genes in females. In Delayed males, six transcription factors were over-represented (NFE2L2, MEF2A, FOXI1, Foxd3, Sox2 and TEAD1). In females, only one factor was over-represented (PLAG1) and four were under-represented (NFIL3, Foxd3, ESR2 and TAL1::TCF3). The genes showing modified expression are involved in oxidative stress, growth, bone metabolism and reproduction. Remarkably, spontaneous EO intake restored the expression levels of some genes affected by the postnatal adverse conditions, suggesting a mitigating effect of EO intake. Introduction In poultry production systems, broiler chicks can be exposed to various stressful factors in the hatchery, during transportation to rearing houses and in their first days of life. These early-life stresses have long-lasting effects on their performance, but also on welfare and health 1 , 2 , often requiring the use of antibiotics. Alternatives to antibiotic treatment in animal production are a field of intense research, driven by the emergence of antibiotic resistance in bacteria and the societal demand for more sustainable rearing practices and veterinary medicines. An appealing alternative to antibiotics/chemical antimicrobial drugs is the use of specific plants or substrates with medicinal properties to control or to prevent diseases. In particular, essential oils (EO) extracted from aromatic plants are known to offer multi-functional medicinal properties including antimicrobial, antioxidant, anti-inflammatory, immune and nervous regulatory properties 3 , 4 , 5 , 6 . These regulatory properties are linked to the terpenoids (monoterpenes, sesquiterpenes) and aromatic compounds present in EO. Phenols, alcohols and ketones have also been reported to have an antibacterial action 7 . In chickens, EO have been employed as feed additives to promote growth and health 8 , 9 . However, in most studies the EO were included in the animals' feed, and their ability to mitigate the long-term effects of early-life stresses was never analyzed.
In a previous study, we developed an experimental model reproducing adverse perinatal conditions in chicks, including deprivation of feed and water, unpredictable shaking in transportation boxes and temperature changes for 24 h after hatching, summarized as a "negative postnatal experience". These conditions resulted in reduced growth up to 34 days of age 10 , specific fecal odors detectable by rats 11 and a fecal metabolomic signature highlighting persistent differences in adaptive response, energy metabolism and microbiota composition compared to optimal conditions 12 . Interestingly, we reported that chicks provided with EO independently of their feed were able to self-select which EO they consumed in response to their postnatal experience, and that this EO intake mitigated some of the detrimental effects on growth parameters induced by the negative postnatal experience 10 . To complete these data, we examined whether the spontaneous intake of EO would mitigate the long-lasting consequences of the negative postnatal experience on health parameters. More precisely, we hypothesized that the negative postnatal experience would alter blood physiological parameters such as metabolites, inflammation and oxidative stress biomarkers and reveal a specific transcriptomic signature in blood cells, and also that EO intake might restore these physiological parameters. We focused on gene expression profiling to assess health status because it is a non-invasive approach that is increasingly used to obtain a general, non-biased view of the biological pathways affected by a given condition. This approach has been used to assess the effects of early-life stress in humans 13 and also in animals 14 , including laying hens 15 , but not in broiler chickens. Based on our previous study, three EO were chosen for their complementary properties to control microbial infections, reduce the stress response, and improve digestive and immune system functions. Among the numerous biological activities of cardamom EO, antioxidant, antispasmodic, anti-inflammatory, gastroprotective and antibacterial properties have been reported 16 , 17 , 18 . Marjoram EO demonstrates a variety of biological activities including a hepatoprotective role 19 , and lemon verbena EO has been shown to have analgesic, anti-inflammatory, sedative and digestive properties 20 . Thus, the objectives of the present study were (i) to characterize the immediate and long-lasting effects of a negative postnatal experience on blood physiological parameters, and the long-lasting effects on whole gene expression in blood cells, of fast-growing broiler chickens, and (ii) to analyze the effects of EO intake on blood metabolites, inflammatory and redox statuses and the mRNA expression of a selection of genes in chicks exposed or not to this negative postnatal experience. Results In this section, to facilitate reading, the negative postnatal experience is called "delayed placement" and the group concerned is called the "Delayed group". Immediate effects of delayed placement on physiological parameters of blood The chicks of the Delayed group demonstrated a significant immediate effect on metabolism, shown by a decrease in plasma glucose and triglyceride concentrations (p < 0.0001 for both), and on the redox balance, shown by a significant increase in thiobarbituric acid reactive substances (TBARS; p < 0.001) in the liver and a tendency toward a rise in blood total antioxidant status (TAS; p = 0.091) (Fig. 1 ).
A significant sex effect (p = 0.009) and a significant interaction (p = 0.011) were found for triglycerides only. Figure 1 Effects of delayed placement and sex on metabolic and physiological parameters of plasma from one-day-old chicks: ( A ) glucose concentration; ( B ) triglyceride concentration; ( C ) thiobarbituric acid reactive substances (TBARS); and ( D ) total antioxidant status (TAS). Bars represent the mean ± SEM of values measured in the different groups. Two-way ANOVA results are shown for each parameter. D = delayed effect, X = interaction between delayed placement and sex effects. Long-lasting effects of delayed placement and EO intake on physiological parameters of blood Plasma metabolite concentrations and redox balance differed significantly between males and females at D34. Triglyceride concentrations were higher in males than in females, whereas uric acid, TAS, ferric reducing ability of plasma (FRAP) and haptoglobin-like activities were higher in females than in males (Table 1 ). Table 1 Metabolites and redox balance in blood of 34-day-old broiler chickens. Data are presented as mean ± SEM. Different letters correspond to significant differences ( p < 0.05) between males and females (ANOVA). At D34 in males, we observed no interaction between EO intake and the delayed placement after hatching, and no delayed placement effects. However, EO intake had significant effects on two physiological parameters, namely uric acid and FRAP activity, which were both increased with EO intake (Fig. 2A,B ). At D34 in females, an interaction between EO intake and the delayed treatment was observed for haptoglobin-like activity (p = 0.034), but without significant differences between groups (Fig. 2D ). No effect of the delayed treatment was observed, but triglyceride concentration was decreased with EO intake independently of the postnatal experience (Fig. 2C ). Figure 2 Effects of delayed placement and EO intake on physiological parameters of blood in 34-day-old male and female chickens: ( A ) uric acid concentration; ( B ) FRAP activity; ( C ) triglyceride concentration; and ( D ) haptoglobin-like activity. Bars represent the mean ± SEM of values measured in the different groups. Two-way ANOVA results are shown for each parameter. D = delayed effect, EO = essential oil, X = interaction between delayed placement and EO effects. Long-lasting effects of delayed placement on blood transcriptome in broiler chickens Microarray gene expression profiling was performed on blood cells from chickens at D34 in the Delayed and Control groups that had access to water only, to obtain a global biological signature of the long-lasting effects of the delayed placement. Microarrays consisted of 60,000 oligonucleotide probes, representing 12,349 annotated genes, i.e. 80% of the Gallus gallus genome. A principal component analysis of gene expression for each individual showed a strong sex effect (Fig. 3A ); therefore, the sexes were analyzed separately thereafter. No genes were found to be differentially expressed when using an adjusted p-value < 0.05. A non-adjusted p-value of p < 0.005 was thus used (Fig. 3B , volcano plots). This analysis enabled the detection of 68 differentially regulated genes in males (40 up-regulated and 28 down-regulated in the Delayed compared to the Control group) and 83 differentially regulated genes in females (33 up-regulated and 50 down-regulated in the Delayed compared to the Control group).
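A sketch of the thresholding logic behind these volcano plots is given below. The per-gene t-test, the synthetic matrices and all variable names are assumptions (the paper does not state which statistic produced its non-adjusted p-values), so this illustrates the selection rule rather than the authors' microarray pipeline.

```matlab
% Per-gene tests and log2 fold changes on normalized expression matrices
% (genes x samples) for Delayed (exprD) and Control (exprC) birds of one sex
exprD = randn(12349, 6);  exprC = randn(12349, 6);   % synthetic placeholders
nGenes = size(exprD, 1);
p = zeros(nGenes, 1);
for g = 1:nGenes
    [~, p(g)] = ttest2(exprD(g, :), exprC(g, :));
end
log2FC = mean(exprD, 2) - mean(exprC, 2);
isDE = p < 0.005;                    % non-adjusted threshold used in the paper
nUp = sum(isDE & log2FC > 0);        % cf. 40 in males, 33 in females
nDown = sum(isDE & log2FC < 0);      % cf. 28 in males, 50 in females
```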
None of the differentially expressed genes were shared by males and females (Fig. 3C , Venn diagram). The lists of differentially expressed genes are provided in Tables S1 (males) and S2 (females) in the supplementary files. Figure 3 Microarray analysis of blood cells from 34-day-old chickens after delayed placement. Principal component analysis (PCA) conducted on microarray gene expression (normalized data), with projection of the individuals from each group and sex on the first two dimensions of the PCA ( A ). Volcano plots in males (blue) and females (orange) using −log10(non-adjusted p-value) on the y-axis and log2(Delayed vs. Control fold change) on the x-axis; the red dotted line corresponds to a non-adjusted p-value = 0.005, and genes selected for further analyses are above this line ( B ). Venn diagram showing the number of differentially expressed genes between the Delayed and Control groups for each sex and the lack of overlap between sexes (zero genes in common); UP = genes upregulated in Delayed chickens, DOWN = genes downregulated in Delayed chickens ( C ). The Gene Ontology (GO) analysis of the differentially expressed genes for each sex indicated modification of broad biological processes (Fig. 4 ). Interestingly, in Delayed males the GO terms are related to an increase in the response to stress and to external stimuli and in the defense response, and to a down-regulation of metabolic processes. In Delayed females, the GO terms suggest that microtubule-based processes were modified, which would impact cell communication and signaling, and that mitochondrial gene expression decreased. Figure 4 Heatmaps displaying the 68 and 83 differentially expressed genes between the Delayed and Control groups in males and females, respectively. Enriched GO terms (non-adjusted p-value < 0.05) and genes are shown on the right for each group. The p-values of GO terms were determined on the WebGestalt website. To investigate the regulatory pathways that drive the expression of the differentially expressed genes, we used the "single site" analysis of the oPOSSUM software to detect over-represented conserved transcription factor binding sites in the set of differentially expressed genes for each sex. The transcription factors significantly over-represented in the Delayed compared to the Control groups for each sex, together with the corresponding target genes, are shown in Fig. 5 . In males, no transcription factors were found to be significantly under-represented in the Delayed group, but six transcription factors were over-represented: Foxd3, FOXI1, MEF2A, NFE2L2, Sox2 and TEAD1. In females, PLAG1 was the only transcription factor over-represented in the Delayed group. The other transcription factors, NFIL3, ESR2, TAL1::TCF3 and Foxd3, were under-represented in the Delayed group. It should be noted that Foxd3 was over-represented in Delayed males, but under-represented in Delayed females. Figure 5 Circos plots (GOPlot R package) of over-represented conserved transcription factors (right side) and corresponding target genes (left side) enriched in Delayed compared to Control chickens for each sex. Up-regulated and down-regulated genes are indicated by a red and blue square, respectively. LogFC = log2 fold change. Significantly over- or under-represented transcription factors were detected using the oPOSSUM software.
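Over-representation of binding sites in a gene list is commonly scored with a hypergeometric (one-tailed Fisher) tail probability. The sketch below shows that statistic with hypothetical counts; it is a generic illustration of the idea, not oPOSSUM's exact scoring, which additionally reports Z-scores.

```matlab
% Hypergeometric tail test for binding-site enrichment (hypothetical counts)
N = 12349;   % annotated genes on the array (background)
K = 1500;    % background genes carrying the binding site (assumed)
n = 68;      % differentially expressed genes (males)
k = 18;      % DE genes carrying the site (assumed)
pEnrich = 1 - hygecdf(k - 1, N, K, n);   % P(X >= k) under sampling w/o replacement
```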
Long-lasting effects of EO intake on blood gene expression in broiler chickens after the delayed placement To evaluate the effect of the chicks' EO intake on the blood transcriptome according to their postnatal experience, we performed quantitative PCR (qPCR) on a selection of genes based on the microarray results. We focused on the oxidative stress pathway identified in males (represented by the NFE2L2 and MEF2A transcription factors) and on the PLAG1 pathway identified in females, by selecting gene targets of either NFE2L2 or MEF2A (n = 18) and targets of PLAG1 (n = 5) according to oPOSSUM. To complete the gene selection, eleven genes involved in oxidative stress or inflammation not tested on the microarray were also analyzed and called "oxidative stress" or "inflammation" genes, respectively. Additionally, other genes which were significantly differentially expressed between the Control and Delayed groups in males (n = 10) and females (n = 4) were selected and called "other responsive genes". The list of these 48 selected genes, the primer sequences used for quantitative real-time PCR and the associated signaling pathways are presented in Table S3 . Quantitative PCR was conducted on blood-cell cDNA from the D34 chickens tested in the microarray analysis (Control and Delayed groups) and also from those tested for EO or water intake (EO and W groups). Significant interactions in gene expression were found between the delayed placement and the chickens' intake of EO. Among all the genes tested, a significant interaction or tendency was found for 19 out of 33 genes in males and for three out of 15 genes in females. In males provided with water only, a significant effect of the delayed placement on gene expression was confirmed for five genes involved in oxidative stress (C1QTNF7, SLC16A10, ANKH, HUS1 and SOST), but for only one gene (XDH) in water-only females (Fig. 6A,B ). A significant effect (or tendency) of the delayed placement was also confirmed for the expression of four other genes in males and five genes in females, independently of the EO intake (Table 2 ). These genes were significantly differentially regulated on the microarray as "other responsive genes" (ASPM, BRD9 and PTPLB) in males, or involved in oxidative stress (SOD3) or inflammation (COX2 and PIT54) and in the PLAG1 pathway (MEX3A, TMEM125 and ZSWIM1) in females. Figure 6 Effects of EO intake on blood-cell gene expression in 34-day-old male ( A ) and female ( B ) chickens after the delayed placement (qPCR analyses). Bars represent the mean ± SEM of values measured in the different groups. Different letters between bar plots indicate significant differences or tendencies (p < 0.05 or p < 0.10) between groups detected by post-hoc analyses following two-way ANOVA. Table 2 Placement and EO effects on blood RNA expression in 34-day-old chickens after a negative postnatal experience. Interestingly, the Delayed group showed a clear favorable effect of EO intake for the SOST gene in males and females, for the SLC16A10 gene in males, and for the RARRES2 gene in females (Fig. 6A,B ). Indeed, for these genes, EO intake in the Delayed group significantly restored the gene expression level observed in the Control group supplied with water only. Unexpectedly, the intake of EO significantly modified gene expression in the Control group for 13 genes in males and for one gene (XDH) in females.
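The two-factor analysis applied to each gene's qPCR expression can be sketched in MATLAB as follows; the expression vector and group labels are synthetic placeholders, and the balanced layout is an assumption made for illustration.

```matlab
% Two-way ANOVA with interaction for one gene's relative expression
nBirds = 48;
expr = randn(nBirds, 1);                               % placeholder expression values
placement = repmat({'Control'; 'Delayed'}, nBirds/2, 1);
oil = [repmat({'Water'}, nBirds/2, 1); repmat({'EO'}, nBirds/2, 1)];
pvals = anovan(expr, {placement, oil}, 'model', 'interaction', ...
               'varnames', {'Placement', 'EO'}, 'display', 'off');
% pvals(1): delayed-placement effect; pvals(2): EO effect; pvals(3): the
% interaction term, which is what flags genes such as SOST, where EO intake
% restores Control-like expression only in Delayed birds.
```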
These genes were involved in oxidative stress (C1QTNF7, KIAA1217, HUS1, RTN4, TBC1D12, ANKIB1, POLR1D and INOS) or qualified as "other responsive genes" in the microarray analysis (AP3M1, NDFIP1, OTUD3, TPPP, BTBD1 and XDH) (Table S3 , Fig. 6A,B ). A significant effect of EO intake was also observed in males on the expression of the ASPM, COX2 and PIK3AP1 genes, regardless of the postnatal experience (Table 2 ). Discussion This study presents the immediate and long-lasting effects of a delayed placement of chickens, considered a negative postnatal experience mimicking early-life stresses encountered in poultry production, and the potential beneficial effects of EO intake on blood physiological parameters and on the gene expression profiles of chicken blood cells. In a previous study, we showed that chicks spontaneously chose to consume EO or not, according to their postnatal experience, when different EO were presented individually 10 . In this study, we detected significant differences between sexes for several blood parameters. Triglyceride concentrations were higher in females than in males after hatching, but this was reversed at D34, indicating lower lipid metabolism in females at that age. At D34, higher anti-oxidative activity (uric acid, TAS and FRAP) and higher haptoglobin-like activity were observed in females than in males. In both sexes, the negative postnatal experience resulted in an immediate decrease in blood glucose (typically around 2 g/L in control broiler chickens, as commonly observed in this species 21 ) and triglycerides, associated with an increase in blood and liver lipid peroxidation (TBARS). This indicates that chicks exposed to these postnatal conditions adapted metabolically and suggests that they used the nutrients present in the internalized yolk sac 22 . Such a decrease in glycaemia was not observed in a previous study where chicks were subjected to early post-hatch delayed feeding 23 , suggesting that in our study this decrease could be the result of the combined effects of delayed feeding, variations in ambient temperature and transport-box shaking, which may increase chick activity within the box. It could also be linked to a decrease in insulinemia, as reported by Bigot et al. 23 , which was not measured in the present experiment. This adaptive metabolic profile was present in the fecal metabolome of chicks for at least 13 days after the delayed placement, suggesting that the adaptive changes in metabolism due to the postnatal experience persisted 12 , but no such effect was observed on blood physiological parameters at D34 in the present study. An in-depth view of the long-lasting effects of the negative postnatal experience on the physiology of the chickens was obtained by analyzing gene expression profiles of whole blood at D34 using microarray and qPCR. Very strong sex differences in the response to early external stimuli were observed at D34 in the blood gene expression profiles: overall, sex differences explained 28.7% of the variability in gene expression, and in the microarray analysis none of the differentially expressed genes were shared between males and females. Other studies have reported sex differences in chickens regarding the stress response, including early stress, on behaviour, sex hormones and hypothalamic gene expression 15 , 24 , 25 . The results of the GO analysis were limited by the low number of genes differentially expressed between groups.
Nevertheless, it is remarkable that the terms 'response to external stimuli', 'response to stress' and 'defense response' were enriched in Delayed males. The oPOSSUM analysis provided more precise data on the pathways involved. In particular, the negative postnatal experience of chicks induced changes in the expression of genes belonging to the oxidative stress pathway (as represented by the transcription factors NFE2L2 and MEF2A) in males and, to a much lower extent, in females. Using qPCR, seven gene targets of these two transcription factors were confirmed to be differentially regulated (BRD9, PTPLB, C1QTNF7, SLC16A10, ANKH, HUS1 and SOST) in Delayed males, and three genes involved in the oxidative stress response (XDH, SOD3 and PIT54) were also differentially regulated in Delayed females. MEF2A is also known to influence muscle development and has been described in the chicken 26 . Thus, the MEF2A pathway may be activated to counter the decrease in P. major muscle yield observed in the Delayed chick group 10 . Besides oxidative stress, a number of the differentially regulated genes in males have interesting properties, such as the up-regulated genes related to bone metabolism: ANKH has a role in osteogenic differentiation of bone marrow stromal cells and is a vascular calcification inhibitor 27 , 28 , and SOST is a negative regulator of bone growth 29 , 30 . Moreover, the transcription factor TEAD1, over-represented in Delayed males, supports an effect of delayed placement on bone metabolism through its regulation of organ size and skeletal muscle mass 31 . In line with this, the SLC16A10 gene, over-represented in Delayed males, belongs to a family of transporters involved in energy metabolism, including thyroid hormone metabolism, and possibly in skeletal development 32 . In females, the PLAG1 pathway was the only pathway over-represented following the negative postnatal experience. PLAG1 is a regulator of growth and reproduction 33 and is associated with variability in egg production in laying hens 34 . Through qPCR, three gene targets of PLAG1 (MEX3A, TMEM215, ZSWIM1) were shown to be up-regulated in the female Delayed group. This is interesting in view of the reduced growth observed in the chicks of the Delayed group. Some other significant pathways were under-represented following the negative postnatal experience and involved gene targets of NFIL3, Foxd3 and ESR2. NFIL3 is involved in the circadian rhythm in the chick's pineal gland and is essential as a regulator of the immune response 35 . ESR2 plays a central role in folliculogenesis and therefore in reproduction 36 . Foxd3 is important in embryogenesis, being involved in regulating the lineage choice of neural crest-derived glial cells 37 , and in immunity, by regulating IL-10-positive B cells 38 . Foxd3, under-represented in Delayed females, was in contrast over-represented in Delayed males. This may be related to the higher immune resistance of female compared to male chickens, as reported in a large study on mortality following infection 39 . Remarkably, the spontaneous intake of these EO by chicks for 12 days after hatching had long-lasting beneficial effects on the expression of several of the genes modified by the negative postnatal experience. The expression of the SOST (in both males and females), SLC16A10 (only in males) and RARRES2 (only in females) genes was restored to the level observed in the Control group when chickens had access to EO.
Interestingly, RARRES2 is an adipokine that plays a major role in the regulation of metabolic and reproductive processes, recently studied in the chicken 40 , in line with the list of PLAG1 gene targets; it is also a chemokine for leucocyte populations and participates in antimicrobial activity. In our study, the beneficial role of EO intake after the negative postnatal experience could be to regulate growth and bone metabolism in male and female chickens, as well as energy metabolism and reproduction in females. Surprisingly, EO intake by control chickens modified the expression of 17 genes, 16 in males and one in females. Most of them are involved in oxidative stress, immune regulation and microtubule network integrity. The majority of the genes studied and up-regulated in the Control group were different from those up-regulated in the Delayed group, suggesting that the reasons why chicks ingested EO differed between these two placement conditions. To our knowledge, this is the first study in chickens to show long-lasting effects of spontaneous intake of EO on gene expression, in particular the modification of blood expression patterns affected by the chick’s post-hatch experience. Further studies are needed to improve the understanding of these effects on the blood transcriptome, on other tissues such as the liver and gut, and on the gut microbiota, which could be considerably modified by the antimicrobial properties of EO. The self-medication behaviour of domestic animals warrants further investigation, especially regarding infectious challenges. It could be greatly beneficial for animals like chickens, raised in large groups, to be allowed to manage their health and welfare individually through free access to EO, hence limiting the use of antibiotics in the context of the One Health concept. Methods Essential oils (EO) Cardamom ( Elettaria cardamomum ) (1480CQ, batch S12A, Herbes et Traditions, Comines, France), marjoram ( Origanum majorana ) CT thujanol (2507CQ, batch S12D, Herbes et Traditions) and lemon verbena ( Lippia citriodora ) (FLE094, batch H181013MA, Florihana, Caussols, France) were chosen based on the properties of their major components and on our previous study, which showed spontaneous intake and long-term effects after a negative postnatal experience in chicks 10 . Each EO was diluted in water (0.001%), mixed and shaken vigorously before being made available in a drinking bottle. The main components of each EO, obtained by gas chromatography coupled to mass spectrometry, are listed in Table 3 . Table 3 Essential oil composition (from 10 ). Full size table Experimental design A model of postnatal negative experience was previously developed to analyze the consequences of this experience over the whole growing period of broiler chickens 10 . This model involved reproducing suboptimal transportation conditions for chicks after hatching. The chicks (Hubbard Classic, Quintin, France) were either placed immediately in pens in the rearing facility after their removal from the incubator (Control group, C, n = 192) or were removed and subjected to 24 h of negative experience before being placed in pens (Delayed group, D, n = 192). This delayed group was deprived of both food and water and subjected to irregular movements and varying room temperatures 10 . Half of the chicks (six pens each for the C and D groups) had access to water only, in four bottles (W-C and W-D groups).
Besides feed and water supplies, the other half of the chicks had ad libitum access to the three EO (EO-C and EO-D groups) from D1 until D12 post-hatching. One bottle containing water, and the three others each containing one of the EO, were placed in each pen. Chicks were reared at the Experimental Poultry Facility (PEAT, INRAE, 2018, 37380 Nouzilly, France) under standard temperature and light conditions, with ad libitum access to water and with a wire mesh platform and a perch for environmental enrichment. At D13, the chickens were transferred to another poultry building for the growth phase until D34. They had ad libitum access to feed without anticoccidial drugs. They were fed a standard starting diet (metabolizable energy = 12.8 MJ/kg, crude protein = 22%) until D19 and then a rearing diet from D19 to D34. Physiological analyses were performed at D1 (first experiment) and D34 post-hatching (second experiment) on plasma and liver samples from 12 chickens (6 males and 6 females) per condition (2 conditions × 2 experiments × 12 chickens = 48 chickens). Transcriptomic analysis was performed at D34 on blood samples from the same chickens as those sampled for physiological parameters. All procedures used in these experiments were approved by the local ethics committee (Comité d’Ethique en Expérimentation Animale Val de Loire, Tours, France; permissions no 01730.02 and 2015070815347034v2 (APAFIS#1082)) and carried out in accordance with current European legislation (EU Directive 2010/63/EU). Physiological parameters Metabolic, antioxidant/oxidative status and inflammation parameters were measured at D1 and D34 after hatching. Commercial kits (THERMO FISHER DIAGNOSTICS SAS, Courtaboeuf, France) were used to determine plasma glucose (g/L) (MG981780), uric acid (mg/L) (MG981788) and triglycerides (mg/L) (MG981786). Total plasma antioxidant activity was determined as the total antioxidant status (TAS) (mmol/L) (NX 2332, RANDOX LABORATOIRES, Roissy, France). Protocols were used in accordance with supplier instructions and adapted to the automated Thermo Scientific Arena 20XT photometric analyzer (THERMO FISHER DIAGNOSTICS). The ferric reducing/antioxidant power (FRAP) was determined as described by Benzie and Strain 41 and the results expressed as µmol Trolox/L. Plasma superoxide dismutase (SOD) activity was measured with a commercial kit (19160, Sigma-Aldrich Chemie GmbH, Buchs, Switzerland), using a microplate reader (TECAN infinite 200, Tecan Group Ltd, Männedorf, Switzerland), and expressed as an inhibition activity (%). Haptoglobin-like activity, described as PIT54 in chicken 42 , was evaluated in plasma using a colorimetric assay (TP-801, Tridelta Development Ltd, Maynooth, Ireland) measuring the inhibition of hemoglobin peroxidase activity, and expressed as mg/mL. Lipid peroxidation was determined by spectrophotometry of thiobarbituric acid reacting substances (TBARS), as previously described by Lynch and Frei for the liver 43 and adapted from Lin et al. 44 for plasma TBARS. RNA extraction and gene expression analyses RNA extraction Blood was collected from chickens at D34 using 5 ml EDTA vacutainer tubes. Blood (v = 100 µL) was immediately suspended in 1 mL of Invitrogen TRIzol reagent (FISHER SCIENTIFIC, Illkirch-Graffenstaden, France) and vigorously shaken for 5 min on ice. The samples were kept at −80 °C until RNA extraction. Total RNA was extracted using TRIzol (FISHER SCIENTIFIC) following the manufacturer’s protocol, modified in accordance with Desert et al. 45 .
Ten µl of 5 N acetic acid were added with the chloroform to reduce DNA contamination. The quality of total RNA was assessed using RNA Nano chips on a 2100 Bioanalyzer Instrument (Agilent, Waldbronn, Germany). All the samples had RNA Integrity Number (RIN) scores > 8.0. Microarrays Gene expression profiling was performed at the GeT‐TRiX facility (GenoToul, Génopole Toulouse, Toulouse, Midi-Pyrénées) using Agilent SurePrint G3 gallus_exp_microarray_SLagarrigue_8 × 60k_V2_july2012 microarrays (8 × 60 K, design 042004) following the manufacturer's instructions. For each sample, Cyanine-3 (Cy3)-labeled cRNA was prepared from 200 ng of total RNA using the One-Color Quick Amp Labeling kit (Agilent) in line with the manufacturer's instructions, followed by RNA clean-up using Agencourt RNAClean XP (Agencourt Bioscience Corporation, Beverly, Massachusetts). Dye incorporation and cRNA yield were checked using a Dropsense 96 UV/VIS droplet reader (Trinean, Ghent, Belgium). Aliquots of 600 ng of Cy3-labelled cRNA were hybridized on the microarray slides following the manufacturer’s instructions. Immediately after washing, the slides were scanned on an Agilent G2505C Microarray Scanner using Agilent Scan Control A.8.5.1 software, and the fluorescence signal was extracted using Agilent Feature Extraction software v10.10.1.1 with default parameters. Microarray data and the experimental details are available in the NCBI’s Gene Expression Omnibus 46 and are accessible through GEO Series accession number GSE102358. Real-time quantitative PCR (qPCR) To avoid genomic DNA amplification, primer pairs were designed in two different exons (thus spanning an intron) using the Primer Express software (Applied Biosystems, THERMO FISHER DIAGNOSTICS). The sequences of the primers used are provided in Table S3 . The specificity of the PCR reaction was validated according to MIQE (Minimum Information for publication of Quantitative real time PCR Experiments) guidelines 47 . An aliquot of 2,000 ng of total RNA was reverse-transcribed into cDNA with Superscript III (Invitrogen, THERMO FISHER DIAGNOSTICS) and random hexamers in accordance with the manufacturer’s protocol. High-throughput real-time quantitative PCR was performed on the Biomark HD System (Fluidigm, Les Ulis, France) following the manufacturer’s protocol. The chip was placed into the IFC Controller, where 6.3 nl of Sample Mix and 0.7 nl of Assay Mix were mixed. Real-time PCR was performed on the Biomark System as follows: Thermal Mix (50 °C, 2 min; 70 °C, 30 min; 25 °C, 10 min), UNG (50 °C, 2 min), Hot Start (95 °C, 10 min), 35 PCR cycles (95 °C, 15 s; 60 °C, 60 s) and melting curves (from 60 °C to 95 °C). Results were analyzed using Fluidigm Real-Time PCR Analysis software v.4.1.3 (Fluidigm) to control the specific amplification for each primer; the raw qPCR results were then analyzed using GenEx software (MultiD Analyses AB, Göteborg, Sweden) in order to choose the best reference gene with which to normalize mRNA expression and to measure the relative expression of each gene between groups. HPRT was found to be the best reference gene in this experiment and was thus used for the normalization of gene expression.
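For concreteness, the reference-gene normalization described above can be expressed as the standard 2^-ΔΔCq (Livak) calculation. The short Python sketch below is an illustrative reimplementation, not the GenEx workflow actually used; the Cq values and the choice of calibrator group are hypothetical.

```python
import numpy as np

def relative_expression(cq_target, cq_hprt, cq_target_cal, cq_hprt_cal):
    """Livak 2^-ddCq: expression of a target gene normalized to the HPRT
    reference gene and expressed relative to a calibrator group."""
    d_cq_sample = cq_target - cq_hprt                # normalize to HPRT in the sample
    d_cq_calibrator = cq_target_cal - cq_hprt_cal    # normalize in the calibrator
    return 2.0 ** (-(d_cq_sample - d_cq_calibrator))

# hypothetical mean Cq values for one gene, Delayed group vs Control calibrator
fold = relative_expression(cq_target=24.1, cq_hprt=21.3,
                           cq_target_cal=25.0, cq_hprt_cal=21.4)
print(f"relative expression vs Control: {fold:.2f}")   # ~1.74, i.e. up-regulated
```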
Statistical analyses The effects of a negative postnatal experience, EO intake and their interaction on the physiological responses and qPCR results were determined using two-way analysis of variance (ANOVA), after checking the normality of the data distribution. When there was an interaction between variables, a Fisher (LSD) test was used to determine the statistical significance of the differences. Differences were considered significant at p < 0.05 and a tendency at p < 0.10. Analyses were performed using XLSTAT software (version 2015, Addinsoft, Paris, France). Microarray data were analyzed using R 48 and Bioconductor packages (v 3.0) 49 as described in GEO accession GSE102358. Briefly, raw data (median of pixel intensity) were filtered, log2-transformed, corrected for batch effects (washing and labeling series) and normalized using the quantile method 50 . Principal component analysis (PCA) of the normalized microarray data was used to create a factor map of individual global expression using the R package FactoMineR 51 . A model was fitted using the limma lmFit function 52 , considering sex as a blocking factor. A correction for multiple testing was then applied using the Benjamini–Hochberg (BH) procedure 53 to control the false discovery rate (FDR). Probes with FDR ≤ 0.05 were considered to be differentially expressed between conditions. For the bioinformatic analyses, we selected probes that displayed a non-adjusted p-value < 0.005, as too few probes were found with an FDR ≤ 0.05. Differentially expressed genes, screened as those showing a ≥ 20% difference in mean expression levels between Delayed and Control samples and a non-adjusted p-value < 0.005, were subjected to GO classification [GO-BP (biological process)] to assign them to relevant GO terms using the WebGestalt website 54 . The oPOSSUM website was used to determine the over-representation of transcription factor binding sites within the differentially expressed genes 55 .
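As a minimal sketch of the two screening steps just described, the following Python code applies a Benjamini–Hochberg adjustment and one plausible reading of the ≥ 20% difference/p < 0.005 filter; it assumes per-probe p-values and linear-scale group means are already available from the limma fit, which is not reproduced here.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """BH step-up FDR adjustment; returns q-values in the input order."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    q = np.empty(n)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

def screen_degs(mean_delayed, mean_control, pvals):
    """Screen used for the bioinformatic analyses: >= 20% difference in mean
    expression (interpreted on the linear scale) and non-adjusted p < 0.005."""
    ratio = np.asarray(mean_delayed) / np.asarray(mean_control)
    return (np.abs(ratio - 1.0) >= 0.20) & (np.asarray(pvals) < 0.005)
```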
| In poultry production systems, chicks after hatching may be exposed to different stress factors such as being moved without food and drink to rearing houses or changes in temperature. These stressful early-life experiences have long-term effects, notably in terms of slowing their growth and affecting their welfare and health, which often requires the use of antibiotics. The determination of rearing practices that will preserve animal health and welfare is crucial to limiting the use of medicinal products. Essential oils are produced by plants to defend themselves from pathogens. They are known to have numerous medicinal properties, such as antimicrobial or anti-inflammatory activities or an ability to regulate the immune system. During this study, the scientists analyzed the long-term effects of stressful postnatal conditions affecting chicks and the impact of ingesting essential oils. Freely available essential oils The study involved two groups of 192 chicks: the first group did not experience any stressful conditions in the hatchery while the second was subjected for 24 hours to negative experiences such as being deprived of food and water, temperature changes or movements simulating transport to a rearing house. Each group was divided into two subgroups: one subgroup had free access to four bottles that only contained water, and the second subgroup had access to four bottles: one contained water only, while the three others each contained water mixed with one of the essential oils (verbena, cardamom or marjoram). All groups were studied for twelve days. The chicks with free access to the essential oils could therefore choose to consume them, or not. The first observation, which confirmed the previous study, was that chicks that had experienced stressful postnatal conditions chose spontaneously to drink water mixed with an essential oil (Guilloteau et al., 2019). Chicks that did not experience stress after hatching also consumed the water containing an essential oil, but the quantities differed from those seen in the group that had been stressed. The long-term effects of postnatal stress on gene expression and the beneficial effects of essential oils The regulation of gene expression is a mechanism that is fundamental to the functioning of an organism and affects numerous functions such as bone and muscle development, the immune response and metabolism. Through a global analysis of gene expression in the blood of chicks 34 days after hatching, the scientists observed that those which had been subjected to post-hatching stress displayed long-term changes to the expression of certain genes; these changes were sex dependent. Males were affected more markedly, with changes to the expression of genes involved in the cellular response to oxidative stress (essential to the internal equilibrium of cells), energy metabolism and bone metabolism, which influences their growth. Females were less affected by post-hatching stress than males, but they nevertheless displayed modifications to the expression of genes involved in growth and reproduction. Among the groups with access to bottles containing an essential oil, consumption from these bottles enabled the long-term attenuation or even suppression of some of the gene expression changes induced by post-hatching stress, and regulated the expression of other genes involved in the same functions. In the group that did not experience stressful conditions, the scientists observed that the consumption of essential oils also induced long-term changes to gene expression, most of them differing from those induced by the stressful conditions. This finding requires further study to understand the effects of these modifications. This work has therefore demonstrated that chicks are able to spontaneously consume a product that benefits their health and welfare if it is made available to them. Essential oils may contribute to attenuating the effects of post-hatching stress and could thus participate in reducing the use of antibiotics in poultry units. These findings thus open new perspectives for more sustainable livestock management practices that offer animals an opportunity to care for themselves. | 10.1038/s41598-020-77732-5
Nano | Researchers measure the electrical charge of nano particles | Mojarad, N, and Krishnan, M., Measuring the size and charge of single nanoscale objects in solution using an electrostatic fluidic trap. Nature Nanotechnology (2012) doi:10.1038/nnano.2012.99 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/nnano.2012.99 | https://phys.org/news/2012-07-electrical-nano-particles.html | Abstract Measuring the size and charge of objects suspended in solution, such as dispersions of colloids or macromolecules, is a significant challenge. Measurements based on light scattering are inherently biased to larger entities, such as aggregates in the sample 1 , because the intensity of light scattered by a small object scales as the sixth power of its size. Techniques that rely on the collective migration of species in response to external fields (electric or hydrodynamic, for example) are beset with difficulties including low accuracy and dispersion-limited resolution 2 , 3 , 4 . Here, we show that the size and charge of single nanoscale objects can be directly measured with high throughput by analysing their thermal motion in an array of electrostatic traps 5 . The approach, which is analogous to Millikan's oil drop experiment, could in future be used to detect molecular binding events 6 with high sensitivity or carry out dynamic single-charge resolved measurements at the solid/liquid interface. Main Single charged nanoscale objects in a fluid can be trapped at high densities in a landscape of electrostatic traps on a chip 5 . Objects in suspension introduced onto the chip sample the landscape by Brownian motion and fall into local potential wells, remaining trapped in them for a time period that scales as exp(Δ F / k B T ), where the Helmholtz system free energy F is a function of the spatial location of the particle, k B T is the thermal energy at temperature T , and k B is the Boltzmann constant. Here, we demonstrate that analysing the thermal motion of each trapped object enables a direct measurement of its size and charge. The viscous drag on the object yields a measure of its size, and the stiffness of its confinement can be compared with a free energy calculation to reveal the total charge it carries. Indeed, measuring the net charge of discrete objects in an ensemble by direct observation of their individual motion in an electrostatic landscape is reminiscent of Millikan's oil drop experiment 7 . A hundred years later, our nanoscale thermodynamic equilibrium version of the experiment exploits the statistical properties of the trapped object's motion, rather than its ballistic behaviour in an applied electric field, and gravity no longer plays a role. Gold particles (diameter, 80 nm) serve as test objects in the current study. These particles demonstrate a signal-to-noise ratio (SNR) of ∼ 100 that is suitable for particle tracking with a localization precision of ∼ 2 nm. They also carry a substantial amount of charge, of the order of −100 e , making them amenable to trapping for long periods of time (several minutes to hours) and therefore convenient to study. We used high-speed interferometric scattering detection (iSCAT) to image the three-dimensional motion of individual particles trapped in harmonic potential wells created by pockets (diameter D = 200 nm) in a fluidic slit of depth 2 h = 215 nm, using a laser scanning microscope set-up 5 ( Fig. 1 ). 
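Both the exp(ΔF/kBT) residence-time scaling and the position statistics analysed below follow from equilibrium Boltzmann statistics. The Python sketch below illustrates this with an assumed diffusive attempt time, together with the radial Boltzmann inversion F(r)/kBT = -ln P(r) that is used later in the text; the attempt time and binning are illustrative assumptions, not measured values.

```python
import numpy as np

# Residence time in a well of depth dF scales as t ~ t0 * exp(dF / kBT);
# t0 is an assumed diffusive attempt time of ~100 us (of the order of the
# relaxation times reported below), used purely for illustration.
t0 = 100e-6  # s
for dF in (5, 10, 15):
    print(f"dF = {dF:2d} kBT -> residence time ~ {t0 * np.exp(dF):.2g} s")

def radial_potential(x, y, nbins=40, rmax=80.0):
    """Boltzmann inversion of tracked positions (nm) into a radial free
    energy profile, F(r)/kBT = -ln P(r), with the 2*pi*r annulus areas
    divided out of the histogram."""
    r = np.hypot(x - x.mean(), y - y.mean())       # trap centre ~ mean position
    counts, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
    centers = 0.5 * (edges[1:] + edges[:-1])
    area = 2.0 * np.pi * centers * np.diff(edges)
    p = counts / (counts.sum() * area)             # probability density per unit area
    ok = counts > 0
    return centers[ok], -np.log(p[ok] / p[ok].max())   # in kBT, minimum at zero
```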
Although in principle there are no restrictions on the morphology of the trap for the methodology we propose, a harmonic potential facilitates measurements by permitting certain experimental simplifications, discussed later. Figure 1: Experimental set-up and device geometry. a , Set-up of the laser scanning microscope used for iSCAT imaging. b , Schematic representation of an electrostatic fluidic trap, showing a cross-sectional view of a single trapping nanostructure or ‘pocket’. The location of an 80-nm-diameter trapped bead is represented in Cartesian coordinates ( x – z plane) and the various geometric parameters are as follows: channel depth, 2 h ; pocket depth, d ; pocket diameter, D ; SiO 2 thickness, l . c , Scatter plot of positions in the x – z plane of a trapped particle, superimposed on the calculated two-dimensional electrostatic potential distribution in a nanostructure where D = 200 nm and 2 h = 215 nm. The electrostatic potential is presented on a colour scale going from high (black) to low (yellow) energy for a unit negative charge. For emphasis, only the bottom of the potential well is shown. Full size image A detailed examination of the three-dimensional motion of single trapped particles is presented in Fig. 2 . The red data symbols in Fig. 2 b are temporal contrast data—a measure of the axial location of the particle—for five representative particles. The data reveal two interesting features: a variation in the mean contrast level as well as the range of contrast from one particle to another. In fact, the amount of light scattered by an object can itself serve as a sensitive measure of its size 8 . The instantaneous contrasts were corrected for particle size variability and converted to absolute axial displacements to obtain full three-dimensional information on particle motion (blue data, Fig. 2 b,c; see Supplementary Information for details). The variation in the range of axial motion from one particle to the next still persists, however ( Fig. 2 b,c), and warrants further scrutiny. The spatial sampling of an electrostatic potential well by a particle strongly depends on the charge it carries: the higher the charge on the particle, the greater the expected stiffness of its confinement, which manifests in the experiment as a smaller root mean square (r.m.s.) spatial displacement. Importantly, any increase or decrease in stiffness arising from particle charge would be expected to appear in all spatial dimensions. A scatter plot of the r.m.s. displacement of each particle in the axial versus radial dimension ( Fig. 2 d) convincingly demonstrates just this correlation, and presents a simple and rapid route to measure the relative charge dispersion in a sample at the single object level. Figure 2: Analysis of the three-dimensional motion of particles trapped in a harmonic potential well. a , Three-dimensional scatter plot of positions for a representative particle and corresponding histograms of position in r and z . P ( r ) is defined according to ∫ P ( r )2π r d r = 1, and the distribution rescaled such that P max ( r ) = 1. b , Instantaneous contrast values: I (red symbols, top) and corresponding axial positions, z (blue symbols, bottom) for five representative particles. Each data set consists of 1,000 points acquired over ∼ 1 s for a given particle. c , Mean contrast (red) and corresponding mean axial position (blue) for 18 different particles. Error bars represent the r.m.s. values of a given data set. d , Plot (r.m.s. 
values) of axial S z versus radial S r displacement for each of the 18 particles presented in c . Numerals adjacent to each data symbol represent particle serial numbers. A linear fit to the data (green), excluding particles 2 and 3, highlights the correlation between the two quantities. S z < S r confirms the higher trap stiffness in the axial dimension compared with the radial dimension. The solution ionic strength in these measurements was 0.04 mM. Full size image To measure the net charge carried by any given particle, we need to take the further step of measuring the shape of the confining potential and comparing this with calculations. The measured radial probability density distribution of particle displacement P ( r ) yields a spatially dependent potential via the Boltzmann relation, F ( r )/ k B T = −ln P ( r ). If F ( r ) is known from the experiment, the charge of the particle can be directly obtained by comparison with a free energy calculation. We used COMSOL Multiphysics to calculate the spatial distribution of the electrostatic potential in the trapping nanostructure by numerically solving the nonlinear Poisson–Boltzmann equation 5 in three dimensions. As shown in Fig. 3 a, the system consists of a sphere of fixed surface charge density embedded in an electrolyte, which in turn is bounded by surfaces of a given charged density representing the walls of the trapping nanostructure. Supplementary Movie S1 presents the electrostatic potential distribution in the trapping nanostructure as the spatial location of the particle is scanned radially across the trap. The inputs to the calculation are the wall charge density, the solution ionic strength and the size and surface charge density of the object. The background ionic strength of the electrolyte and an estimate of the wall charge density can be obtained from conductivity and electro-osmotic flow measurements, respectively. Thus, for a particle of a given diameter, its charge remains the only free parameter in the calculation. The free energy of the system as a function of particle position was calculated by summing the electrostatic field energies and entropies over all charges in the system 9 , 10 , 11 . Figure 3 b presents a series of radial free energy curves as a function of particle charge q for the conditions of the experiment in Fig. 2 ; each of these curves yields a value for the spring constant of confinement, k . The relationship between k and q ( Fig. 3 c) thus enables a direct readout of the charge of a particle once its relaxation time (spring constant) is measured in the experiment. Furthermore, the linearity of the relation for q < −100 e and the low uncertainty in the single fit coefficient ( ∼ 0.5%) implies that if the relaxation time of the particle is measured with a comparable accuracy, the measurements could be very close to single-charge resolved ( Fig. 3 c, inset). This raises prospects for carrying out fundamental studies on (dis)charging processes on matter in solution as well as ultrasensitive single-nanoparticle-based molecular binding sensors. Figure 3: Dependence of trapping free energy on particle charge. a , Electrostatic potential distribution in a three-dimensional cylindrical half-space around a particle (diameter, 80 nm) in a trapping nanostructure ( D = 200 nm and 2 h = 215 nm). The displayed range is truncated at a potential of ψ = −2.0 k B T / e . Supplementary Movie S1 demonstrates the change in the spatial potential distribution as a function of particle location. 
b , Radial free energy profiles calculated along the contour of the axial energy minimum for particle charges q = −40 e (grey), −88 e (red), −133 e (blue), −177 e (black), −221 e (yellow), −398 e (green) and −1,105 e (brown), for a solution ionic strength of 0.04 mM. The free energy curves were fit to a function F ( r ) = (1/2) kr 2 for r < 50 nm to obtain k in each case. c , Variation of the spring constant of confinement k with particle charge q . The black curve displays a fit of the data to a polynomial function of the form k = 9.6 × 10 −4 + (1.228 × 10 −4 q ) − (1.2752 × 10 −7 q 2 ) + (4.8399 × 10 −11 q 3 ) pN nm −1 . Inset: linear relationship between k and q for q < −100 e , given by k = 1.07 × 10 −3 + (1.1 × 10 −4 ± 4.98 × 10 −7 ) q (black line). d , Charge deduced for each particle in Fig. 2 (black symbols) based on the k versus q relationship in c (red line). These values are presented in Supplementary Table S1 . Full size image Provided that snapshots of the particle can be acquired with a high SNR and small exposure times, optical imaging is an excellent calibration-free method for direct mapping of potential landscapes of arbitrary shape and large range, and offers distinct advantages in high-throughput analysis of a dense array of trapped objects 12 . Direct measurement of F ( r ), however, requires high-SNR imaging with an exposure time much smaller than the relaxation time of the particle, which can be challenging. In a harmonic confining potential, F ( x ) = (1/2) kx 2 , the spring constant of confinement k and the relaxation time τ of the particle are related via the drag coefficient γ as τ = γ / k (equation (1)) 13 , where γ = 3π η a for a sphere of diameter a in a solution of viscosity η . A spring constant of k = 7.5 × 10 −3 pN nm −1 , for example, easily achieved for the particles under consideration, corresponds to a relaxation time of ∼ 100 µs, assuming η = 1 × 10 −3 kg m −1 s −1 , the viscosity of water in free solution. Focusing on the statistical properties of the object's motion in a harmonic well, however, frees the methodology from the need for a direct measurement of F ( r ). This principle is illustrated in Fig. 4 a, which presents a detailed examination of the charge measurement on a representative particle. We imaged the motion of particle (i) using different exposure times σ > τ , and evaluated the corresponding mean squared displacements (MSD), 〈[Δ x ( t , σ )] 2 〉, as a function of lag time t ( Fig. 4 , orange and blue series). Rather than continuously increasing as a function of t , the MSD of a particle in a harmonic trapping potential eventually saturates at 〈[Δ x ] p 2 〉 = 2 k B T / k for times t >> τ (refs 14 , 15 ). Furthermore, this plateau value of the MSD measured with an exposure time σ , 〈[Δ x ( σ )] p 2 〉, approaches the true value asymptotically as a function of the σ / τ ratio (equation (2)) 16 . Figure 4: MSD measurements and deducing the charge on a single particle. a , Red: MSD measurements in x , 〈[Δ x ( t , σ )] 2 〉, as a function of lag time t for a particle trapped by a D = 500 nm pocket imaged with an exposure time of σ = 1 ms (red). A linear fit to the D = 500 nm data for t < 3 ms gives a diffusion coefficient of 1.48 µm 2 s −1 and a static particle localization accuracy of 2.7 nm. Orange/blue: MSDs in x for particle (i) trapped by a D = 200 nm pocket and imaged using exposure times of σ = 200 µs (orange) and 1 ms (blue). The corresponding scatter-plot data are presented in Supplementary Fig. S3 .
The obtained fit values of 〈[Δ x ( σ )] p 2 〉 for the D = 200 nm series are 465 nm 2 (orange data) and 184 nm 2 (blue data), respectively. b , Local electrostatic potential F ( r ) for particle (i), derived from the P ( r ) data shown in Supplementary Fig. S3 , for σ = 200 µs (orange) and 1 ms (blue). The black curve represents a harmonic potential of the form F ( r ) = (1/2) kr 2 , with k = 8.8 × 10 −3 pN nm −1 inferred from the experimental MSD data. The square symbols represent the calculated free energy for a particle of charge −62 e . c , Log–log plot of the experimentally inferred potential and calculated free energy shown in b . Inset: zoomed view of the experimental potential (black line) and free energy calculations (symbols) for particles of charge q = −54 e (red), −62 e (black) and −68 e (green). Full size image Using the value of 〈[Δ x ( σ )] p 2 〉 from the MSD measurements and equation (1) gives the relaxation time and hence the spring constant of confinement for the particle under consideration. We obtain a relaxation time of τ = 85 ± 5 µs for particle (i). Note that the same result may be obtained from an analysis that uses the variance S x 2 of P ( x ), the measured probability distribution in x (or y ), and the true variance k B T / k , in place of 〈[Δ x ( σ )] p 2 〉 and 〈[Δ x ] p 2 〉, respectively 17 . A value of τ = 85 µs corresponds to a spring constant of confinement k of 8.8 × 10 −3 pN nm −1 , depicted by the black lines in Fig. 4 b,c. Having experimentally deduced the true spring constant of confinement, we compare the measurement with the calculation to obtain the charge of the particle. The black squares in Fig. 4 b,c represent the free energy as a function of radial distance r from the trap centre, calculated for a particle (diameter, 80 nm) carrying a total surface charge of −62 e (wall charge density and background electrolyte concentration are −0.01 e nm −2 and 0.03 mM, respectively). The uncertainty in the measured relaxation time implies that the charge on a single particle can be determined to within ±10% ( Fig. 4 c, inset). The precision in the relaxation time can be enhanced using measurements at additional exposure times, albeit at the possible expense of time resolution in the overall charge measurement. Furthermore, the charge on the particle deduced from its lateral motion can be independently confirmed by a similar analysis of its motion in the axial dimension. The variance S z 2 of the P ( z ) distribution acquired using σ = 1 ms for the same particle implies a relaxation time of τ z = 18 µs for axial confinement (blue data series, Supplementary Fig. S4 ). This corresponds to a spring constant of k z = 4.42 × 10 −2 pN nm −1 , with the experiment thus suggesting that the particle is trapped about five times more stiffly in the axial dimension than the radial. Indeed, the experimentally deduced value compares favourably with the calculated axial stiffness k z = 5.56 × 10 −2 pN nm −1 for a particle of charge −62 e ( Supplementary Fig. S4 ). Experimental relaxation times and deduced net charges for four different particles are presented in Table 1 . Supplementary Table S1 lists the charge on single particles inferred from a single measurement of P ( r ) (Figs 2 and 3d), and compares these values with those from ensemble zeta potential measurements. Table 1 Representative single-particle measurements. Relaxation times and net charges for four different particles measured as described in Fig. 4 . 
The solution ionic strength was 0.03 mM for particles (i)–(iii) and 0.04 mM for particle (iv). Full size table We conclude with a final point on size measurements on single trapped particles. One way to do this is through an analysis of the MSD of the trapped particle. The measured diffusivity ( k B T / γ ) of the particle yields its hydrodynamic diameter, which could serve as an input to the free energy calculation. An alternative route is to leave the quantity 〈[Δ x ] p 2 〉 (= 2 k B T τ / γ ) in equation (2) as a free parameter and obtain both γ and τ from the fit to measurements at multiple exposure times. The red data series in Fig. 4 a shows MSD data for a particle trapped in a well created by a D = 500 nm pocket, which can be approximated roughly by a square well 5 . Fitting the linear portion of the data 18 , we obtain a translational diffusivity of 1.62 ± 0.17 µm 2 s −1 averaged over 10 different cases. This is a factor of 2 smaller than the value expected after correcting for the proximity of the walls, and in reasonable quantitative agreement with reports of dramatically lower diffusion coefficients for particles confined in parallel-plate geometries in solutions of low ionic strength 19 , 20 . Accounting for the enhanced drag indicated by these measurements with a higher effective viscosity would raise the experimentally estimated value of k , and hence the charge deduced for particles (i)–(iii) and (iv), by 30% and 43%, respectively. Interestingly, the free energy calculations suggest that, for a given particle charge, the size of the particle starts to contribute more strongly to the shape of the trap at longer range, say r > 80 nm, than it does closer to the centre ( r < 50 nm). An accurate long-range spatial map of the potential could therefore also independently corroborate and aid in fine-tuning the particle diameter estimated by the methods described above. We thus show that a few seconds' worth of high spatiotemporal resolution imaging of electrostatically trapped objects could yield both size and charge information on thousands of individual entities trapped in parallel in high-density arrays. Existing electrokinetic methodologies to measure the charge of an object focus on determining the electrical potential (zeta potential) at the poorly defined shear plane. These measurements are often associated with large uncertainties and their interpretation is fraught with ambiguities 21 , 22 . Our equilibrium measurement directly addresses the surface of a single nano-object, raising prospects for measuring charge fluctuations in matter 23 , 24 , monitoring the progress of chemical reactions in real time 25 and fostering the elucidation of fundamental phenomena at the poorly understood solid/liquid interface. A drawback of the current realization is, however, that higher solution ionic strengths require smaller slit depths ( h ∼ 1/√ C ) for effective trapping, ultimately placing an upper limit on the size of object(s) to be studied. For example, at ∼ 30 mM ionic strength, optimal measurements on weakly charged matter are likely to be limited to small objects <10 nm in diameter. Furthermore, the time resolution in the overall charge measurement is set by the relaxation time of the trapped particle and the need to acquire sufficient statistics on its position. We estimate the theoretical time resolution in the current study to be ∼ 100 ms, but this should improve for smaller objects.
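The analysis chain discussed above, diffusivity from the linear MSD regime, relaxation time to spring constant via equation (1), and charge from the calculated k(q) relation of Fig. 3c, can be summarized in a few lines of Python. This is a schematic reconstruction rather than the authors' code: bulk-water viscosity is assumed (the enhanced confined drag discussed above would raise k and q), and q is treated as the charge magnitude in units of e.

```python
import numpy as np
from scipy.optimize import brentq

KBT = 4.1e-21   # J, room temperature
ETA = 1.0e-3    # Pa s, bulk water (assumption; confinement increases the drag)

def diameter_from_msd_slope(slope_um2_per_s):
    """Stokes-Einstein: in 1D, MSD = 2*D*t, so D is half the fitted slope.
    Returns the apparent hydrodynamic diameter in nm (inflated if the
    effective viscosity exceeds ETA, as the confined measurements suggest)."""
    D = 0.5 * slope_um2_per_s * 1e-12              # m^2 / s
    return KBT / (3.0 * np.pi * ETA * D) * 1e9

def spring_constant(tau_us, diameter_nm=80.0):
    """Equation (1): tau = gamma / k, with gamma = 3*pi*eta*a (a = diameter)."""
    gamma = 3.0 * np.pi * ETA * diameter_nm * 1e-9     # N s / m
    return gamma / (tau_us * 1e-6) * 1e3               # pN / nm

def charge_magnitude(k_pn_per_nm):
    """Numerically invert the cubic k(q) fit quoted for Fig. 3c (0.04 mM)."""
    f = lambda q: (9.6e-4 + 1.228e-4 * q - 1.2752e-7 * q**2
                   + 4.8399e-11 * q**3) - k_pn_per_nm
    return brentq(f, 1.0, 1000.0)                      # k(q) is monotonic here

k = spring_constant(tau_us=85.0)      # particle (i): tau = 85 +/- 5 us
print(f"k = {k:.2e} pN/nm -> |q| ~ {charge_magnitude(k):.0f} e")
```

With τ = 85 µs this returns k ≈ 8.9 × 10 −3 pN nm −1 , matching the value quoted for particle (i); the charge read off the 0.04 mM polynomial is then of the order of −70 e , close to the −62 e obtained from the full 0.03 mM calculation.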
Although we have chosen iSCAT for this proof-of-principle demonstration, a variety of other imaging techniques that deliver ∼ 10 nm spatial and sub-millisecond temporal resolution (such as wide-field 26 or confocal fluorescence 27 , or dark-field 28 microscopy) may be used to the same end. Given further advances in high-speed, high-sensitivity imaging technology, weakly scattering but labelled entities that are only transiently trapped (<1 s)—such as small and/or weakly charged matter, or biological macromolecules in solutions of higher ionic strength—could be studied with this technique. Progress in imaging based on scattering or absorption would go a long way towards fostering label-free measurements of this nature on nanoscopic entities 29 , 30 . Methods iSCAT imaging set-up As illustrated in Fig. 1 a, the Gaussian output beam of a 30 mW diode-pumped solid-state laser (TECGL-30, WSTech) at λ = 532 nm was expanded by a ×4 telescope lens system and passed through a half-wave plate for polarization adjustment, followed by a two-axis acousto-optic deflector (AOD; DTSXY, AA Opto-Electronic). The deflected beam was delivered to the back focal plane of the microscope objective (1.4 NA, ×100 UPLASAPO-Olympus), which was mounted on an inverted microscope equipped with a three-dimensional piezoelectric translation stage (NanoMax 300, Thorlabs). The fluidic device was positioned using the three-dimensional stage so that the scanned beam illuminated the area of interest. The scanning rates of the AODs were between 50 kHz and 100 kHz and were adjusted to achieve a uniform wide field of illumination for a given exposure time. Light scattered by the particle and reflected by the device was collected by the same microscope objective and imaged onto a CMOS camera (MV-D1024E-160-CL-12, Photonfocus). Sample preparation and device fabrication A single device consisted of several fluidic slits in parallel, each slit having a width of 20 µm and a depth of ∼ 200 nm. The slits were fabricated by lithographically patterning the surface of a ∼ 400-nm-thick SiO 2 layer on a p-type silicon substrate and subsequent wet-etching of the SiO 2 layer to a depth of ∼ 200 nm in buffered HF (ammonium fluoride–HF mixture, Sigma-Aldrich). The floors of these trenches were then patterned with submicrometre-scale features using electron-beam lithography and subsequent reactive ion etching of the SiO 2 to a depth of 100 nm. Fully functional fluidic slits were obtained by irreversibly bonding the processed SiO 2 /silicon substrates with glass substrates compatible with high-NA microscopy (PlanOptik AG) using field-assisted bonding. Gold nanospheres (80 nm in diameter; British Biocell International) were centrifuged and resuspended in deionized water (18 MΩ cm) twice to remove traces of salt or other contaminants. Nanoslits, loaded with an aqueous suspension of the nanometric object of interest (number density, ∼ 1 × 10 10 particles/ml for gold particles) by the capillary effect, were allowed to equilibrate at room temperature for 1–2 h before commencing with optical measurements. Surface charge and solution conductivity measurements Solution conductivities as well as particle zeta potentials were measured by phase analysis light scattering (PALS) using commercial instrumentation (Zetasizer Nano, Malvern Instruments).
The measured zeta potential ζ was used to obtain an estimate of the particle surface charge density σ p (in C m −2 ) using the semi-empirical equation σ p = − ɛ ɛ 0 κ ( k B T / je )[2sinh( jy /2) + (8/ κ a )tanh( jy /4)], where y = eζ / k B T is the dimensionless zeta potential, j = 1 is the valence of the counterions, and a is the diameter of the particle. The measured zeta potentials over nine different realizations were −20 ± 9 mV in a background electrolyte concentration of 0.03 ± 0.006 mM. Although the measurements show a large variation—between −9 and −37 mV (corresponding to −35 e to −146 e per particle)—they do provide a useful initial estimate of particle charge for the calculations.
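A numerical sketch of this estimate is given below in Python. It evaluates the quoted semi-empirical relation for a 1:1 electrolyte; the permittivity and temperature are assumed values, and the leading minus sign of the printed equation is omitted since, with y < 0, the bracketed term is already negative.

```python
import numpy as np

EPS0, EPS_R = 8.854e-12, 78.5           # F/m; relative permittivity of water (assumed)
KB, T, E = 1.381e-23, 295.0, 1.602e-19  # J/K, K (assumed), C

def kappa(c_mM):
    """Inverse Debye length (1/m) for a 1:1 electrolyte of concentration c."""
    n0 = c_mM * 1e-3 * 1e3 * 6.022e23   # ions per m^3
    return np.sqrt(2.0 * n0 * E**2 / (EPS_R * EPS0 * KB * T))

def surface_charge(zeta_mV, a_nm, c_mM, j=1):
    """Semi-empirical relation from the Methods, with y = e*zeta/kBT and
    a the particle diameter; returns sigma_p in C/m^2."""
    kap = kappa(c_mM)
    y = E * zeta_mV * 1e-3 / (KB * T)
    pref = EPS_R * EPS0 * kap * KB * T / (j * E)
    a = a_nm * 1e-9
    return pref * (2.0 * np.sinh(j * y / 2.0) + (8.0 / (kap * a)) * np.tanh(j * y / 4.0))

sigma = surface_charge(zeta_mV=-20.0, a_nm=80.0, c_mM=0.03)
q = sigma * np.pi * (80e-9)**2 / E      # total charge: sigma * sphere area, in e
print(f"sigma = {sigma:.2e} C/m^2 -> q ~ {q:.0f} e")
```

Evaluated at −9 mV and −37 mV, this expression returns roughly −34 e and −141 e , reproducing the −35 e to −146 e range quoted above and thereby supporting the reading y = eζ / k B T .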
| Nano particles are a millionth of a millimeter in size, making them invisible to the human eye. Unless, that is, they are under the microscope of Prof. Madhavi Krishnan, a biophysicist at the University of Zurich. Prof. Krishnan has developed a new method that measures not only the size of the particles but also their electrostatic charge. Up until now it has not been possible to determine the charge of the particles directly. In order to observe the individual particles in a solution, Prof. Madhavi Krishnan and her co-workers "entice" each particle into an "electrostatic trap". It works like this: between two glass plates the size of a chip, the researchers create thousands of round energy holes. The trick is that these holes have just a weak electrostatic charge. The scientists then add a drop of the solution to the plates, whereupon each particle falls into an energy hole and remains trapped there. But the particles do not remain motionless in their trap. Instead, molecules in the solution collide with them continuously, causing the particles to move in a circular motion. "We measure these movements, and are then able to determine the charge of each individual particle," explains Prof. Madhavi Krishnan. Put simply, particles with just a small charge make large circular movements in their traps, while those with a high charge move in small circles. This phenomenon can be compared to that of a light-weight ball which, when thrown, travels further than a heavy one. The US physicist Robert A. Millikan used a similar method 100 years ago in his oil drop experiment to determine the velocity of electrically charged oil drops. In 1923, he received the Nobel Prize in physics in recognition of his achievements. "But he examined the drops in a vacuum," Prof. Krishnan explains. "We on the other hand are examining nano particles in a solution which itself influences the properties of the particles." Electrostatic charge of 'nano drug packages' For all solutions manufactured industrially, the electrical charge of the nano particles contained therein is also of primary interest, because it is the electrical charge that allows a fluid solution to remain stable and not to develop a lumpy consistency. "With our new method, we get a picture of the entire suspension along with all of the particles contained in it," emphasizes Prof. Madhavi Krishnan. A suspension is a fluid in which minuscule particles or drops are finely distributed, for example in milk, blood, various paints, cosmetics, vaccines and numerous pharmaceuticals. "The charge of the particles plays a major role in this," the Zurich-based scientist tells us. One example is the manufacture of medicines that have to be administered in precise doses over a longer period using drug-delivery systems. In this context, nano particles act as "packages" that transport the drugs to where they need to take effect. Very often, it is their electrical charge that allows them to pass through tissue and cell membranes in the body unobstructed and so to take effect. "That's why it is so important to be able to measure their charge. So far most of the results obtained have been imprecise," the researcher tells us. "The new method allows us to even measure in real-time a change in the charge of a single entity," adds Prof. Madhavi Krishnan. "This is particularly exciting for basic research and has never before been possible." This is because changes in charge play a role in all bodily reactions, whether in proteins, large molecules such as the DNA double helix, where genetic make-up is encoded, or cell organelles. "We're examining how material works in the field of millionths of a millimeter." | doi:10.1038/nnano.2012.99
Nano | Scientists observe directed energy transport between neighboring molecules in a nanomaterial | Antonietta De Sio et al, Intermolecular conical intersections in molecular aggregates, Nature Nanotechnology (2020). DOI: 10.1038/s41565-020-00791-2 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/s41565-020-00791-2 | https://phys.org/news/2020-11-scientists-energy-neighboring-molecules-nanomaterial.html | Abstract Conical intersections (CoIns) of multidimensional potential energy surfaces are ubiquitous in nature and control pathways and yields of many photo-initiated intramolecular processes. Such topologies can be potentially involved in the energy transport in aggregated molecules or polymers but are yet to be uncovered. Here, using ultrafast two-dimensional electronic spectroscopy (2DES), we reveal the existence of intermolecular CoIns in molecular aggregates relevant for photovoltaics. Ultrafast, sub-10-fs 2DES tracks the coherent motion of a vibrational wave packet on an optically bright state and its abrupt transition into a dark state via a CoIn after only 40 fs. Non-adiabatic dynamics simulations identify an intermolecular CoIn as the source of these unusual dynamics. Our results indicate that intermolecular CoIns may effectively steer energy pathways in functional nanostructures for optoelectronics. Main In a basic example of intramolecular conical intersections (CoIns) 1 , 2 , 3 , crossings between multidimensional potential energy surfaces (PESs) of an optically bright (S 2 ) and a dark (S 1 ) excited electronic state arise from the coupling of these states to at least two vibrational modes of the molecule (Fig. 1 ). In the simplest topology, coupling to a symmetric mode (Q 1 ) displaces the CoIn (star) from the Franck–Condon region (dot) of the molecule 4 . The two electronic states are vibronically coupled via an asymmetric vibrational mode (Q 2 ), resulting in an essentially conical shape of the PES near the CoIn and a complete breakdown of the Born–Oppenheimer approximation in this region 1 , 3 . From a dynamical perspective, the impulsive optical excitation of such a molecule launches a vibrational wave packet in the Franck–Condon region of the bright state S 2 and triggers its coherent motion towards the CoIn. Here, vibronic couplings induce an ultrafast non-adiabatic transition into the dark state S 1 . This transition is essentially barrierless and reflection-free 5 and may be accompanied by substantial wave packet spreading on surface crossing 1 , 2 . Hence, intramolecular CoIns provide channels for efficient, directional energy and charge flow within molecules that are actively governed by molecular vibrations. CoIns control the dynamics and yield of elementary reactions underlying many chemical and biological functions 6 , 7 , 8 . Important examples are the photoisomerization of retinal, initiating the primary event of vision 6 , 9 , 10 , 11 , 12 , the photochemistry of synthetic molecular switches 13 , 14 , 15 , 16 , 17 , intramolecular vibrational relaxation in carotenoids 18 , DNA protection against photodamage 19 , 20 , singlet exciton fission 21 and potentially even the primary steps in the magnetoreception of birds 22 , 23 . Fig. 1: Schematic illustration of wave packet motion through a conical intersection (CoIn) in the potential energy surface V (Q 1 , Q 2 , Q 3 ) of an A–D–A oligomer aggregate. 
a , b , Driven by vibronic coupling to the tuning modes (Q 1 , Q 3 ), a vibrational wave packet on the bright S 2 excited state of the aggregate potential, optically launched in the Franck–Condon region (dot), moves towards the CoIn (star), oscillating along Q 3 ( b ). Near the CoIn, strong vibronic coupling to the coupling mode (Q 2 ) induces an ultrafast and efficient non-adiabatic transition from S 2 to the optically dark excited state S 1 . c , Chemical structure of the oligomer aggregate (upper panel) showing an antiparallel arrangement, together with the measured linear absorption spectrum (lower panel) of the thin film. The yellow shaded area indicates the excitation energy range used in the experiments. Full size image In principle, CoIns may also be relevant for controlling energy or charge transport in extended condensed-phase assemblies, such as polymer or molecular aggregates. In such systems, vibronic couplings promote delocalization of the excitations across many molecular units and control their coherent transport on the nanoscale 24 , 25 . Intermolecular CoIns may thus affect this transport. While recent theoretical work has started to investigate the potential role of intermolecular CoIns for exciton transfer dynamics 26 , 27 , 28 , experimental observations have not been presented so far. Here we report distinct signatures of the passage of a wave packet through an intermolecular CoIn in the two-dimensional (2D) electronic spectroscopy (2DES) maps of an oligomer thin film relevant for organic photovoltaics. We find that the optically launched vibrational wave packet crosses the CoIn after just around 40 fs and that this passage is accompanied by an abrupt change in the 2DES maps. Atomistic non-adiabatic dynamics simulations confirm the intermolecular nature of the CoIn. Ultrafast 2D electronic spectroscopy of thin films We investigate thin films of a solution-processable acceptor–donor–acceptor-type (A–D–A) oligomer 29 consisting of two terminal dicyanovinyl groups as acceptor and a central dithienopyrrole-thiophene unit as donor (Fig. 1c , upper panel). These oligomers represent an important class of organic building blocks that have recently emerged as efficient photoactive materials in different optoelectronic applications such as organic photovoltaics, light emission, sensors and transistors 30 , 31 , 32 . Our A–D–A oligomer finds application as the main light absorber and p-type molecular semiconductor in efficient solution-processed organic solar cells reaching power conversion efficiencies of >8% (refs. 29 , 33 ). In the thin films, the A–D–A oligomers build highly ordered nanoscale aggregates with grain sizes of roughly 20–50 nm (ref. 33 ). The linear absorption spectrum of the thin film (Fig. 1c , lower panel) shows two broad absorption bands around 1.9 eV (shaded yellow) and 2.8 eV reflecting optically bright exciton transitions of the aggregate. On the broad, low-energy resonance at 1.9 eV, we note a very slight peak structure reflecting a vibrational progression of the S 0 –S 2 transition (Supplementary Fig. 6 ). For the 2DES 34 experiments, the thin films are excited by broadband 8-fs pulses resonant with the low-energy resonance at 1.9 eV. The spectra are recorded at room temperature in a partially collinear geometry, using a pair of phase-locked excitation pulses delayed from each other by the coherence time τ , and a probe beam arriving at waiting time T (Fig. 2a and Supplementary Fig. 1 ).
At selected T , differential transmission spectra are obtained as a function of τ and of the detection energy E D . This time-domain signal S ( τ , T , E D ) is then evaluated according to the procedure outlined in the Methods section to obtain the 2DES energy–energy maps (Fig. 2 ). Briefly, at each T , we perform a Fourier transform of S ( τ , T , E D ) along τ , which yields complex-valued 2D maps as a function of the excitation energy E X and E D . By taking the real part of these Fourier transform maps at each T , we obtain the absorptive 2DES energy–energy maps 34 , 35 A 2D ( E X , T , E D ) as a function of E X and E D . All experimental details are reported in the Methods section. Fig. 2: 2DES probes wave packet motion through an intermolecular conical intersection (CoIn) in a thin film of molecular aggregates. a , Scheme of the pulse-sequence interacting with the sample, indicating the coherence τ and waiting T times. The field E s (blue) re-emitted by the sample is proportional to the nonlinear polarization P (3) induced in the sample by the interaction with the pulse sequence. b – d , For T < 20 fs, the 2DES maps show a grid-like peak pattern revealing coherent vibrational wave packet motion on the S 2 state with a roughly 30-fs period (white dashed lines). e – g , For 20 fs < T < 40 fs, the peak spacing along the detection axis gradually reduces, pointing to anharmonicities of the S 2 state potential. h – j , At T ≅ 45 fs, the grid pattern disappears and the 2DES spectra become essentially featureless. This transition marks the passage through the CoIn. k – l , Waiting time dynamics of the period ( k ) and amplitude ( l ) of the dominant vibrational modes, extracted from a Fourier transform (FT) analysis of the 2DES maps along the detection axis. Initially ( T < 20 fs), only one 30-fs mode ( k , orange) dominates, before its oscillation period gradually increases. At T ≅ 45 fs, the amplitude of this mode drops completely ( l , orange) and new, distinctly different oscillatory modes with ~24 fs ( k , blue) and 35 fs ( k , green) periods appear. norm., normalized. Full size image At early waiting times, T < 20 fs, the 2DES spectra of the aggregate thin film show a well-defined, grid-like peak pattern with two dominant diagonal peaks at 1.87 eV and roughly 2.0 eV and a series of cross peaks (Fig. 2b–d ). Such grid-like patterns generally reflect the impulsive optical excitation of coherent vibrational wave packets with one dominant oscillation period on either the ground or excited-state PES 36 , 37 . As we will argue below, the peak spacings of roughly 132 meV along E D indicate an oscillation with a period of around 30 fs as the dominant Franck–Condon-active mode 36 , 38 , 39 on the bright excited S 2 state of the aggregate (Fig. 2b , dashed white lines). For larger waiting times, 20 fs < T < 45 fs, significant spectral modifications of the 2DES pattern occur. In particular, we observe a rapid reduction of the peak spacing along E D (Fig. 2e–g and Supplementary Fig. 4 ). After T ≅ 45 fs, the 2DES maps are fundamentally different, as a much broader diagonal peak around 1.94 eV is seen (Fig. 2h and Supplementary Fig. 4 ). While the peak spacings along E D have almost completely washed out, a weak structure remains along E X (Fig. 2i,j ). No further spectral modifications can be observed for longer T (Supplementary Fig. 4 ). The white dashed lines in Fig. 2b–j indicate the grid pattern seen at early times and underline the substantial spectral changes.
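In its simplest form, the map construction described above reduces to a discrete Fourier transform along the coherence-time axis followed by taking the real part. The Python sketch below illustrates this on a synthetic signal; the windowing, phasing and phase-locked pulse-pair detection of the actual experiment are deliberately omitted.

```python
import numpy as np

H_EV_FS = 4.1357   # Planck constant, eV fs

def absorptive_map(S, tau_fs):
    """A2D(E_X, E_D) at fixed waiting time T from S(tau, E_D): Fourier
    transform along the coherence time tau, real part retained."""
    dt = tau_fs[1] - tau_fs[0]
    spec = np.fft.rfft(S, axis=0)
    e_x = H_EV_FS * np.fft.rfftfreq(len(tau_fs), d=dt)   # excitation energy, eV
    return e_x, spec.real

# synthetic S(tau, E_D): a single transition at 1.87 eV with 30 fs dephasing
tau = np.arange(0.0, 200.0, 0.25)                        # fs
e_d = np.linspace(1.7, 2.2, 128)                         # eV, detection axis
S = (np.cos(2 * np.pi * (1.87 / H_EV_FS) * tau) * np.exp(-tau / 30.0))[:, None] \
    * np.exp(-((e_d - 1.87) / 0.05) ** 2)[None, :]
e_x, A2d = absorptive_map(S, tau)    # diagonal peak near E_X = E_D = 1.87 eV
```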
At all investigated waiting times, we observe positive peaks A 2D , suggesting that the dominant contributions to the 2DES signal arise from bleaching and stimulated emission transitions. The pump–pulse pair in our experiment (1–2 in Fig. 2a ) mainly induces optical transitions between the ground and bright S 2 excited state in the Franck–Condon region. For systems that are reasonably well described by displaced harmonic oscillator PESs, this results in 2DES maps with equidistantly spaced peaks along both the excitation E X and detection E D axes 36 , 39 , 40 . Such a grid-like peak pattern is indeed seen in our 2DES experiment (Fig. 2b–d ). Fourier transform along E X and E D thus provides the dominant period(s) of the Franck–Condon-active vibrations. Coupling of these vibrations to the electronic states displaces the electronic excited-state PES with respect to the ground-state equilibrium geometry, and these vibrations hence represent tuning modes. Our 2DES data thus indicate a fast tuning mode (Q 3 ) frequency of roughly 1,100 cm −1 (about 30 fs). In our experiment, we observe reductions of the peak spacing along E D for 20 fs < T < 45 fs (Fig. 2e–g ). Such surprising changes cannot be easily rationalized purely on the basis of harmonic oscillator PESs. Raman spectra of our thin film (Supplementary Fig. 15 ) show a series of well-resolved peaks, as is typical for a multimode, basically harmonic ground state PES. Wave packet motion on such a PES cannot account for the time-dependent peak shifts 36 , 40 . We therefore deduce that this variation of the peak spacing results mainly from the motion of the optically launched excited-state wave packet in our sample. Specifically, a transient change in the peak spacing, and thus in the instantaneous vibrational frequency, may arise from the motion of the excited-state wave packet into a PES region with different local curvature. Fourier transform of our 2DES data along E D thus reveals a marked, rapid increase of the instantaneous oscillation period (Fig. 2k , shaded), pointing to a significant anharmonicity of the excited S 2 PES experienced by the excited-state wave packet. During this waiting time window, we also observe a substantial drop of the 2DES peak amplitude A 2D (Supplementary Fig. 5 ), which reaches a quasi-stationary value after T ≅ 45 fs. Concurrently, the Fourier transform intensity at the period of the Q 3 mode also drops and vanishes (Fig. 2l , orange). Both these observations suggest an ultrafast depopulation of the excited S 2 bright state. For T > 45 fs, distinctly new oscillatory components, with periods of roughly 24 and 35 fs, and much weaker Fourier transform intensity, are seen (Fig. 2k–l , blue and green). They persist essentially unchanged for longer waiting times, at least up to around 200 fs. Since the ground state bleaching contribution to the 2DES map remains even after depopulation of the S 2 state, the new oscillatory features most probably reflect coherent ground state vibrational motion. Together, all these experimental signatures strongly indicate that we follow a surface crossing from an optically bright S 2 to a dark excited state S 1 that proceeds through a CoIn. The time evolution of the grid pattern in the 2DES maps suggests that it takes roughly 45 fs for the optically launched S 2 wave packet to reach and pass the CoIn.
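The unit conversions behind these assignments, and behind the half-period argument in the next paragraph, are collected here as a worked example using standard constants (h = 4.1357 eV fs; 1 eV = 8,065.5 cm −1 ):

```python
H_EV_FS = 4.1357        # Planck constant, eV fs
EV_TO_CM = 8065.5       # cm^-1 per eV
C_CM_PER_S = 2.9979e10  # speed of light, cm/s

spacing_eV = 0.132                       # peak spacing along E_D
print(H_EV_FS / spacing_eV)              # ~31 fs: period of the tuning mode Q3
print(spacing_eV * EV_TO_CM)             # ~1,065 cm^-1, i.e. roughly 1,100 cm^-1

half_period_fs = 45.0                    # time taken to reach the CoIn
period_fs = 2.0 * half_period_fs         # >= 90 fs for the tuning mode Q1
print(1e15 / period_fs / C_CM_PER_S)     # ~371 cm^-1 upper bound on nu(Q1)
```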
Hence, taking this timescale as one half of the vibrational period of the tuning mode Q 1 , we deduce a vibrational period of at least 90 fs, that is, an angular frequency \({\it{\upomega }}\left( {{\rm{Q}}_1} \right) \le 2{\it{\uppi }}/90\) fs −1 , for the reaction coordinate. The pronounced anharmonicity in S 2 wave packet motion and the abrupt changes in vibrational frequencies at T ≅ 45 fs provide additional evidence for the existence of a CoIn in the multidimensional PES of the aggregates. We have compared these dynamics with those of a dichloromethane solution of the A–D–A oligomer, where aggregation is completely suppressed. In pump-probe spectra, we observe broad and featureless resonances showing no significant changes on a sub-100-fs timescale (Supplementary Fig. 3 ). Since these measurements probe the intramolecular dynamics of the A–D–A backbone, we conclude that the peculiar, sub-50-fs dynamics are a distinct feature of the solid-state nanostructure of the aggregated thin film and that they are of an intermolecular nature. All these experimental results strongly indicate non-adiabatic dynamics induced by vibronic couplings in the vicinity of an intermolecular CoIn in the potential energy landscape of the aggregate oligomer thin film. Non-adiabatic dynamics simulations To validate these arguments, we investigate the photoinduced pathways by non-adiabatic excited-state molecular dynamics simulations 41 , 42 . In a single A–D–A oligomer, the low-energy intramolecular optical excitations mainly have contributions from the lowest excited state S 1 M of the monomeric oligomer backbone, which is optically bright (Supplementary Fig. 7 ). All other excited states lie higher in energy and we find no evidence for the existence of intramolecular CoIns in the energy range of interest. We therefore consider dimers of the A–D–A oligomer as a minimal model system for the aggregates forming in our thin films (Fig. 1c , upper panel). In contrast to a single oligomer, here we find that the lowest-lying excited state S 1 is optically dark (Supplementary Fig. 8b ). The second excited state S 2 is optically bright and gives the largest contribution to the main absorption resonance at roughly 2.5 eV (Fig. 3a , red line). Both S 1 and S 2 states describe delocalized intermolecular excitations of the dimer. They arise from symmetric (S 2 ) and antisymmetric (S 1 ) combinations of the first excited state (S 1 M ) intramolecular wavefunctions of the two oligomer units placed in an H-aggregate-like geometry. A weaker resonance at roughly 3 eV derives from higher-lying bright states that are not optically excited in our experiment. Fig. 3: Non-adiabatic excited-state molecular dynamics simulations of an A–D–A oligomer dimer. a , Absorption spectrum and (inset) lowest-lying electronic states. The solid (dashed) lines indicate bright (dark) states. b , Dynamics of the electronic energy gap Δ E 2,1 between S 1 and S 2 states for all trajectories (black) and an exemplary one (red, insets). A jump in Δ E 2,1 ( t − t hop = 0) is the signature of a non-adiabatic transition from S 2 to S 1 . c , Fourier transform of Δ E 2,1 for the exemplary trajectory showing the vibrational mode spectrum (left) before and (right) after the surface crossing. a.u., arbitrary units. d , Transition density dynamics on one oligomer unit for (solid) an exemplary trajectory and (dashed) the ensemble average.
Snapshots of the orbital plots at selected times show the initial spatial delocalization of the transition density in the S 2 state (red) and its transient localization on one of the oligomers after surface crossing (blue). Transition density oscillations between the two oligomer units persist until the system gets trapped on one of them within roughly 500 fs. e , Non-adiabatic coupling vector (NACV, d 2,1 ) between S 1 and S 2 comprising an antisymmetric vibration of the dimer and its projections onto the excited-state instantaneous normal modes for (blue) S 1 and (red) S 2 , respectively. Full size image In the dimer simulations, we approximate the dynamics of an optically excited wave packet as an ensemble average over semiclassical trajectories of the nuclear motion that are launched in the Franck–Condon region 43 . Along these trajectories, the relative energy difference between the S 2 and S 1 states, Δ E 2,1 , initially decays monotonically until, suddenly, it rapidly increases (Fig. 3b ). The steep increase in Δ E 2,1 indicates a sudden change in slope of the PES experienced by the wave packet. It is thus a direct signature of the unidirectional, non-adiabatic population transfer from S 2 to S 1 and strong evidence for an intermolecular CoIn connecting them. In our simulations, the nuclear configuration at which this hopping takes place depends on the specific trajectory and may lie in a finite region around the CoIn. After the crossing, large-amplitude oscillations of Δ E 2,1 initiate (Fig. 3b ). Their frequency spectrum (Fig. 3c , right panel) shows a main peak at roughly 1,500 cm −1 together with a distribution of other components. This broad spectrum is consistent with the washout of the 2DES pattern in our experiments. In contrast, the frequency spectrum of the weak oscillations before the crossing (Fig. 3b inset) displays fewer components with the dominant peak now being at roughly 1,700 cm −1 (Fig. 3c left panel). Hence, the periods of the optically excited tuning modes are markedly different before and after the crossing. We further examine the time evolution of the electronic transition density matrix, reflecting the spatial distribution of the optically excited wavefunction 44 , 45 . Orbital plots of the diagonal elements of the transition density matrix, representing the optically induced changes on each atom, show that the optical excitation is initially delocalized over both oligomer units (Fig. 3d ). As long as the system is in the S 2 state, rapid periodic oscillations (roughly 12 fs period) of very small transition density fractions between the oligomer units are seen (Fig. 3d , shaded red). The excitation remains, however, delocalized over the entire dimer. On transition to S 1 , both the amplitude and period of these transition density oscillations dramatically increase, indicating an abrupt change of the PES (Fig. 3d , blue shaded) and a dissipation of an excess of electronic energy into vibrational motion. Moreover, we find that this transition initiates a dynamic localization process of the electronic density on one of the oligomers, followed by pronounced oscillations of the transition density between the two units. This transient localization eventually transforms the coherent intermolecular excitation across the dimer into an intramolecular excitation localized on a single oligomer unit in roughly 500 fs. To probe the characteristic vibrations driving the electronic S 2 –S 1 transition, we calculate the non-adiabatic coupling vector (NACV, d 2,1 , Fig. 3e ). 
It indicates the instantaneous direction of the main driving forces along the PES and hence the direction of the population transfer during an electronic transition 42 . Its projection onto the normal modes at the time of the crossing thus reveals an asymmetric high-frequency (roughly 2,600 cm −1 ) vibration of the dimer as the dominant mode (Fig. 3e and Supplementary Fig. 10 , red). We deduce that this asymmetric vibration is directly involved in driving the non-adiabatic transition from S 2 to S 1 . Vibronic coupling to such an asymmetric mode is an essential ingredient of CoIns. Thus, we identify this asymmetric vibration as the dominant coupling mode Q 2 . In these low-temperature simulations, it takes roughly 400 fs for the population to transfer from S 2 to S 1 (Supplementary Fig. 9b ). At room temperature, this transfer time reduces to only around 40 fs (Supplementary Fig. 9a ), in excellent agreement with our experiments. This points to the role of thermally induced vibrational fluctuations in bringing the wave packet to the CoIn seam. Conclusions Taken together, our experimental and theoretical results provide strong evidence that the passage of a coherent vibrational wave packet through an intermolecular conical intersection governs the ultrafast, sub-100-fs energy transfer dynamics in functional molecular aggregates. The initial impulsive optical excitation populates the lowest optically bright state, spatially delocalized across the aggregate. Coupling to both low- (Q 1 ) and high-frequency (Q 3 ) modes drives this coherent intermolecular excitation towards the CoIn. Transition from the bright to the dark state through an asymmetric high-frequency coupling mode (Q 2 ) triggers dissipation of electronic energy into vibrational energy and localization within the aggregate. The timescale for initiating this trapping process is governed by the time it takes for the optically launched wave packet to reach and pass the CoIn. Hence, it strongly depends on the details of the vibronic couplings determining the potential energy landscape in the aggregate. As such, intermolecular CoIns control the transition from a coherently moving vibronic wave packet, spatially delocalized across several oligomer units, towards a localized, trapped exciton, whose transport proceeds by classical diffusive hopping. Controlling these initial coherent dynamics thus provides new opportunities for steering the flow of energy and charges and their pathways on the nanoscale in functional assemblies. This requires strategies for precisely shaping vibronic couplings in molecular aggregates and solid-state nanostructures in general. These strategies may range from synthetic chemistry methods, for example, selectively exchanging molecular groups in parts of the system 46 , to altering the PES, either by coupling molecular excitations to confined electric fields in, for example, cavity structures 47 , 48 , 49 , or by controlled structural variations of the molecular arrangement. To this end, donor–acceptor-type oligomer aggregates may represent a versatile platform on which to explore intermolecular CoIns in guiding the ultrafast energy redistribution in technologically relevant systems. Precise control of intermolecular CoIns in such aggregates may enable new routes for the manipulation of nanoscale coherent energy transport in functional nanostructures and may lead to new approaches to the design of optoelectronic devices.
Methods Sample preparation The A–D–A oligomer, comprising dithieno(3,2- b :2′,3′- d )pyrrole-thiophene as the central donor and two dicyanovinyl units as terminal acceptors, is synthesized and processed according to ref. 29 . To prepare the thin films, the oligomer is dissolved in o -xylene at a concentration of 30 mg ml −1 and stirred at 80 °C for >1 h. The thin-film fabrication is conducted by doctor-blading from the hot solutions on 170-µm-thick glass substrates, followed by a 30 s solvent–vapour–annealing post-deposition treatment with tetrahydrofuran 29 . This results in thin-film layers with a thickness of roughly 100 nm, in which the A–D–A oligomer forms small ordered aggregates with typical grain sizes of 20–50 nm (refs. 29 , 33 ). The linear absorption spectrum of the thin-film samples is recorded at room temperature with a Shimadzu SolidSpec-3700 spectrophotometer and is shown in Fig. 1c . Experimental pump-probe and 2D electronic spectroscopy setup The differential transmission spectra Δ T / T and 2DES maps are recorded using a home-built femtosecond spectrometer in a partially collinear pump-probe configuration (Supplementary Fig. 1a ). A home-built non-collinear optical parametric amplifier, pumped by the second harmonic of a regeneratively amplified Ti:Sapphire laser (Spectra Physics Spitfire Pro) delivering 150-fs pulses centred at 800 nm with a repetition rate of 5 kHz, is used to generate ultrabroadband pulses with a spectrum ranging from roughly 1.7 to 2.4 eV (Supplementary Fig. 1b ), matching the low-energy absorption band of the aggregate thin film (Fig. 1c ). Chirped mirrors (Laser Quantum DCM9, not shown in Supplementary Fig. 1 ) are used to compress the optical pulses. Second harmonic generation frequency-resolved optical gating (Supplementary Fig. 1c ) is used to characterize the optical pulses, resulting in a pulse duration of about 8 fs. A broadband beam splitter is used to separate the beam into pump and probe arms. For 2DES, the collinear pump–pulse pair is generated by a common path delay line (TWINS) 50 on the basis of birefringent wedges. To compensate for the dispersion introduced by the wedges, an additional pair of chirped mirrors (Laser Quantum DCM10) is placed in the pump arm of the setup before the TWINS. The time delay between the two phase-locked pump pulses (coherence time) τ is scanned with a motorized translation stage (Physik Instrumente M122.2DD). To calibrate τ , a small fraction of the pump beam is sent to a photodiode that records the time-domain interference signal of the two pump pulses during the measurement 35 . For the two-pulse differential transmission measurements, τ is set to zero. The waiting time T , defined as the time delay between the pump–pulse pair and the probe beam, is controlled by another motorized translation stage (Physik Instrumente M-111.1DG). Pump and probe beams are focused onto the sample to a spot size of roughly 70 µm with a spherical mirror. The relative polarization between the linearly polarized pump and probe pulses is set to roughly 55°. After the sample, the transmitted probe beam is dispersed in a monochromator and recorded with a 1,024-pixel CCD array (Entwicklungsbüro Stresing), whereas the transmitted pump beam is blocked.
A mechanical chopper in the pump arm modulates the presence of the pump pulses at a frequency of 500 Hz, such that experimentally normalized differential transmission spectra \(\frac{{{\Delta} T\left( {\tau ,T,E_{\mathrm{D}}} \right)}}{{T\left( {\tau ,T,E_{\mathrm{D}}} \right)}} = \frac{{T_{\mathrm{{on}}}\left( {\tau ,T,E_{\mathrm{D}}} \right) - T_{\mathrm{{off}}}\left( {\tau ,T,E_{\mathrm{D}}} \right)}}{{T_{{\mathrm{off}}}\left( {\tau ,T,E_{\mathrm{D}}} \right)}}\) are recorded as a function of the time delays τ and T , and of the detection energy E D . Here T on ( T off ) denotes the transmitted probe beam intensity spectrum after the sample, recorded when the pump is switched on (off). Absorptive 2DES maps A 2D ( E X , T , E D ) are obtained by taking the real part of the Fourier transform of the measured signal \(\frac{{{\Delta} T\left( {\tau ,T,E_{\mathrm{D}}} \right)}}{{T\left( {\tau ,T,E_{\mathrm{D}}} \right)}}\) along the coherence time τ , \({\it{A}}_{\mathrm{{2D}}}\left( {E_{\mathrm{X}},T,E_{\mathrm{D}}} \right) = {\mathrm{Re}}\left\{ {\mathop {\int}\limits_0^\infty {\frac{{{\Delta} T\left( {\tau ,{\it{T}},E_{\mathrm{D}}} \right)}}{{T\left( {\tau ,T,E_{\mathrm{D}}} \right)}}{\mathrm{e}}^{ - i\frac{{E_{\mathrm{X}}}}{\hbar }\tau }{\mathrm{d}}\tau } } \right\}\) , to obtain the excitation energy E X axis. Before this, for each T and E D , all the measured signals are multiplied by a Gaussian filter \(G\left( \tau \right) = {\mathrm{e}}^{ - 4\ln 2\left( {\frac{\tau }{{\tau _{\mathrm{F}}}}} \right)^2}\) , where τ F = 180 fs is the full-width at half maximum, to minimize the effect of truncation on the Fourier transform 35 . Non-adiabatic excited-state dynamics calculations Simulations of non-adiabatic excited-state dynamics are performed on an antifacial oligomer dimer (Fig. 1c and Supplementary Fig. 8 ). The side chains are removed to reduce the computational costs. We have used density functional theory (DFT) to generate initial structures of a monomer (oligomer) and the respective dimer. Both systems are optimized using the quantum chemical program package Orca v.4.0.1 (ref. 51 ). The CAM-B3LYP (Coulomb attenuating method with Becke, three-parameter, Lee–Yang–Parr) functional 52 in combination with a single-zeta quality all-electron basis set is used, and dispersion interactions are taken into account via Grimme’s empirical correction v.D3 (ref. 53 ) with Becke–Johnson damping 54 . These dispersion corrections are particularly important for simulating realistic interacting dimer configurations. We further use a time-dependent DFT approach with the same functional/basis set to calculate the reference absorption spectra of the molecules. These spectra are compared to experiment and serve as a benchmark for the excited-state structure obtained at the simpler configuration interaction singles (CIS) level combined with the Austin model 1 (AM1) Hamiltonian, which is used for the excited-state dynamics simulations (the method is described below). Overall, the time-dependent DFT and AM1/CIS approaches produce similar absorption spectra for the monomer and dimer (Fig. 3a and Supplementary Figs. 7e and 8a ), consistent with experiment (Fig. 1c and Supplementary Fig. 7d ), as judged by a pronounced band-gap absorption peak and the overall line shape. Notably, in all cases the theoretical spectra are blue-shifted compared to experiment. For example, the absorption spectrum of the dimer is blue-shifted by about 0.5 eV at the AM1/CIS level compared to experiment. This is attributed to several reasons.
First, our simulations use a dimer, as a minimal model, instead of the larger aggregates present in the experiments; moreover, dielectric medium effects, which are absent in the simulations, cause red shifts in the spectra. Second, the semiempirical level of theory is not exact and typically has an accuracy of 0.2–0.4 eV when describing absorption spectra of conjugated molecules, as exemplified in a recent review 55 . These comparisons allow us to identify the essential excited states participating in the dynamics and to prepare an initial photoexcitation consistent with spectroscopic probes for our dynamical simulations. The semiempirical non-adiabatic excited-state molecular dynamics (NEXMD) package 55 is further used for all dynamical simulations of excited states. NEXMD relies on an improved version of Tully’s fewest-switches surface-hopping algorithm 56 for modelling non-adiabatic dynamics, as described in detail in a recent review 55 , which exemplifies many previous successful NEXMD applications. In the fewest-switches surface-hopping algorithm, the probability of a hop is chosen in a Monte Carlo-like fashion, with the hopping probability proportional to the square of the non-adiabatic coupling vector, $${\mathbf{d}}_{\alpha \beta } = \left\langle {\phi _\alpha \left( {{\mathbf{r}};{\mathbf{R}}\left( t \right)} \right)|\nabla _{\mathbf{R}}\phi _\beta \left( {{\mathbf{r}};{\mathbf{R}}\left( t \right)} \right)} \right\rangle ,$$ (1) where r and R ( t ) are the electronic and nuclear vector coordinates, respectively, ∇ R stands for differentiation with respect to the nuclear coordinates, and \(\phi _\alpha \left( {{\mathbf{r}};{\mathbf{R}}\left( t \right)} \right)\left( {\phi _\beta \left( {{\mathbf{r}};{\mathbf{R}}\left( t \right)} \right)} \right)\) is the CIS adiabatic wavefunction of the α th ( β th) electronic state. The direction of d αβ corresponds to the direction of the main force on the nuclei during strong non-adiabatic interactions between the α th and β th electronic states 55 . All ingredients for non-adiabatic dynamics, such as excited-state energies, gradients, transition density matrices and non-adiabatic coupling vectors, are calculated ‘on the fly’ at the AM1/CIS level as implemented in NEXMD 57 . Constrained 1-ns ground-state trajectories at room (300 K) and low (10 K) temperature are performed to collect 500 snapshots of initial configurations for the excited-state dynamics. A Langevin thermostat with a friction coefficient of γ = 20.0 ps −1 is used. During the constrained ground-state simulations, six atomic coordinates are held fixed (Supplementary Fig. 8 ) to compensate for the lack of dispersion corrections in the semiempirical approach. The optical spectrum is generated from 500 single-point calculations by averaging over the spectra from all geometries. The total optical absorbance A for all excitation energies Ω is broadened by a Gaussian line shape and weighted by the oscillator strength f of each excited state | α 〉 included, $${\it{A}}\left( {\Omega} \right) = \frac{1}{{N_i}}\mathop {\sum}\limits_i^{N_i} {\mathop {\sum}\limits_\alpha ^{N_\alpha } {f_\alpha ^i} } \left( {{\Omega} _\alpha } \right) \times \frac{1}{{\sqrt {2\pi \sigma ^2} }}\exp \left[ { - \frac{{\left( {{\Omega} _\alpha - {\Omega} } \right)^2}}{{2\sigma ^2}}} \right].$$ (2) The index i runs over all N i geometries, whereas the index α runs over all N α excited-state energies.
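For concreteness, equation (2) can be evaluated with a few lines of NumPy. This is a minimal sketch of ours (not part of NEXMD), assuming arrays of excitation energies and oscillator strengths collected over the sampled geometries:

```python
import numpy as np

def ensemble_spectrum(omega, Omega, f, sigma=0.15):
    """Evaluate the ensemble-averaged absorbance of equation (2).

    omega : (n_w,) energy grid in eV
    Omega : (N_i, N_alpha) excitation energies for each sampled geometry
    f     : (N_i, N_alpha) corresponding oscillator strengths
    sigma : Gaussian broadening in eV
    """
    N_i = Omega.shape[0]
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma ** 2)
    # Broadcast each (geometry, state) line over the energy grid
    lines = f[..., None] * np.exp(-(omega - Omega[..., None]) ** 2
                                  / (2.0 * sigma ** 2))
    return norm * lines.sum(axis=(0, 1)) / N_i
```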
A simulated laser pulse is used to populate the initial excited state for each configuration according to the Franck–Condon window, given by $$g_\alpha \left( {{\mathbf{r}},{\mathbf{R}}} \right) = \left( {\frac{{f_\alpha }}{{{\Omega} _\alpha ^2}}} \right)\exp \left[ { - \sigma ^2\left( {{\it{E}}_{{\mathrm{laser}}} - {\Omega} _\alpha } \right)^2} \right].$$ (3) Here, E laser represents the central energy of the laser pulse and is taken as 2.44 eV (2.46 eV) for the calculations at 300 K (10 K). The excitation energy width is given by the transform-limited relation of a Gaussian laser pulse f ( t ) = exp(− t 2 /(2 σ 2 )) with a full-width at half maximum of 12 fs, giving a standard deviation of σ = 0.15 eV ( σ = 0.25 eV) for 300 K (10 K). A classical time step of 0.1 fs and a quantum time step of 0.025 fs have been used for the propagation of the nuclear and electronic degrees of freedom, respectively. Electronic decoherence is taken into account using an instantaneous decoherence procedure 58 in which the coefficients are reinitialized after successful and attempted hops. The evolution of the entire ensemble of trajectories defines the dynamics of the photoexcited wave packet and is thus relevant to the experimental relaxation timescales (Fig. 3b ). In addition, characteristic trajectory examples are chosen to analyse representative dynamics of the wavefunctions and vibrational degrees of freedom. Here, the spatial extent of the wavefunctions is examined using transition density matrices 44 , $$\left( {\rho ^{0\alpha }} \right)_{nm} = \left\langle {\phi _\alpha \left( {{\mathbf{r}};{\mathbf{R}}\left( t \right)} \right)\left| {c_n^\dagger c_m} \right|\phi _0\left( {{\mathbf{r}};{\mathbf{R}}\left( t \right)} \right)} \right\rangle ,$$ (4) where \(c_n^\dagger \left( {c_m} \right)\) are creation (annihilation) operators, n and m denote atomic orbital basis functions and ϕ 0 ( t ) and ϕ α ( t ) are the ground and excited-state wavefunctions. Therefore, the diagonal elements \(\left( {\rho ^{0\alpha }} \right)_{nn}\) represent the net change in the electronic density induced on an atomic orbital for a ground- to excited-state electronic transition (Fig. 3d and Supplementary Figs. 7b and 8b ). The normalization condition \(\mathop {\sum}\limits_{nm} {\left( {\rho ^{0\alpha }} \right)_{nm}^2} = 1\) holds for the CIS approximation 59 . In this scheme, the fraction of the transition density localized on either the top or bottom oligomer unit of the dimer can be calculated by adding the contributions from each atom i in the X ( X = top, bottom) oligomer unit as follows, $$\left( {\rho ^{0\alpha }} \right)_X^2 = \mathop {\sum}\limits_{n_im_i,i \in X} {\left( {\rho ^{0\alpha }} \right)_{n_im_i}^2} .$$ (5) At a given time, the intramolecular vibrations are described as a set of well-defined and independent harmonic oscillators 60 . The gradients are calculated analytically, whereas the second derivatives are obtained numerically. When the Hessian matrix is diagonalized, a set of excited-state instantaneous normal mode \({\mathrm{ES - INM(S}}_{\upalpha}{\mathrm{)}}\) vectors \(\left\{ {{\mathrm{Q}}_\alpha ^i} \right\}_{i = 1,3N}\) can be expressed as a linear combination of Cartesian displacements ∂ R , $${\mathrm{Q}}_\alpha ^i\left( t \right) = \mathop {\sum}\limits_{j = 1}^{3N} {l_\alpha ^{ij}} \partial {\mathbf{R}}_t^j,$$ (6) where \(l_\alpha ^{ij}\) are the coefficients of the corresponding eigenvector matrix L α .
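Equation (5) amounts to summing the squared transition-density elements over the atomic-orbital sub-block of one oligomer unit. A minimal sketch of ours follows (illustrative names, not NEXMD code); note that, because cross-unit elements are excluded, the top and bottom fractions sum to at most one:

```python
import numpy as np

def unit_fraction(rho, mask):
    """Equation (5): fraction of transition density on one oligomer unit.

    rho  : (n_AO, n_AO) transition density matrix (rho^{0 alpha})_{nm}
    mask : boolean (n_AO,) array, True for AOs belonging to the chosen unit
    """
    block = rho[np.ix_(mask, mask)]   # keep both indices n, m within the unit
    return np.sum(block ** 2)         # <= 1 by the CIS normalization
```

Applying unit_fraction to each frame of a trajectory yields time traces of the kind shown in Fig. 3d.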
Projection of the non-adiabatic coupling vector d 2,1 on the basis of ES-INM(S 2 ) and ES-INM(S 1 ) at the moment of transition t hop is given by $${\mathbf{d}}_{2,1}\left( {t_{{\mathrm{hop}}}} \right) = \mathop {\sum}\limits_j^{3N - 6} {a_2^j\left( {t_{{\mathrm{hop}}}} \right)} {\mathrm{Q}}_2^j\left( {t_{{\mathrm{hop}}}} \right) = \mathop {\sum}\limits_j^{3N - 6} {b_1^j\left( {t_{{\mathrm{hop}}}} \right)} {\mathrm{Q}}_1^j\left( {t_{{\mathrm{hop}}}} \right),$$ (7) where the coefficients \(a_2^j\) and \(b_1^j\) reflect the participation of the vibrational modes of the second and first excited states, respectively (a minimal numerical sketch of this projection is given below, after the availability statements). Data availability The data that support the findings of this study, including large data sets stored in the data repositories of different institutions in different countries, are available from the authors upon reasonable request. Code availability The NEXMD code is available at . This program is open source under the BSD-3 Licence.
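As referenced above, a minimal numerical sketch of the projection in equation (7): given the NACV in Cartesian coordinates and the matrix of instantaneous normal-mode vectors of one state, the expansion coefficients follow from a least-squares solve. This is our own illustration, not part of NEXMD:

```python
import numpy as np

def project_nacv(d, Q):
    """Expand the NACV in instantaneous normal modes, as in equation (7).

    d : (3N,) non-adiabatic coupling vector at t_hop (Cartesian)
    Q : (3N, 3N-6) matrix whose columns are ES-INM eigenvectors of one state
    Returns the expansion coefficients (a_j or b_j, depending on whether the
    S2 or S1 modes are supplied).
    """
    # Least-squares solve of d = Q @ coeffs; exact if d lies in the mode span
    coeffs, *_ = np.linalg.lstsq(Q, d, rcond=None)
    return coeffs
```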
"What makes our results special is that we have experimentally demonstrated conical intersections between neighboring molecules for the first time," explains De Sio. Until now, physicists worldwide had only observed the quantum mechanical phenomenon within a single molecule and only speculated that there might also be conical intersections between molecules lying next to each other. Theoretical calculations support experimental data De Sio's Team has discovered this one-way street for electrons by using methods of ultrafast laser spectroscopy: The scientists irradiate the material with laser pulses of only a few femtoseconds in duration. One femtosecond is a millionth of a billionth of a second. The method enables the researchers to record a kind of film of the processes that take place immediately after the light reaches the material. The group was able to observe how electrons and atomic nuclei moved through the conical intersection. The researchers found that a particularly strong coupling between the electrons and specific nuclear vibrations helps to transfer energy from one molecule to another as if on a one-way street. This is exactly what happens in the conical intersections. "In the material we studied, it took only about 40 femtoseconds between the very first optical excitation and the passage through the conical intersection," says De Sio. In order to confirm their experimental observations, the researchers from Oldenburg and Bremen also collaborated with theoretical physicists from the Los Alamos National Laboratory, New Mexico, U.S., and CNR-Nano, Modena, Italy. "With their calculations, they have clearly shown that we have interpreted our experimental data correctly," explains De Sio. The Oldenburg researchers are not yet able to estimate in detail the exact effect of these quantum mechanical one-way streets on future applications of molecular nanostructures. However, in the long term the new findings could help to design novel nanomaterials for organic solar cells or optoelectronic devices with improved efficiencies, or to develop artificial eyes from nanostructures. | 10.1038/s41565-020-00791-2 |
Biology | 3-D maps of gene activity | Gene expression cartography, Nature (2019). DOI: 10.1038/s41586-019-1773-3 , nature.com/articles/s41586-019-1773-3 Journal information: Nature , Science | http://dx.doi.org/10.1038/s41586-019-1773-3 | https://phys.org/news/2019-11-d-gene.html | Abstract Multiplexed RNA sequencing in individual cells is transforming basic and clinical life sciences 1 , 2 , 3 , 4 . Often, however, tissues must first be dissociated, and crucial information about spatial relationships and communication between cells is thus lost. Existing approaches to reconstruct tissues assign spatial positions to each cell, independently of other cells, by using spatial patterns of expression of marker genes 5 , 6 —which often do not exist. Here we reconstruct spatial positions with little or no prior knowledge, by searching for spatial arrangements of sequenced cells in which nearby cells have transcriptional profiles that are often (but not always) more similar than cells that are farther apart. We formulate this task as a generalized optimal-transport problem for probabilistic embedding and derive an efficient iterative algorithm to solve it. We reconstruct the spatial expression of genes in mammalian liver and intestinal epithelium, fly and zebrafish embryos, sections from the mammalian cerebellum and whole kidney, and use the reconstructed tissues to identify genes that are spatially informative. Thus, we identify an organization principle for the spatial expression of genes in animal tissues, which can be exploited to infer meaningful probabilities of spatial position for individual cells. Our framework (‘novoSpaRc’) can incorporate prior spatial information and is compatible with any single-cell technology. Additional principles that underlie the cartography of gene expression can be tested using our approach. Main Single-cell RNA sequencing (scRNA-seq) has revolutionized our understanding of the rich heterogeneous cellular populations that make up tissues, the dynamics of developmental processes and the underlying regulatory mechanisms that control cellular function 1 , 2 , 3 , 4 . However, to understand how single cells orchestrate multicellular functions, it is crucial to have access not only to the identities of single cells but also to their spatial context. This is a challenging task, as tissues must commonly be dissociated into single cells before scRNA-seq can be performed, and thus the original spatial context and relationships between cells are lost. Two seminal papers tackled this problem computationally 5 , 6 —the key idea being to use a reference atlas of informative marker genes as a guide to assign spatial coordinates to sequenced cells. This concept was successfully used in various tissues 7 , 8 , 9 , 10 , 11 , including the early Drosophila embryo 12 . However, such methodologies rely heavily on the existence of an extensive reference database for spatial expression patterns, which may not always be available or straightforward to construct. Moreover, in practice the number of available reference marker genes is usually not large enough to label each spatial position with a distinct combination of reference genes, making it impossible to uniquely resolve cellular positions. More generally, marker genes, even when available, convey limited information, which could possibly be enriched by the structure of single-cell data. 
To this aim, we developed a new computational framework (novoSpaRc), which allows for de novo spatial reconstruction of single-cell gene expression, with no inherent reliance on any prior information, and the flexibility to introduce it when it does exist (Fig. 1 ). Similar to solving a puzzle, we seek the optimal configuration of pieces (cells) that recreates the original image (tissue). However, contrary to a typical puzzle, here we do not have access to the image that we aim to reconstruct. Although the number of ways to spatially arrange (or ‘map’) sequenced cells in tissue space is enormous, our hypothesis is that gene expression in the vast majority of these arrangements will not be as organized as in the real tissue. For example, we know that typically there exist genes that are specifically expressed in spatially contiguous territories and are thus consistent with only a small subset of all possible arrangements. We therefore set out to identify simple, testable assumptions that govern how gene expression is organized in space, and to subsequently find the arrangements of cells that best respect those assumptions. Fig. 1: Overview of novoSpaRc. A matrix that contains single-cell transcriptome profiles, sequenced from dissociated cells, is the main input for novoSpaRc. The output is a virtual tissue of a chosen shape, which can be queried for the expression of all genes quantified in the data. Full size image novoSpaRc charts gene expression in tissues Here, we specifically explore the assumption that cells that are physically close tend to share similar transcription profiles, and vice versa (Extended Data Fig. 1a , Supplementary Methods ). Biologically, this phenotype can result from multiple mechanisms, such as gradients of oxygen, morphogens and nutrients, the trajectory of cell development and communication between neighbouring cells. We stress that this is an assumption about overall gene expression across the entire tissue—not about individual genes and not about all cells that are physically close ( Supplementary Methods ). We show that, on average, the distance between cells in expression space increases with their physical distance, for diverse tissues in mature organisms or whole embryos in early development. Thus, to predict the spatial locations of sequenced cells, we seek to find a map of sequenced cells to tissue space (‘cartography’) such that overall structural correspondence is preserved—meaning that, overall, cells have similar relative distances to other cells in expression and physical space. The physical space is anchored by locations that may be either known (such as the reproducible cellular locations in the early stages of development of the Drosophila embryo 13 ) or approximated by a grid ( Supplementary Methods ). The distances are first computed for each pair of cells across graphs constructed over the two spaces, to account for the underlying structure of the data ( Supplementary Methods ). Then, novoSpaRc optimally aligns the distances of pairs of cells between the expression data and geometric features of the physical space, in a way that is consistent with spatial expression profiles of marker genes when these are available ( Methods , Supplementary Methods ). For reasons that are both biologically and computationally motivated, we seek a probabilistic mapping that assigns each cell a distribution over locations on the physical space ( Supplementary Methods ). 
We formulate this as a generalized optimal-transport problem 14 , 15 , 16 , which has been proven to be increasingly valuable for diverse fields (including biology 17 , 18 ) and renders the task of reconstruction feasible for large datasets. Specifically, we formulate an interpolation between entropically regularized Gromov–Wasserstein 19 , 20 and optimal-transport 21 objectives, which serves to satisfy the assumption of structural correspondence between gene expression space and physical space, and to match prior knowledge when available ( Methods ). We show that this optimization problem can be efficiently solved using projected gradient descent reduced to iterations of linear optimal-transport sub-problems ( Supplementary Methods ). To systematically assess the performance of novoSpaRc, we used a simple generative model of spatial gene expression to show that it can robustly recover it ( Supplementary Methods , Extended Data Fig. 1b–d ). novoSpaRc reconstructs tissues de novo Focusing on real single-cell datasets, we first reconstructed tissues de novo that have inherent symmetries that render them effectively one-dimensional, such as the mammalian intestinal epithelium 10 and liver lobules 7 . Schematic figures of the reconstruction process are shown in Fig. 2a, e . Cells were previously classified into seven distinct zones for the intestine, or nine layers for the liver, on the basis of robust marker gene information 7 , 10 . We found that the average pairwise distances between cells in expression space increased monotonically with the pairwise distances in physical one-dimensional space (Fig. 2b, f ), consistent with our structural correspondence assumption. Fig. 2: novoSpaRc successfully reconstructs complex tissues with effective one-dimensional structure de novo. a , e , The reconstruction scheme for the mammalian intestinal epithelium ( a ) and liver lobules ( e ). b , f , Demonstration of the monotonic relationship between cellular pairwise distances in expression and physical space for intestinal epithelium ( b ) and liver lobules ( f ). Distances are measured as weighted shortest paths along the graphs constructed over physical or expression spaces. Data are mean ± s.d. c , g , novoSpaRc infers the original spatial context of single cells of the intestinal epithelium ( c ) and liver lobules ( g ) with high accuracy. Heat maps show the inferred distribution over embedded layers (rows) for the cells in each of the original layers (columns). d, h, novoSpaRc captures the spatial division of labour of averaged expression of genes that have a role in the absorption of different classes of nutrients in the intestine (d) and the spatial expression patterns of a group of pericentral, periportal and non-monotonic genes in the liver lobule (h). The expression level of each gene in d and h is normalized to its maximum value. Full size image We used novoSpaRc to embed the expression data into one dimension. The embedded coordinates of single cells correlated well on average with their layer or zone memberships (Fig. 2c, g , Supplementary Methods ). The median Pearson correlation coefficient for reconstructed expression patterns to original patterns for the top 100 variable genes was 0.99 for intestine and 0.94 for liver ( Supplementary Methods ), and the fraction of cells that were correctly assigned up to one layer away from their original layer was 0.98 for intestine and 0.73 for liver ( Supplementary Methods , Extended Data Fig. 2a, b ). 
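To make the structural-correspondence step concrete, the sketch below distributes cells over a one-dimensional grid of layers, as for the intestine and liver, using the entropically regularized Gromov–Wasserstein solver of the POT library. This is a simplified illustration of ours: the published novoSpaRc objective additionally interpolates with a marker-gene optimal-transport term, and the function and parameter choices here are assumptions rather than the authors' exact settings.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def embed_1d(D_exp, n_layers=9, epsilon=5e-3):
    """Probabilistic 1D embedding of cells from expression distances alone.

    D_exp : (N, N) graph-based pairwise distances of cells in expression space
    Returns T, an (N, n_layers) coupling; row i is the distribution of
    cell i over the embedded layers.
    """
    grid = np.linspace(0.0, 1.0, n_layers)
    D_phys = np.abs(grid[:, None] - grid[None, :])   # 1D layer distances
    p = ot.unif(D_exp.shape[0])                      # uniform cell marginal
    q = ot.unif(n_layers)                            # uniform layer marginal
    return ot.gromov.entropic_gromov_wasserstein(
        D_exp / D_exp.max(), D_phys / D_phys.max(), p, q,
        loss_fun='square_loss', epsilon=epsilon)
```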
novoSpaRc captured spatial expression patterns of the top zonated genes and spatial division of labour within the intestinal epithelium—as well as within the layers of the liver lobules ( Methods , Fig. 2d, h , Extended Data Fig. 3a, b ), in which cells in different tissue layers perform different tasks and exhibit different expression profiles. For the intestine, varying the grid resolution to include either fewer or more embedded zones did not compromise the quality of the reconstructed expression patterns (Extended Data Fig. 3c ), which shows the potential for increased resolution of single-cell-based relative to atlas-based embedding. novoSpaRc reconstructs early embryos Next, we focused on spatially reconstructing the well-studied Drosophila embryo, as a more-challenging, higher-dimensional tissue. Late in stage 5 of development, the fly embryo consists of around 6,000 cells. It has been previously suggested 22 that at early stages of fly development, the expression levels of gap genes can be optimally decoded into positional information. The expression levels of 84 transcription factors were quantitatively registered using fluorescence in situ hybridization (FISH) for each of the cells by the Berkeley Drosophila Transcription Network Project (BDTNP) 13 . To assess the performance of novoSpaRc, we first simulated scRNA-seq data by in-silico dissociating the BDTNP dataset into single cells ( Methods ), and then attempted to reconstruct the original expression patterns across the tissue both de novo and by using marker genes (Fig. 3a ). Similarly to the ‘one-dimensional’ datasets, we found a monotonically increasing relationship between the cell–cell pairwise distances in expression space and in physical space (Fig. 3b ), confirming that the data adheres to our structural correspondence assumption. Fig. 3: novoSpaRc accurately reconstructs the Drosophila embryo on the basis of the BDTNP dataset 13 . a , FISH data are used to create virtual scRNA-seq data, which novoSpaRc inputs to reconstruct a virtual embryo. b , Demonstration of the structural correspondence hypothesis. Pairwise cellular distances in expression space increase monotonically with distances in physical space. Data are mean ± s.d. c , novoSpaRc spatially reconstructs the Drosophila embryo with only one marker gene. The quality of reconstruction (measured by Pearson correlation with FISH data) increases with the number of marker genes and saturates at perfect reconstruction at two marker genes, when using both structural information and marker gene information (blue boxes). This outperforms reconstruction that relies only on marker gene information (yellow boxes). The results are averaged for 100 different combinations of marker genes. For the box plots, the centre line is the median, box limits are the 0.25 and 0.75 quantiles and whiskers extend to ±2.698 s.d. d , Visualization of the reconstruction results for four transcription factors. The original FISH data (first row) are compared to reconstruction by novoSpaRc that exploits both structural and marker gene information (using two marker genes and one marker gene) and reconstruction without any marker gene information (de novo). e , The original locations of three cells are compared to their respective reconstructed locations by novoSpaRc (using two marker genes and one marker gene). The expression patterns of the marker genes used for the results in d and e are shown in Extended Data Fig. 5c . 
Full size image The reconstructed patterns of spatial gene expression highly correlated with the original ones (Fig. 3c ). We found that the novoSpaRc reconstruction that incorporated both structural and marker gene information outperformed the reconstruction based on only the latter, and that performance was saturated at two marker genes (Fig. 3c ), independently of the marker genes used. As expected, the quality of the reconstruction increased with the number of genes used to provide structural information in expression space, and with the fraction of spatially informative genes ( Supplementary Methods , Extended Data Fig. 4a, b ). The majority of spatial patterns were recapitulated faithfully even when only a single marker gene was used (Fig. 3c, d ). In addition, novoSpaRc identified the physical neighbourhoods from which cells originated when used de novo (up to inherent symmetries; see Supplementary Methods ), and pinpointed their true locations ( P < 0.05 compared to random assignment) when a handful of marker genes were used (Fig. 3e , Extended Data Fig. 5a, b ). We examined the expression patterns of four transcription factors that span the dorsal–ventral and anterior–posterior axes (Fig. 3d ). The quality of the reconstruction improved when applying the structural correspondence assumption ( Supplementary Methods , Extended Data Fig. 5d ). The de novo reconstruction correctly identified both axes of the embryo, and the reconstructed portrait was remarkably similar to the original one (Fig. 3d ). In general—because de novo reconstruction is performed without any prior information that would anchor the cells—the reconstructed configuration is similar up to global transformations (reflections, rotations and translations), relative to the respective axes of symmetry ( Supplementary Methods ). Consequently, the resulting patterns of gene expression might be shifted or flipped relative to the expected ones. However, there are features of a faithful reconstruction that we can test for, such that the reconstruction would be robust to small changes in the optimization parameters ( Supplementary Methods , Extended Data Fig. 4i ) and that the embedding of cells onto the embryo would be relatively localized—as we would expect for a biologically meaningful embedding (Fig. 3e ). This means that the distribution over locations that each cell is assigned should be localized, and indeed, the mean standard deviation of that distribution for all cells is significantly lower than that of a randomized embedding ( Supplementary Methods , Extended Data Fig. 4j ). Furthermore, we demonstrated that the results from novoSpaRc—as measured by correlation to observed imaging data and optimization error—were robust to optimization parameters and sources of noise, including partial sampling of cells, additive expression noise and dropouts (Extended Data Fig. 4c–h ). We next used novoSpaRc to reconstruct the stage 6 Drosophila embryo by using a scRNA-seq dataset 12 (Fig. 4a ). In that study, 84 marker genes were required to reconstruct a virtual embryo by distributing 1,297 cells over 3,039 locations. When we used novoSpaRc with the combination of both structural information and the reference atlas, the accuracy of reconstruction increased with the number of marker genes, reaching high correlation (Pearson correlation coefficient, 0.74) with the FISH data (Fig. 4b , Extended Data Fig. 5e ). 
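Given a coupling T from such a reconstruction, predicted spatial patterns ('virtual in situ hybridizations') can be read off as location-weighted averages of the single-cell profiles and scored gene by gene against reference patterns, as in the Pearson-correlation comparisons above. A minimal sketch of ours, under the assumption of a cells-by-genes expression matrix (names are illustrative):

```python
import numpy as np

def virtual_ish(X, T):
    """Project single-cell expression into tissue space.

    X : (N_cells, N_genes) expression matrix
    T : (N_cells, N_locations) probabilistic coupling from the reconstruction
    """
    w = T / T.sum(axis=0, keepdims=True)   # per-location weights over cells
    return w.T @ X                         # (N_locations, N_genes)

def per_gene_correlation(pred, ref):
    """Pearson correlation of each predicted pattern with its reference."""
    p = pred - pred.mean(axis=0)
    r = ref - ref.mean(axis=0)
    num = (p * r).sum(axis=0)
    den = np.sqrt((p ** 2).sum(axis=0) * (r ** 2).sum(axis=0)) + 1e-12
    return num / den
```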
The de novo, atlas-free reconstruction accurately separated the major post-gastrulation spatial domains (mesoderm, neurogenic ectoderm and dorsal ectoderm), as well as finer spatial domains (Fig. 4c, d ). We clustered the reconstructed patterns of the highly variable genes and averaged to obtain a representative pattern for each cluster, which we term the ‘archetype’ ( Methods , Supplementary Information ). novoSpaRc identified numerous distinct spatial archetypes (Fig. 4c, d , Extended Data Fig. 6 ). We compared representative genes of each spatial archetype with FISH images to visually assess the accuracy of the spatial reconstruction. Gene patterns that were expressed through the anterior–posterior or the dorsal–ventral axis were largely recapitulated: typical genes of the mesoderm (dorsal ectoderm), such as twi and sna ( zen and ush ), were colocalized ventrally (dorsally) (Fig. 4c, d , right, middle). novoSpaRc accurately captured localized spatial populations (Fig. 4c, d , left, Extended Data Fig. 6 , archetype 5), whereas less-extensive spatial domains were reconstructed with varying degrees of accuracy (Extended Data Fig. 6 ). Note that within the de novo reconstruction, accurate localization entails global transformations, as described above ( Supplementary Methods ). Fig. 4: novoSpaRc identifies spatial archetypes in the Drosophila embryo by using scRNA-seq data. a , Schematic overview. The expression patterns as reconstructed by novoSpaRc are compared with the BDTNP expression values. b , Reconstruction of the Drosophila embryo using scRNA-seq data. Distributions of gene-specific Pearson correlation coefficients reflect better reconstruction with increasing number of marker genes. c , Three of the spatial archetypes (1,3 and 9) that novoSpaRc identified in the Drosophila embryo. d , Representative genes for each of the spatial archetypes depicted in c . FISH data (left columns) are compared against the corresponding novoSpaRc predictions (‘virtual in situ hybridization’ (vISH); right columns). Full size image Before proceeding to more complex tissues, we reconstructed the zebrafish embryo dataset 5 (Extended Data Fig. 7 ). Similar to the original seminal study, we mapped the cells onto the surface of a hemisphere consisting of 64 distinct locations. The resulting spatial expression patterns highly correlated to the experimentally verified ones; novoSpaRc reconstructed the zebrafish embryo by using only 15 marker genes (in contrast to the 47 genes that were previously required 5 ) and the accuracy of the reconstruction increased with the number of marker genes (Extended Data Fig. 7 , Methods ). Furthermore—in contrast to previous reconstructions—no data imputation or other specialized preprocessing was necessary 5 . novoSpaRc charts diverse complex tissues To further demonstrate the applicability of novoSpaRc to complex tissues, diverse sequencing technologies and different organisms, we used it to reconstruct slices of mammalian brain cerebellum 23 (Fig. 5 ), the mammalian kidney 24 (Extended Data Fig. 8 ) and a dataset of hundreds of individual Drosophila embryos 22 (Extended Data Fig. 9 ). Fig. 5: novoSpaRc reconstructs mouse cerebellum tissue. a , The original and the coarse-grained spatial expression of a marker of Purkinje cells ( Pcp4 ) in a sagittal section of the cerebellum from direct spatial RNA sequencing 23 . 
b , The overall Pearson correlation between original gene expression and gene expression predicted by novoSpaRc increases markedly as more marker genes are used. The correlation when using only five marker genes is substantially higher than that of a random mapping of cells to locations. Density plots contain values for all 15,878 genes. c , The spatial gene expression of Pcp4 is visible with only five marker genes and is enhanced as more markers are used for the reconstruction. d , Examples of original and predicted expression for neuronal marker genes. Reconstruction was performed with 35 marker genes. e , novoSpaRc accurately reconstructs a coronal section of the cerebellum 23 . Full size image The adult mammalian brain is a well-studied, highly differentiated and complex tissue. To benchmark the capabilities of novoSpaRc in reconstructing complex tissues, we used mouse cerebellum slices from a recently developed spatial transcriptomics technology 23 . The dataset of sagittal sections contained 46,376 locations, corresponding to a single cell or a few cells, with a median of 52 quantified transcripts per location. To provide enough information to novoSpaRc, we first coarse-grained the data by binning neighbouring locations. This resulted in 7,704 locations, with a median of 379 quantified transcripts per location ( Methods , Fig. 5a ). novoSpaRc successfully reconstructed the whole transcriptome, with a Pearson correlation coefficient of 0.5 over all 15,878 genes when using 15 marker genes and 0.94 when using 50 marker genes (Fig. 5b , Supplementary Methods ). Spatial expression patterns emerged when using only a few markers. For example, spatial positions of Purkinje cells were revealed by reconstructing with only five marker genes (excluding all genes exhibiting an absolute Pearson correlation coefficient with Pcp4 of 0.25 or higher). The signal improved markedly when more markers were included (Fig. 5c ). The reconstructed cerebellum slices showed concordance with the original spatial gene expression for a large number of known cell-type marker genes (Fig. 5d ). To illustrate the versatility of novoSpaRc, we further applied it to a coronal section of a brain cerebellum 23 , with similar results (Fig. 5e ). Next, we used novoSpaRc to spatially reconstruct a single-cell dataset from whole kidney 24 , which is a complex tissue with stereotypical organization. In the absence of a reference atlas of gene expression, the reconstruction was performed de novo. We focused on six major cell types of the kidney (Extended Data Fig. 8 ) and mapped the cells onto a two-dimensional target space. The de novo reconstruction recapitulated the urine flow within the kidney sub-compartments, as shown by the spatial gene expression of corresponding marker genes (Extended Data Fig. 8 ). We note that, as no prior information was required for this reconstruction, this case demonstrates the applicability of novoSpaRc to a wide variety of medically relevant tissues. Finally, to show that novoSpaRc can reconstruct not only a prototypical tissue but also individual samples, we used a dataset that captures expression patterns in hundreds of individual Drosophila embryos 22 . In this dataset, the expression of four gap genes and four pair-rule genes was measured along the anterior–posterior axis for 101 and 177 embryos, respectively, providing a distribution over expression patterns. We used novoSpaRc to reconstruct the expression patterns of the gap and pair-rule genes for individual embryos.
For a given embryo, novoSpaRc reconstruction using a reference atlas based on the gene expression within the same embryo consistently outperformed reconstruction using a reference atlas based on the averaged gene expression across all embryos in the dataset (Extended Data Fig. 9 ), yet reached high correlation values for both: the median Pearson correlation coefficients for reconstructing a fourth gene from the three remaining genes were 0.99 (same-embryo atlas) versus 0.95 (averaged atlas) for the gap genes, and 0.94 versus 0.77 for the pair-rule genes. We examined the effect of the interpolation between structural and marker gene information, and evaluated the performance of novoSpaRc by comparing it to available reconstruction methods that fully rely on a reference atlas (Seurat 5 and DistMap 12 ) (Extended Data Figs. 10, 11). novoSpaRc has several advantages when compared to the other existing methods and overall shows substantial benefits in reconstruction performance (Extended Data Fig. 10 , Supplementary Discussion ). Identifying spatially informative genes A novoSpaRc-based spatial reconstruction allows us to identify known and potentially new spatially informative genes directly from the single-cell sequencing data. For the intestine and liver datasets, we recovered highly zonated genes without a reference atlas ( Methods , Supplementary Information ), and found that the top inferred zonated genes were supported experimentally and/or computationally (Fig. 6a , Supplementary Tables 1 , 2 ). Gene ontology enrichment analysis 25 further revealed that zonation-compatible biological processes enriched for different domains in the intestine and the liver were reconstructed by novoSpaRc ( Supplementary Information ). For the Drosophila single-cell dataset, we ranked all 8,924 genes according to how spatially informative they are ( Methods , Fig. 6b , Supplementary Information ), and found that transcription factors were (as known from classic genetics 26 ) among the most highly informative genes (Fig. 6c ). In addition, novoSpaRc identified numerous long non-coding RNAs and transcription factors as being highly spatially informative, many of them already predicted in a previous study 12 . Finally, we ranked all 15,878 genes in the cerebellum by how spatially informative they are ( Methods , Fig. 6d , Supplementary Information ), and found that well-known marker genes with a defined pattern of spatial expression are indeed among the highest-ranking spatially informative genes (Fig. 6d ). Fig. 6: novoSpaRc identifies spatially informative genes. a , Identifying spatially informative genes in the mammalian intestine and liver. We identify de novo (that is, with no marker genes used) the most highly zonated genes along the crypt-to-villus axis in the intestine (left) and across the axis of a liver lobule (right). The prediction of novoSpaRc is compared against the original expression patterns. The expression level of each gene is normalized to its maximum value. b – d , Identifying spatially informative genes in the Drosophila embryo (reconstruction with the BDTNP marker genes) and a slice of the mammalian cerebellum (reconstruction with 50 markers), using a measure of spatial autocorrelation. b , Expression patterns of the top 15 spatially informative genes in the Drosophila embryo.
c , The spatial autocorrelation values (spatial information index) of the 84 transcription factors (TFs) chosen for the BDTNP dataset 13 are among the highest values over all 8,924 genes of the fly embryo, demonstrating that they are identified as highly spatially informative. d , Top 10 spatially informative genes (out of the top 1,000 variable genes) in a section of the cerebellum. Discussion Together, we have demonstrated here that one can spatially reconstruct diverse biological tissues on the basis of a simple hypothesis about how gene expression is organized in space—a structural correspondence between the distances of cells in expression space and in physical space—and that it can be used to extract spatially informative genes. Our current implementation is based on pairwise comparison of cells and locations. This requirement can be readily altered. In fact, it is compelling to hypothesize that within certain biological contexts, different cell types may require higher-order interactions or exhibit different principles of spatial organization. Furthermore, we stress that because of the availability of general mathematical results in optimal-transport theory, our framework is versatile and can support a variety of alternative ways to compare distances in expression and physical space by varying the optimization loss functions ( Methods , Supplementary Methods ). Such alternative schemes are not currently supported by novoSpaRc, but could be implemented. Our data analyses and the success of the reconstructions by novoSpaRc suggest that we have identified a general principle for how gene expression is organized in tissue space ( Supplementary Discussion ). It will be interesting to find tissues for which this organization principle is weak or not valid. However, this principle may be underestimated, as most of the single-cell data available are relatively shallow and noisy. Our data also suggest that many more genes than perhaps anticipated are involved in spatial features and functions (including physiology and pathophysiology) of tissue. We have demonstrated that we can systematically identify at least a subset of these genes directly from single-cell data. In the future, we will extend these analyses to identify genes that are predicted to functionally interact in space. Finally, our developed framework can be flexibly extended beyond spatial reconstruction. We are currently using it to recover different types of biological signals, such as temporal progression on short (for example, cell cycle) and long (for example, developmental) timescales. Methods Data reporting No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Data pre-processing For datasets where normalized data was either not available or not used by the original authors, we adopted the standard library size normalization in log-space; that is, if d ij represents the raw count for gene i in cell j , we normalized it as $${d}_{ij}\to {d}_{ij}^{\prime }={\log }_{2}\left({10}^{5}\times \frac{{d}_{ij}}{{\sum }_{k}{d}_{kj}}+1\right).$$ Highly variable genes were identified by plotting the dispersion of a gene as a function of its mean and selecting the outliers above cut-off values (usually 0.125 for the mean and 1.5 for the dispersion).
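The normalization formula and highly-variable-gene selection just described translate directly into a few lines; the dispersion definition below (variance over mean) is one common choice and an assumption on our part, as is the toy count matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(2.0, size=(500, 200)).astype(float)   # toy genes x cells raw counts

# d'_ij = log2(1e5 * d_ij / sum_k d_kj + 1): per-cell library-size normalization in log-space
norm = np.log2(1e5 * counts / counts.sum(axis=0, keepdims=True) + 1)

mean = norm.mean(axis=1)
dispersion = norm.var(axis=1) / np.maximum(mean, 1e-12)    # one common dispersion definition
hvg = (mean > 0.125) & (dispersion > 1.5)                  # cut-off values quoted in the text
print(f"{hvg.sum()} highly variable genes selected")
```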
In the Slide-seq datasets 23 , we summed up the transcriptomes of neighbouring cells by rounding the coordinates of the physical locations to the next integer multiple of 50. This resulted in a total of 8,331 (9,890) cells for the sagittal (coronal) section of the cerebellum. Low-quality locations were further filtered out by requiring at least 50 genes per cell, resulting in a total of 7,704 (8,258) for the sagittal (coronal) section. Marker genes for the reconstruction were randomly selected from the set of 747 genes. As one of the means of benchmarking the different reconstructions was to visually assess the expression pattern of Pcp4 , we ensured that no genes with a Pearson correlation of | R | ≥ 0.25 with Pcp4 were selected as marker genes. Mathematical formulation of novoSpaRc The procedure used by novoSpaRc includes several steps. We first compute the graph-based distance matrices for N single cells in expression space, \({D}^{\exp }\in {R}^{N\times N}\) , and for M locations, \({D}^{{\rm{phys}}}\in {R}^{M\times M}\) (Extended Data Fig. 1a , Supplementary Methods ). Then, optionally, if a reference atlas is available, we compute the matrix of disagreement, \({D}^{\exp ,{\rm{phys}}}\in {R}^{N\times M}\) , between each of the cells to each of the locations, on the basis of the inverse correlation between the partial expression profile for each location given by the reference atlas and the respective expression profile for each cell. Equipped with these measures of intra- and inter-dataset distances, we set out to find an optimal (probabilistic) assignment of each of the single cells to cellular physical locations. We formulate this problem as an optimization problem within the generalized framework of optimal transport 14 , 15 , 16 . Optimal transport is a mathematical framework that was first established in the eighteenth century by Gaspard Monge and was initially motivated by the question of the optimal (minimal cost) way to rearrange one pile of dirt into a different formation (the respective minimal cost is appropriately termed the ‘earth mover’s distance’). The framework evolved both theoretically and computationally 15 , 16 , 21 and was extended to the correspondence between pairwise similarity measures via the Gromov–Wasserstein distance 19 , 20 . Thus, in our context, it allows us to build on these results and tools to feasibly solve the cellular assignment problem. We aim to find a probabilistic embedding, \(T\in {R}_{+}^{N\times M}\) , of N single cells to M locations that would minimize the discrepancy between the pairwise graph-based distances in expression space and in physical space, and—if a reference atlas is available—simultaneously minimize the discrepancy between its values across the tissue and the expression profiles of embedded single cells. For each cell i , the value of T i , j is the relative probability of embedding it to location j . These optimization requirements over T are formulated as follows. We measure the pairwise discrepancy of T for the expression and physical spaces using the Gromov–Wasserstein discrepancy 19 $${D}_{1}(T)=\sum _{i,j,k,l}L({D}_{i,k}^{\exp },\,{D}_{j,l}^{{\rm{phys}}}){T}_{i,j}{T}_{k,l},$$ where L is a loss function; specifically, we use the quadratic loss \(L(a,b)=\frac{1}{2}{|a-b|}^{2}\) . This term captures our preference to embed single cells such that their pairwise distance structure in expression space would resemble their pairwise distance structure in physical space. 
Intuitively, if expression profiles that correspond to cells i and k are embedded into cellular locations j and l , respectively, then the distance between i and k in expression space should correspond to the distance between j and l in physical space (for example, if i and k are close expression-wise they should be embedded into close locations, and vice versa). The discrepancy measure weighs these correspondences by the respective probability of the two embedding events. To measure the match to existing prior knowledge, or an available reference atlas, we consider $${D}_{2}(T)=\sum _{i,j}{D}_{i,j}^{\exp ,\,\mathrm{phys}}\,{T}_{i,j}.$$ This term represents the average discrepancy between cells and locations according to the reference atlas, weighted by T . Finally, we regularize T by favouring embeddings with higher entropy, where entropy is defined as $$H(T)=-\sum _{i,j}{T}_{i,j}\,\log \,{T}_{i,j}.$$ Intuitively, higher entropy implies more uncertainty in the mapping. Entropic regularization drives the solution away from arbitrary deterministic choices and was shown to be computationally efficient 21 . Putting these together, we define the optimization problem for the optimal probabilistic embedding T *: $${T}^{\ast }=\mathop{{\rm{argmin}}}\limits_{T}\,(1-\alpha ){D}_{1}(T)+\alpha {D}_{2}(T)-\varepsilon H(T)$$ subject to $$\sum _{j}{T}_{i,j}={p}_{i}\,\,\forall i\in \{1,\ldots ,N\}$$ $$\sum _{i}{T}_{i,j}={q}_{j}\,\,\forall j\in \{1,\ldots ,M\}$$ where ε is a non-negative regularization constant and \(\alpha \in [0,1]\) is a constant interpolating between the first two objectives; it can be set to α = 0 when no reference atlas is available. The constraints reflect the fact that the transport plan T should be consistent with the marginal distributions \(p\in \{p\in {R}_{+}^{N};\,\sum _{i}{p}_{i}=1\}\) and \(q\in \{q\in {R}_{+}^{M};\,\sum _{i}{q}_{i}=1\}\) , over the original input spaces of expression profiles and cellular locations, respectively. These marginals can capture, for example, varying densities of single cells in the vicinity of different cellular grid locations, or the quality of different single-cell expression profiles (hence forcing low-quality single cells to have a smaller contribution to the reconstructed tissue-wide expression patterns). When such prior knowledge is lacking, p and q could be set to be uniform distributions. We derive an efficient algorithm for this optimization problem, inspired by the combined results for entropically regularized optimal transport 21 and mapping based on Gromov–Wasserstein distance between metric-measure spaces 20 ( Supplementary Methods ). Then, given the original single-cell expression profiles, represented by a matrix \(Y\in {R}^{N\times g}\) (for N single cells and g genes), and the inferred probabilistic embedding \(T\in {R}_{+}^{N\times M}\) (for N single cells and M locations), we can derive a virtual in situ hybridization (vISH), \(S={Y}^{T}T\in {R}_{+}^{g\times M}\) (for g genes and M locations), which contains the gene expression values for every cellular location of the target space. Note again that because our mapping is probabilistic, each of the cellular locations of the vISH does not correspond to a single cell in the original data. Rather, the vISH represents the expression patterns over an averaged, stereotypical tissue from which the single cells could have originated. novoSpaRc algorithm To spatially reconstruct gene expression, novoSpaRc performs the following steps: 1. Read the gene expression matrix. 1a.
Optional: select a random set of cells for the reconstruction; 1b. Optional: select a small set of genes (for example, highly variable). 2. Construct the target space. 3. Set up the optimal-transport reconstruction. 3a. Optional: use existing information on marker genes, if available. 4. Perform the spatial reconstruction including: 4a. Assigning cells a probability distribution over the target space; 4b. Deriving a vISH for all genes over the target space. The novoSpaRc package, system requirements, installation guide and demo instructions are provided at . Generating in silico single-cell data for the BDTNP dataset To test the performance of novoSpaRc with single-cell resolution ground truth, we generated an in silico single-cell dataset for the BDTNP data 13 . In that case we have access to expression profiles for different locations across the embryo. We effectively dissociate the embryo by taking these expression profiles to be the expression profiles of single cells in our in silico set, masking their true original locations, and use novoSpaRc to reconstruct the original embryo (which may be done at lower spatial resolution). Identification of spatial archetypes The identification of spatial archetypes is performed by clustering the spatial expression of a given set of genes. The gene expression is first clustered by hierarchical clustering at the vISH level, although in principle different clustering methods can be used. The number of archetypes is chosen by visually inspecting the resulting dendrogram. The expression values of each gene of the cluster are then averaged per location to produce the spatial archetype for that cluster. Representative genes for each cluster are identified by computing the Pearson correlation of each gene within the cluster against the spatial archetype. The derivation of the spatial archetypes strongly depends on the set of genes used. We observed that the set of highly variable genes generally resulted in sensible spatial archetypes. A list of genes that correspond to each archetype is provided in the Supplementary Information . Identification of zonated genes For tissues with one-dimensional symmetry, we produce a ranking of highly zonated genes, both according to the original spatial expression patterns (Extended Data Fig. 2c, d ) and the reconstructed patterns (Fig. 6a ). The input is a spatial expression matrix (either original or reconstructed), specifying the expression level of each gene in each of the spatial zones. Then, to find a ranked list of genes that are highly zonated towards the first or last spatial zones (for example, the crypt in the intestine), we first select all genes (i) whose highest expression occurs in that respective zone; (ii) whose maximum expression value is in the top 1% of all genes; and (iii) that are statistically significantly zonated. To compute the zonation significance of individual genes, we used a non-parametric test based on Kendall’s tau coefficient. Kendall’s tau is a measure of the correspondence between two ranked lists—in our case, the expression values of a given gene over consecutive spatial zones and the numbering of the zones. Finally, the remaining genes are ranked according to their centre of mass. The lists of predicted zonated genes based on novoSpaRc’s reconstruction for the mammalian intestine and liver are available in the Supplementary Information .
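Stepping back to the core method: the optimization problem defined above can be sketched compactly for the de novo case (α = 0, no reference atlas). The sketch below uses the POT library's entropic Gromov–Wasserstein solver as a stand-in for novoSpaRc's own optimizer, builds graph-based distances from k-nearest-neighbour graphs plus shortest paths (one plausible reading of the Methods; the exact construction is in the Supplementary Methods), and runs on synthetic data with an illustrative ε.

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def graph_distances(points, k=5):
    """Pairwise graph-based distances: shortest paths over a kNN graph."""
    g = kneighbors_graph(points, n_neighbors=k, mode="distance")
    d = shortest_path(g, directed=False)
    d[~np.isfinite(d)] = d[np.isfinite(d)].max()  # guard against disconnected graphs
    return d

rng = np.random.default_rng(0)
N, M, G = 100, 80, 30                       # cells, locations, genes (toy sizes)
expr = rng.random((N, G))                   # toy expression matrix Y (N x G)
locs = rng.random((M, 2))                   # toy two-dimensional target space

D_exp = graph_distances(expr)               # D^exp  (N x N)
D_phys = graph_distances(locs)              # D^phys (M x M)
p = np.full(N, 1.0 / N)                     # uniform marginal over cells
q = np.full(M, 1.0 / M)                     # uniform marginal over locations

# Entropically regularized Gromov-Wasserstein coupling T (N x M).
T_plan = ot.gromov.entropic_gromov_wasserstein(
    D_exp, D_phys, p, q, loss_fun="square_loss", epsilon=5e-2)

vish = expr.T @ T_plan                      # virtual ISH, S = Y^T T (G x M)
```

Adding the reference-atlas term D2 with α > 0 would turn this into a fused variant of the same problem, which the novoSpaRc package handles directly.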
Gene ontology enrichment We used GOrilla for gene ontology (GO) enrichment analysis 25 , in which GO enrichment was computed on the basis of target and background lists of genes ( Supplementary Methods ). For both the target and background lists of genes, we selected genes that had a maximum expression value in the top 10% of all genes. The target lists for genes that were zonated towards the boundaries of the one-dimensional spatial axes (crypt and V6 in intestine; layers 1 and 9 in liver) were further filtered to contain only genes that are statistically significantly zonated, as described in ‘Identification of zonated genes’. The background lists contained the corresponding complements of the target lists. Identification of spatially informative genes We use a spatial autocorrelation measure to rank genes as spatially informative. Specifically, we use Moran’s I as a measure for global spatial autocorrelation. For each individual gene i , the Moran’s I score for its spatial expression, y i , over n cellular locations is: $$I=\frac{n}{{S}_{0}}\frac{{\sum }_{i,j}{z}_{i}{w}_{i,j}{z}_{j}}{{\sum }_{i}{{z}_{i}}^{2}}$$ where \({z}_{i}={y}_{i}-\underline{{y}_{i}}\) , \(\underline{{y}_{i}}\) is the mean expression of gene i , \({S}_{0}=\sum _{i,j}{w}_{i,j}\) and w i , j is a spatial weights matrix, which we base on a k -nearest neighbours graph for each cellular location ( k = 8). To calculate the Moran’s I score and the respective P values for different genes, we used the implementation of PySAL, a Python spatial analysis library 27 . The Moran’s I scores with their respective P values, based on novoSpaRc’s reconstructions for all genes of the Drosophila embryo, zebrafish embryo and cerebellum, are available in the Supplementary Information . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The scRNA-seq datasets were acquired from the Gene Expression Omnibus (GEO) database with the following accession numbers: GSE99457 for the intestinal epithelium 10 , GSE84490 for the liver 7 , GSE95025 for the Drosophila embryo 12 , GSE66688 for the zebrafish embryo 5 and GSE107585 for the kidney 24 . The cerebellum Slide-seq datasets 23 were acquired from the Broad Institute Single Cell Portal ( ). The individual Drosophila embryos dataset 22 is available as a supplementary information file of the original manuscript 22 . The BDTNP dataset was downloaded directly from the BDTNP webpage 13 . Code availability A Python package for novoSpaRc, and the scripts for reconstructing selected tissues presented in the manuscript, are provided at . | A three-dimensional computer model enables scientists to quickly determine which genes are active in which cells, and their precise location within an organ. A team led by Nikolaus Rajewsky, Berlin, and Nir Friedman, Jerusalem, has published the new method and their insights gained from this in Nature. Professor Nikolaus Rajewsky is a visionary: He wants to understand exactly what happens in human cells during disease progression, with the goal of being able to recognize and treat the very first cellular changes. "This requires us not only to decipher the activity of the genome in individual cells, but also to track it spatially within an organ," explains the scientific director of the Berlin Institute for Medical Systems Biology (BIMSB) at the Max Delbrück Center for Molecular Medicine (MDC) in Berlin. 
For example, the spatial arrangement of immune cells in cancer ("microenvironment") is extremely important in order to diagnose the disease accurately and select the optimal therapy. "In general, we lack a systematic approach to molecularly capture and understand the (patho)physiology of a tissue." Maps for very different tissue types Rajewsky has now taken a big step towards his goal with a major new study that has been published in the scientific journal Nature. Together with Professor Nir Friedman from the Hebrew University of Jerusalem, Dr. Mor Nitzan from Harvard University in Cambridge, USA, and Dr. Nikos Karaiskos, a project leader from his own research group on "Systems Biology of Gene Regulatory Elements," the scientists have succeeded in using a special algorithm to create a spatial map of gene expression for individual cells in very different tissue types: in the liver and intestinal epithelium of mammals, as well as in embryos of fruit flies and zebrafish, in parts of the cerebellum, and in the kidney. "Sometimes purely theoretical science is enough to publish in a high-ranking science journal—I think this will happen even more frequently in the future. We need to invest a lot more in machine learning and artificial intelligence," says Nikolaus Rajewsky. "Using these computer-generated maps, we are now able to precisely track whether a specific gene is active or not in the cells of a tissue part," explains Karaiskos, a theoretical physicist and bioinformatician who developed the algorithm together with Mor Nitzan. "This would not have been possible in this form without our model, which we have named 'novoSpaRc.'" The scientific question is like a puzzle for Professor Nikolaus Rajewsky and Nikos Karaiskos. Credit: Felix Petermann, MDC Spatial information was previously lost It is only in recent years that researchers have been able to determine—on a large scale and with high precision—which information individual cells in an organ or tissue are retrieving from the genome at any given time. This was thanks to new sequencing methods, for example multiplex RNA sequencing, which enables a large number of RNA molecules to be analyzed simultaneously. RNA is produced in the cell when genes become active and proteins are formed from their blueprints. Rajewsky recognized the potential of single-cell sequencing early on, and established it in his laboratory. "But for this technology to work, the tissue under investigation must first be broken down into individual cells," explains Rajewsky. This process causes valuable information to be lost: for example, the original location in the tissue of the particular cell whose gene activity has been genetically decoded. Rajewsky and Friedmann were therefore looking for a way to use data from single-cell sequencing to develop a mathematical model that could calculate the spatial pattern of gene expression for the entire genome—even in complex tissues. The teams led by Rajewsky and Dr. Robert Zinzen, who also works at BIMSB, already achieved a first breakthrough two years ago. In the journal Science, they presented a virtual model of a fruit fly embryo. It showed which genes were active in which cells in a spatial resolution that had never before been achieved. This gene mapping was made possible with the help of 84 marker genes: in situ experiments had determined where in the egg-shaped embryo these genes were active at a certain point in time. 
The researchers confirmed their model worked with further complex in situ experiments on living fruit fly embryos. NovoSpaRc enables a three-dimensional jigsaw puzzle of gene expression. Credit: Lior Friedman A puzzle with tens of thousands of pieces and colors "In this model, however, we reconstructed the location of each cell individually," said Karaiskos. He was one of the first authors of both the Science study and the current Nature study. "This was possible because we had to deal with a considerably smaller number of cells and genes. This time, we wanted to know whether we can reconstruct complex tissue when we have hardly any or no previous information. Can we learn a principle about how gene expression is organized and regulated in complex tissues?" The basic assumption for the algorithm was that when cells are neighbors, their gene activity is more or less alike. They retrieve more similar information from their genome than cells that are further apart. To test this hypothesis, the researchers used existing data. For liver, kidney and intestinal epithelium there was no additional information. The group had been able to collect only a few marker genes by using reconstructed tissue samples. In one case, there were only two marker genes available. "It was like putting together a massive puzzle with a huge number of different colors—perhaps 10,000 or so," explains Karaiskos, trying to describe the difficult task he was faced with when calculating the model. "If the puzzle is solved correctly, all these colors result in a specific shape or pattern." Each piece of the puzzle represents a single cell of the tissue under investigation, and each color an active gene that was read by an RNA molecule. The method works regardless of sequencing technique "We now have a method that enables us to create a virtual model of the tissue under investigation on the basis of the data gained from single-cell sequencing in the computer—regardless of which sequencing method was used," says Karaiskos. "Existing information on the spatial location of individual cells can be fed into the model, thus further refining it." With the help of novoSpaRc, it is then possible to determine for each known gene where in the tissue the genetic material is active and being translated into a protein. Now, Karaiskos and his colleagues at BIMSB are also focusing on using the model to trace back over and even predict certain developmental processes in tissues or entire organisms. However, the scientist admits there may be some specific tissues that are incompatible with the novoSpaRc algorithm. But this could be a welcome challenge, he says: A chance to try his hand at a new puzzle! | 10.1038/s41586-019-1773-3 |
Medicine | Study suggests side-effects and costs are biggest concerns for users of HIV pre-exposure prophylaxis | Lorraine T. Dean et al, Optimizing Uptake of Long-Acting Injectable Pre-exposure Prophylaxis for HIV Prevention for Men Who Have Sex with Men, AIDS and Behavior (2023). DOI: 10.1007/s10461-023-03986-5 Journal information: AIDS and Behavior | https://dx.doi.org/10.1007/s10461-023-03986-5 | https://medicalxpress.com/news/2023-01-side-effects-biggest-users-hiv-pre-exposure.html | Abstract Pre-exposure prophylaxis (PrEP) is a highly effective HIV prevention tool. Long-acting injectable PrEP (LAI-PrEP) offers another opportunity to reduce HIV. However, how at-risk individuals will consider LAI-PrEP over other modes of administration is unclear. We conducted a discrete choice experiment on preferences for PrEP among a sample of N = 688 gay, bisexual, and other men who have sex with men (GBMSM). We analyzed preferences for mode of administration, side-effects, monetary cost, and time cost using a conditional logit model and predicted preference for PrEP options. LAI-PrEP was preferred, despite mode of administration being the least important PrEP attribute. Side-effects were the most important attribute influencing preferences for PrEP (44% of decision); costs were second-most-important (35% of decision). PrEP with no side-effects was the most important preference, followed by monthly out-of-pocket costs of $0. Practitioners and policymakers looking to increase PrEP uptake should keep costs low, communicate clearly about PrEP side-effects, and allow the use of patient-preferred modes of PrEP administration, including LAI-PrEP. Resumen La profilaxis prexposición (PrEP) es una herramienta de prevención del VIH muy eficaz. La PrEP inyectable de acción prolongada (LAI-PrEP) ofrece otra oportunidad para reducir el VIH. Sin embargo, no está claro cómo las personas en riesgo considerarán LAI-PrEP sobre otros modos de administración. Realizamos un experimento de elección discreta sobre las preferencias por la PrEP entre una muestra de N = 688 hombres homosexuales, bisexuales y otros hombres que tienen sexo con hombres (GBMSM). Analizamos las preferencias por el modo de administración, los efectos secundarios, el costo monetario y el costo del tiempo mediante un modelo logit condicional y la preferencia prevista por las opciones de PrEP. Se prefirió LAI-PrEP, a pesar de que el modo de administración es el atributo de PrEP menos importante. Los efectos secundarios fueron el atributo más importante que influyó en las preferencias por la PrEP (44% de la decisión); los costos fueron los segundos más importantes (35% de la decisión). La PrEP sin efectos secundarios fue la preferencia más importante, seguida de costos de bolsillo mensuales de $0. Los médicos y legisladores que buscan aumentar la aceptación de la PrEP deben mantener los costos bajos, comunicar claramente los efectos secundarios de la PrEP y permitir el uso de los modos de administración de la PrEP preferidos por los pacientes, incluido LAI-PrEP. Introduction The HIV epidemic continues to disproportionately impact gay, bisexual, and other men who have sex with men (GBMSM) [ 1 ]. Pre-exposure prophylaxis (PrEP) offers an effective approach to prevent HIV infections among GBMSM and reduce incidence in this population.
There are currently two oral forms of PrEP available including tenofovir disoproxil fumarate/emtricitabine (TDF/FTC), which is also known as Truvada and was approved by the FDA in 2012, and tenofovir alafenamide/emtricitabine (TAF/FTC), which is also known as Descovy and was approved by the FDA in 2019 [ 2 ]. In addition, the recent US Food and Drug Administration (FDA) approval of long-acting injectable PrEP (LAI-PrEP) offers another effective tool for reducing new infections of HIV [ 3 ]. Preferences for PrEP are especially important to consider for African American/Black and Hispanic/Latinx GBMSM who are at high risk of exposure to HIV [ 4 , 5 ], whose HIV rates remain high [ 6 ] and who have not been sufficiently reached by oral PrEP [ 7 ]. There are several barriers to obtaining PrEP, and patients often make trade-offs in their decisions to use PrEP. Financial and time costs are prohibitive for many individuals, and previous studies have consistently identified them as the main barriers reported by individuals considering PrEP [ 8 , 9 , 10 , 11 , 12 ]. Despite most private and state Medicaid plans covering PrEP [ 13 ], and private health insurers increasingly expanding coverage for PrEP, cost-sharing associated with these insurance plans, including high out-of-pocket costs in the form of co-pays, coinsurance, and deductibles (i.e., people are still “underinsured”), remains a challenge [ 14 , 15 ]. In our previous study among MSM in three US cities who were prescribed PrEP, co-pays and deductibles for medical services were a greater barrier to accessing PrEP than the cost and co-pays associated with the medication itself [ 12 ]. Even those with prescription drug coverage through insurance plans could pay more than $2000 per year in co-pays for PrEP and its associated laboratory testing [ 16 ]. Patients must also spend the time required to attend appointments and refill prescriptions, potentially having to miss work and lose income. These monetary and time costs are not the only consideration for patients taking PrEP as they may be making decisions based on other aspects of PrEP, including the potential side-effects of PrEP and the possibility of stigma around sexual or drug use behaviors that increase risk for HIV infection. As new modes of administration for PrEP emerge, real-world implementation questions arise of how patients will decide which PrEP modes to use, and what financial, cost, and other trade-offs they may be weighing in their decision. Introduction of LAI-PrEP raises many implementation questions about the likelihood of uptake compared to daily oral PrEP, on-demand PrEP (also known as 2-1-1 PrEP, in which two pills are taken at least two but not more than 24 hours before sex, another pill 24 hours after the first, and a final pill taken 24 hours after the second) and a subcutaneous PrEP implant [ 17 ]. Optimizing PrEP implementation and maximizing reductions in HIV incidence require an understanding of the decision-making process related to PrEP including newer LAI-PrEP formulations among at-risk populations. In this study, we conducted a discrete choice experiment (DCE) among a sample of racially diverse at-risk GBMSM to determine preferences for LAI-PrEP and other formulations, with the goal of identifying optimal approaches for effective implementation.
Methods Development of DCE DCEs are a class of conjoint analyses where respondents make choices between at least two hypothetical alternatives that vary in several key attributes. By making a series of choices, the independent impact of each attribute on preferences can be calculated. This approach better approximates the complexity of real-world health decision-making, in which the choice of whether or not to engage with a treatment depends on several factors rather than just one element. Preference elicitation methods have been used previously to assess preferences for HIV testing and treatment in the US [ 18 , 19 ]. Before completing the DCE, participants were shown the following text: “In this next section, you will choose between two different potential PrEP choices. You will be shown some information about these two choices—the options are the same except for the things that differ here, including being equally effective at preventing HIV. You should select the PrEP option you would prefer.” To develop our DCE, we used the checklist of best practices for DCE developed by a working group from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) [ 20 ]. We conducted interviews with 25 GBMSM seeking care at a sexually transmitted infection (STI) clinic in Rhode Island to develop a list of potential attributes for inclusion [ 21 ]. We narrowed this list to a set of four attributes based on this formative work and the expert opinion of the research team: cost, travel time, mode of administration, and side-effects. The full list of attributes and levels and their exact phrasing is found in the appendix [ 22 ]. We developed levels for the costs and travel time attributes based on the ranges provided by participants according to the amounts they would be willing to pay for PrEP and how far they would be willing to travel for an appointment. Levels for mode of administration and side-effects were derived from clinical recommendations and experience. The DCE was programmed as an online survey in Qualtrics, using a randomized design to ensure balance of all levels. Each participant was faced with eight different choice tasks where they chose between two PrEP options. The DCE was placed in the middle of a larger survey on attitudes towards PrEP and willingness to pay for PrEP medication and services. Fielding the DCE We recruited GBMSM between May 2020 and October 2021 through electronic advertising on several social networking applications targeted to GBMSM (Scruff, Jack’d) as well as targeted advertisements on Facebook and Instagram. To be eligible for the study, respondents needed to be 18 years or older, have been assigned male at birth or currently identify as male, have been sexually active with at least one man in the last 12 months, be HIV-negative, speak English or Spanish, and live in New England (Massachusetts, Connecticut, Rhode Island, Maine, New Hampshire, and Vermont). The DCE was available in English or Spanish depending on participant preference. Participants were given one week from starting the survey to complete it and were blocked from taking it if someone from the same IP address had already completed the survey. Participants who completed the survey and provided an email address were sent an electronic $25 gift card.
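For illustration, the pairing of attribute levels behind such choice tasks can be sketched in a few lines. The levels below are paraphrased from this paper's four attributes; note that the real instrument used Qualtrics' balanced randomized design, whereas this toy version samples levels independently.

```python
import random

# Attribute levels paraphrased from the paper (exact survey phrasing is in its appendix).
ATTRIBUTES = {
    "cost": ["$0", "$10", "$25", "$50", "$100", "$200"],
    "side_effects": ["none", "on starting", "persist while on PrEP", "long-term"],
    "travel_time": ["30 min", "1 h", "2 h", "3 h", "4 h"],
    "mode": ["daily pill", "pill at time of sex", "injection every few months", "implant"],
}

def random_profile(rng):
    """Draw one hypothetical PrEP option (one level per attribute)."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

rng = random.Random(42)
tasks = []
for _ in range(8):                      # eight choice tasks per respondent
    a, b = random_profile(rng), random_profile(rng)
    while a == b:                       # the two options must differ somewhere
        b = random_profile(rng)
    tasks.append((a, b))
```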
We dropped suspected bots or fake responses from the survey if they gave inconsistent responses (e.g., respondents who reported having sex with a man during the screener but not later on in the survey) or had an IP address located outside of New England. Analysis of Results We calculated descriptive statistics (means, medians, standard deviations, ranges) of demographic variables. For the DCE, we used a conditional logit model with an Efron approximation to estimate preferences for attribute levels. We used dummy coding and set the least preferred level for each attribute to zero. We conducted a subgroup analysis of the coefficients based on several survey questions: participant race and ethnicity, participant self-reported income (above or below $75,000 per year), whether the participant reported ever taking PrEP in the past (even if just one pill), and whether the participant described themselves as willing to take PrEP in the future (definitely or probably willing, compared to those who were maybe, probably not, or definitely not willing). We calculated the relative importance of each attribute, or the percent of the decision that is associated with each attribute, by taking the distance between the highest and lowest coefficient within an attribute and normalizing across all attributes. We also simulated preferences for this population making hypothetical choices between different PrEP options. We predicted the share of respondents preferring a theoretical option by using the coefficients of the conditional logistic model. All analyses were conducted in R using the RStudio application (2021.09.02 Build 382) and the “clogit” command. The study was approved by the Institutional Review Board at Miriam Hospital in Providence, Rhode Island. Results Respondent Demographics A total of N = 688 GBMSM participated in the study. Demographic characteristics and PrEP use history for this sample are found in Table 1 . Most of the participants had never used PrEP but most were willing to use it. Some of the demographic variables do not sum to N = 688 because respondents did not answer or indicated they did not know the answer to certain questions. Table 1 Demographics of Discrete Choice Experiment Respondents (N = 688, rows may not sum to 688 due to missing data) Importance of PrEP Attributes Figure 1 shows the importance of each attribute for the decision between PrEP options overall. Mode of administration (9.1%) was the least important attribute. The two most important PrEP attributes were side-effects (43.5%) and total out-of-pocket cost (35.2%), followed by time for follow-up visits (12.2%). Fig. 1 Relative Importance of Each Included Attribute (Percent to which the attribute contributed to the decision to use PrEP) Preferences for Each Level Within Each Attribute Table 2 shows the conditional logit model coefficient for each level within each attribute. These coefficients are the results of a logistic regression that shows how the presence of each level within a PrEP option affected the participant’s choice. Higher coefficients indicate that participants were more likely to choose a PrEP option with that level, while lower coefficient values are associated with being less likely to choose PrEP. Coefficients for a level should be interpreted relative to those of other levels within that attribute, and values are interpreted in comparison to the other attribute values in the DCE.
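Before turning to the specific coefficient values, the model fitting itself can be sketched in Python; this is a hedged analogue of the R clogit workflow, assuming statsmodels' ConditionalLogit (a conditional logistic regression grouped by choice task; its tie handling differs from the Efron approximation). The data here are simulated: three toy dummy-coded attribute columns and one chosen option per two-alternative task.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_tasks = 400
X = rng.integers(0, 2, size=(2 * n_tasks, 3)).astype(float)  # toy dummy-coded levels
groups = np.repeat(np.arange(n_tasks), 2)                    # one group per choice task
utility = X @ np.array([0.6, 0.8, 0.2])                      # assumed "true" part-worths

chosen = np.zeros(2 * n_tasks)
for t in range(n_tasks):                                     # each task: pick one option
    pair = np.flatnonzero(groups == t)
    prob = np.exp(utility[pair]) / np.exp(utility[pair]).sum()
    chosen[rng.choice(pair, p=prob)] = 1.0

exog = pd.DataFrame(X, columns=["cost_low", "no_side_effects", "short_visit"])
res = ConditionalLogit(chosen, exog, groups=groups).fit()
print(res.params)   # estimates should land near the assumed part-worths
```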
The coefficients for $200, long-term side-effects, 4-h travel time, and a PrEP implant were the lowest. Among modes of administration, individuals were most likely to prefer injection every few months (coefficient 0.16, 95% CI 0.09, 0.23), followed by a pill at the time of sex (coefficient 0.08, 95% CI 0.00, 0.15), then a daily pill (coefficient 0.01, 95% CI -0.07, 0.09), and an implant every few months (coefficient constrained to be 0). However, costs and side-effects exhibited the strongest influences on PrEP preferences, over and above mode of administration. Lower monetary cost PrEP had higher logistic coefficients. PrEP with an out-of-pocket cost of $0 was the most preferred, with a coefficient of 0.62 (95% CI 0.52, 0.71), followed by $10 PrEP at 0.61 (95% CI 0.51, 0.70), $25 PrEP at 0.41 (95% CI 0.31, 0.51), $50 PrEP at 0.40 (95% CI 0.30, 0.50), $100 PrEP at 0.26 (95% CI 0.16, 0.36), and $200 PrEP was the reference value with a coefficient of zero. PrEP with no side-effects was the most preferred with a coefficient of 0.76 (95% CI 0.69, 0.84), side-effects upon starting was 0.55 (95% CI 0.47, 0.63), and side-effects that persist while on PrEP were 0.42 (95% CI 0.25, 0.42) with long-term side-effects as the reference value. Shorter travel times were also generally associated with higher coefficients, with 30-min travel time having a coefficient of 0.21 (95% CI 0.13, 0.30), one hour 0.19 (95% CI 0.10, 0.27), two hours 0.16 (95% CI 0.07, 0.25), and three hours 0.17 (95% CI 0.09, 0.26) with four hours as the reference value. Table 2 Logistic coefficients for each PrEP attribute Table 2 shows these coefficients for all levels of attributes and the significance of each coefficient. Coefficients can be compared across attributes to show the relative preferences for each level. Cost and side-effects were the two most important attributes; specifically, “no side-effects” was the single most preferred PrEP level and $0 out-of-pocket cost was the second most important. We also conducted an analysis to determine the coefficients associated with each attribute level for different demographic groups and those with different experiences with PrEP, with the results of this analysis found in the appendix [ 22 ]. Overall, lower income people (those making less than $75,000 per year) had statistically significantly higher coefficients associated with lower cost PrEP and lower coefficients associated with higher cost PrEP. Side-effects were more important for White respondents than for other racial and ethnic groups. Finally, we explored the coefficients associated with each level based on experience with and self-reported willingness to take PrEP in the future. Those who had taken PrEP in the past were more sensitive to out-of-pocket costs (relatively higher coefficients associated with lower costs) than those who had never taken PrEP. Those who were more willing to take PrEP in the future (probably or definitely willing) had higher coefficients associated with fewer side-effects and lower coefficients associated with worse side effects. Predicted Preference Shares Using the results of the DCE, we simulated preferences to predict how cost and side-effects (the most important attributes for preferences) influence the average probability of PrEP uptake in Fig. 2 , starting from a 50% baseline. For example, if PrEP costs increased from $0 to $10, 0.5% of respondents would not be interested.
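The relative-importance figures reported earlier and these simulated shares follow directly from the coefficients above; the short computation below uses the values as published (small deviations from the reported 43.5%/35.2%/12.2%/9.1% and 10.7% figures reflect coefficient rounding).

```python
import math

# Highest coefficient per attribute; the reference (worst) level of each is 0,
# so the coefficient range per attribute equals its best level's coefficient.
best = {"side_effects": 0.76, "cost": 0.62, "travel_time": 0.21, "mode": 0.16}
total = sum(best.values())
importance = {a: round(100 * r / total, 1) for a, r in best.items()}
print(importance)   # ~{'side_effects': 43.4, 'cost': 35.4, 'travel_time': 12.0, 'mode': 9.1}

def share(v_a, v_b):
    """Predicted share choosing option A over B in a two-option task (logit rule)."""
    return math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))

print(share(0.62, 0.61))  # $0 vs $10 PrEP, all else equal: ~50.2% vs ~49.8%
print(share(0.76, 0.55))  # no side-effects vs side-effects on starting: ~55.2% vs ~44.8%
```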
Side-effects also made people less interested in PrEP; if PrEP had only side-effects on starting, 10.7% fewer people would be interested in PrEP compared to PrEP with no side-effects. The overall most preferred combination of attributes was PrEP that cost $0 out-of-pocket per month, required 30 min of travel time, had no side-effects, and was administered by injection every few months. Fig. 2 Effects of changing the characteristics on the average probability of uptake for PrEP for the overall survey population Discussion This is among the first studies to evaluate the decision-making process between different formulations of PrEP, including LAI-PrEP, among at-risk GBMSM. In this study of racially diverse GBMSM, we identified a strong preference for PrEP when offered as an injectable treatment, with no side effects, at no cost, and with visit times of 30 minutes or less. LAI-PrEP was the most preferred mode of administration though more strongly preferred by White respondents than those of other races, by people who had taken PrEP in the past, and by people who had not previously used PrEP but expressed willingness to try it. Consistent with other studies of the impact of PrEP attributes on preferences, lower out-of-pocket cost ($0) PrEP with no side-effects was the most preferred option to optimize PrEP use. These results can help guide PrEP implementation efforts and policy decisions to maximize PrEP uptake and effectiveness in reducing HIV incidence among at-risk populations. Our study builds on evidence from previous DCEs on PrEP preferences. One DCE that recruited participants via gay social networking applications also found cost was the most important attribute, though side-effects were not included as an attribute. Given the importance of side effects in our study, this study characterizes the relationship and impact of both cost and side-effects on PrEP preferences [ 23 ]. In another DCE study of preferences for LAI-PrEP in a national sample of MSM, side-effects and cost were the two most important attributes [ 24 ], consistent with our results; this study expands on that work and shows the importance of these attributes across all modalities of PrEP. A DCE of PrEP preferences in the US military found mode of administration to be the most important attribute in both those with and without experience taking PrEP, but neither cost nor side-effects were included in that DCE [ 25 ]. Our results strongly suggest that keeping out-of-pocket costs and side-effects low increases interest in PrEP, regardless of mode of administration. The introduction of a new PrEP mode alone may not increase PrEP uptake without accompanying strategies to ensure out-of-pocket costs and side effects are low. Efforts to develop new formulations of PrEP may increase interest in at-risk populations but are likely not as important as efforts to keep PrEP out-of-pocket costs low, though some research suggests that when directly asked about preferences for LAI-PrEP, relatively few GBMSM say they would prefer that to oral PrEP [ 26 ]. Similarly, while time was not a major driver of preferences, respondents showed a clear preference for shorter travel times. Keeping time costs low through short visits and injections every few months could increase interest in and uptake of PrEP. PrEP preferences also varied by demographic group and with different levels of PrEP experience. Lower-income respondents (< $75,000 per year in our sample) were more sensitive to out-of-pocket PrEP costs.
Additionally, those with experience taking PrEP were more sensitive to these costs, suggesting that their past experience with PrEP was negatively affected by the actual out-of-pocket costs they encountered. If health care providers, payers, and policymakers want to target PrEP to people with low incomes or those who may have taken PrEP in the past, out-of-pocket costs need to be kept low. In our study, small increases in cost (e.g., from $0 to $10) were not associated with large decreases in interest in PrEP. The tradeoffs we simulated between cost and side-effects also suggest that low PrEP costs alone may not be enough to encourage uptake. Simulated PrEP uptake was more strongly affected by even short-term side-effects upon starting than by PrEP costs of at least $25. Costs of $50 per month were needed to cause respondents to be less interested in PrEP with no side-effects than in PrEP with side-effects only on starting. Respondents also had very strong preferences against long-term side-effects. Individuals would rather pay $200 for PrEP with no side-effects than pay nothing for PrEP with long-term side-effects. This suggests that efforts to reduce PrEP out-of-pocket costs, in isolation from the other tradeoffs patients consider, may not be effective at increasing interest in PrEP. Surveys have shown that many young GBMSM are not even aware of PrEP [ 27 ], so targeted messaging efforts by public health officials about the safety and efficacy of PrEP may be the best way to increase PrEP uptake among high-risk youth and others unfamiliar with PrEP, especially given the importance of low out-of-pocket costs for PrEP among young GBMSM [ 28 ]. Among those not reporting interest in PrEP, no single attribute stood out as most important for increasing their interest in PrEP. Keeping out-of-pocket costs low, or eliminating them entirely, is of critical importance to maximize uptake. The US has several models for PrEP assistance that can make total out-of-pocket costs close to $0. In addition to the manufacturer’s coupon programs and other PrEP assistance programs, federal guidelines as of 2021 mandated that non-grandfathered Affordable Care Act-compliant private health insurance plans cover services associated with PrEP, including provider visits, HIV and STI testing to remain eligible, monitoring of kidney function, and others [ 29 ]. These recent decisions are a step in the direction of realizing the ideal combination of PrEP attributes that can increase uptake. However, covering out-of-pocket costs for uninsured individuals, especially in states that have not expanded Medicaid, is still a major challenge. Limitations DCE is a stated preference methodology; though we tried to keep choices simple and similar to those that may be encountered in real-world settings, respondents may not be familiar with making these decisions. For example, at the time of the survey, implant and LAI-PrEP were not approved PrEP modalities in the US. However, including these as options gave us the opportunity to assess emerging technologies and compare those to the current standard of care. In the interest of lowering cognitive burden for respondents, we also only presented a subset of the available attributes that people may consider when making a decision about PrEP. We only had respondents complete eight choice tasks with four attributes in order to increase survey completion rates; this is on the lower end but still within standard DCE practice [ 30 ].
While the results of this DCE show the relative importance of each of these attributes, the specific numeric values of the coefficients presented are dependent on the attributes and levels used in this experiment. With different attributes or levels, these numeric values would likely be different, though their relative impact would remain consistent. Additionally, this sample may be unrepresentative of all US GBMSM. We limited our sample to those using gay social networking sites in New England, and most of our respondents were cisgender non-Hispanic White men. Future research on PrEP preferences could include more transgender or non-binary individuals as well. An online survey may also be biased towards GBMSM with higher levels of education [ 31 ]. We used a set of checks and questions to flag potential bots or spammers, and like other surveys dropped a high percentage of respondents from outside of our geographic study area [ 32 ]. Implications Despite these limitations, the results of this experiment highlight the complex decisions that GBMSM make when considering whether to take PrEP and which formulation to choose. Policymakers should use these results to better develop strategies to increase the uptake of PrEP among MSM and prevent future HIV infections. Clear communication by the Centers for Disease Control and Prevention about the short- and long-term side-effects as well as the constant monitoring of side effects by physicians could help reduce the impact of fear of those side effects on PrEP uptake. Costs are also a barrier that can be addressed through policy. The US Preventive Services Task Force (USPSTF) recommendation on PrEP as an effective tool for HIV prevention (and associated “A” rating) should require most health plans in the US to cover PrEP medications without cost-sharing [ 33 ]. However, PrEP costs are complex and consist of more than just the cost of the medication, with lab testing and outpatient visits potentially adding up to hundreds or thousands of dollars in cost per year. Given the importance of cost in the decision to use PrEP, copay assistance programs through manufacturers and state policy efforts to reduce costs of PrEP could target these additional out-of-pocket costs to keep the overall costs of PrEP under $25 per month. Providers should be mindful of patient preferences as new modes of PrEP, like injections, are approved by the FDA. However, offering a new mode of administration may not substantially increase the appeal of PrEP on its own. Telemedicine and other virtual care approaches that have increased in use during the COVID-19 pandemic could be useful for reducing travel times for routine PrEP outpatient visits. Conclusion In this DCE measuring PrEP preferences in GBMSM, PrEP delivered through injection every few months, with no side-effects, a $0 monthly out-of-pocket cost, and a 30-min travel time represented the most desirable package for PrEP. While LAI-PrEP was the most preferred mode of administration, mode of administration did not emerge as a strong driver of preferences for PrEP. Instead, side-effects and monetary cost were the two most important attributes predicting PrEP preferences. As PrEP is a key piece of the Ending the HIV Epidemic (EtHE) plan, efforts to scale up the use of PrEP are unlikely to succeed unless cost and side-effect barriers can be sufficiently addressed by health care providers and policymakers. Availability of Data and Material (Data Transparency) Data is available from authors upon reasonable request.
Code Availability (Software Application or Custom Code) All analyses were conducted in R, using the RStudio application (2021.09.02 Build 382). | A new survey finds that men who would be potential users of HIV pre-exposure prophylaxis (PrEP) medication prefer long-acting injections over pills, but rank side effects and costs as the most important issues for them in considering whether to take PrEP. The study, published online January 21 in AIDS and Behavior, was co-led by researchers at the Johns Hopkins Bloomberg School of Public Health. Unlike previous studies, the survey took into account the multiple factors that people consider when making a decision to take PrEP: cost, side effects, travel time, and pills versus injections. HIV PrEP came to market in 2012 after the Food and Drug Administration approved a once-daily pill that requires monthly check-ins. The FDA approved a long-acting injectable PrEP in December 2021. Heralded as a critical prevention tool, PrEP proved to be 99 percent effective in blocking HIV infection when used correctly. However, uptake has been slow. As of 2020, only about 25 percent of people for whom PrEP is recommended were using it, according to the Centers for Disease Control and Prevention, with uptake among Black and Hispanic people estimated at only 9 and 16 percent, respectively, in 2020. An initiative led by the U.S. Department of Health and Human Services, Ending the HIV Epidemic in the U.S., aims to increase overall PrEP uptake to 50 percent by 2030. The survey, fielded from May 2020 through October 2021, examined preferences relating to PrEP in a sample of 688 gay, bisexual, and other men who have sex with men. The sample included people who had previously used PrEP as well as those who had not. Respondents tended to rank side effects as the most important factor, followed by costs. And although they preferred long-acting injection over other ways of taking PrEP, they ranked mode of administration as the least important issue. "PrEP is very effective at preventing HIV transmission but has relatively low uptake; our results suggest that public health policymakers might be able to boost PrEP uptake by keeping costs low, and if health care practitioners communicate clearly about the potential PrEP side effects, regardless of what type of PrEP they decide to take," says study co-first author Lorraine T. Dean, ScD, associate professor in the Bloomberg School's Department of Epidemiology. Dean's co-first author on the paper was Zachary Predmore, Ph.D., associate policy researcher at the RAND Corporation. Predmore conducted the research while he was completing his Ph.D. at the Bloomberg School. The researchers recruited participants to the study by advertising on social media between May 2020 and October 2021. Participants were asked to fill out an online survey known as a discrete choice experiment—designed to reveal not only the participants' real-world preferences but also the relative strengths of those preferences. The results indicated that the participants, in considering whether to take PrEP, regarded potential side effects as most important. On average they would have preferred to pay $200—the highest cost in the choices offered—for PrEP with no side effects, rather than paying nothing for PrEP with long-term side effects. Nausea, vomiting and other gastrointestinal problems are potential side effects of PrEP, but have been reported by only a small percentage of people taking these medications. 
Second and third in importance to study participants were out-of-pocket PrEP costs, and travel time for follow-up medical visits. The new injectable PrEP starts with a monthly injection for two months, then one injection every two months. In contrast, PrEP pills are typically prescribed to be taken once daily. People taking the medication are monitored in person every 90 days or four times a year, while those using injectables get checked every two months, approximately six times per year. Respondents considered mode of administration the least important factor, although their preference on the whole was for a long-acting injected version over a daily pill or an as-needed short course of pills. Subgroup analyses indicated that those making less than $75,000 per year and those with prior experience taking PrEP tended to have stronger preferences for lower-cost PrEP. The results underscore the need to keep PrEP money and time costs, as well as side-effects, low in order to improve uptake, Dean says. Although the U.S. Preventive Services Task Force gave PrEP an "A" rating in 2019—effectively making PrEP and associated clinical services available at low or no cost under Affordable Care Act-compliant health insurance policies—not everyone who needs PrEP is covered by such policies, or by any health insurance at all. "I hope our results will encourage non-ACA insurers to keep PrEP at zero cost to those who need it," Dean says. "Ideally, injectable PrEP needs to be fast, free, and have minimal side effects, and our results suggest that could really be said about all PrEP products." She and her colleagues are now following up the study with an analysis of PrEP preferences by race and other demographic groupings. | 10.1007/s10461-023-03986-5 |
Medicine | Alcohol use affects levels of cholesterol regulator through epigenetics | F W Lohoff et al, Methylomic profiling and replication implicates deregulation of PCSK9 in alcohol use disorder, Molecular Psychiatry (2017). DOI: 10.1038/MP.2017.168 Journal information: Molecular Psychiatry | http://dx.doi.org/10.1038/MP.2017.168 | https://medicalxpress.com/news/2017-09-alcohol-affects-cholesterol-epigenetics.html | Abstract Alcohol use disorder (AUD) is a common and chronic disorder with substantial effects on personal and public health. The underlying pathophysiology is poorly understood but strong evidence suggests significant roles of both genetic and epigenetic components. Given that alcohol affects many organ systems, we performed a cross-tissue and cross-phenotypic analysis of genome-wide methylomic variation in AUD using samples from 3 discovery, 4 replication, and 2 translational cohorts. We identified a differentially methylated region in the promoter of the proprotein convertase subtilisin/kexin 9 ( PCSK9 ) gene that was associated with disease phenotypes. Biological validation showed that PCSK9 promoter methylation is conserved across tissues and positively correlated with expression. Replication in AUD datasets confirmed PCSK9 hypomethylation and a translational mouse model of AUD showed that alcohol exposure leads to PCSK9 downregulation. PCSK9 is primarily expressed in the liver and regulates low-density lipoprotein cholesterol (LDL-C). Our finding of alcohol-induced epigenetic regulation of PCSK9 represents one of the underlying mechanisms between the well-known effects of alcohol on lipid metabolism and cardiovascular risk, with light alcohol use generally being protective while chronic heavy use has detrimental health outcomes. Introduction Various pathways to the development of alcohol use disorder (AUD) exist and include an interaction of environmental and genetic risk factors. 1 , 2 , 3 Identifying underlying genetic risk factors for AUD has been challenging due to small effect sizes, heterogeneity and complex modes of inheritance. 2 However, the field of epigenetics in AUD is just developing, 4 and new technological advances are making it feasible to conduct epigenome-wide association studies (EWAS) of complex phenotypes using DNA methylation. Only a few EWAS for AUD exist, but they are limited by small sample sizes, low array-capture, tissue type, analysis strategy and data interpretation. 5 , 6 , 7 , 8 , 9 , 10 , 11 Consequently, no universal DNA methylation loci for AUD have been identified; however, recent data suggest multiple loci for mild-moderate alcohol consumption. 12 Given that heavy alcohol consumption can cause significant alterations to multiple organs and tissue types, we carried out a systematic cross-tissue and cross-phenotypic analysis of methylomic variation in AUD. We used samples from independent cohorts involving postmortem brain, blood, and liver tissue as well as various clinical and imaging phenotypes with the goal of identifying disease-associated methylomic DNA variation (Figure 1 ). Figure 1 Methylomic profiling approach in alcohol use disorder (AUD) using three discovery and six replication data sets identified PCSK9 as main epigenetic target. A schematic representation of cohorts investigated in this study broken into the discovery phase experiments ( a – c ) that identified PCSK9 association with alcohol use and the replication stage experiments ( d – i ) including biological target validation in animal models ( h , i ). 
Replication experiments were performed in multiple blood and liver cohorts, using both publicly available datasets and direct investigation. For human blood, DNA from individuals who participated in the GTP ( n =392) ( d ) was analyzed for CpG cg01444643 (hg38, chr1: 55039175) (PCSK9 CpG1 ). PCSK9 CpG1 was significantly associated with both the binary SCID 37 lifetime alcohol abuse ( β =−0.007±0.004, F =3.891, d.f.=2/325, P =0.049) and the continuous KMSK Lifetime Alcohol scale 38 ( β =−0.001±0, F =4.944, d.f.=2/306, P =0.027). In a second human blood cohort ( e ), we assessed PCSK9 DNA methylation levels with pyrosequencing at PCSK9 CpG1 and an adjacent CpG located at chr1: 55039185 (PCSK9 CpG2 ) in human subjects ( n =90) aged 21–65 years with a diagnosis of alcohol dependence and healthy volunteers ( n =62). We observed significantly lower DNA methylation at both CpGs (Student’s t -test, PCSK9 CpG1 : alcohol abuse=76.68±0.05, no alcohol abuse=78.6±0.064, P =0.0075; PCSK9 CpG2 : alcohol abuse=88.49±0.025, no alcohol abuse=89.28±0.032, P =0.028). In a liver cohort, we assessed PCSK9 DNA methylation levels at PCSK9 CpG1 in DNA derived from individuals with normal livers ( n =34) or primary liver disease tissue arising in the setting of chronic hepatitis B (HBV) or C (HCV) viral infection, alcoholism (ETOH), and other causes ( n =66; GSE60753) ( f ). PCSK9 DNA methylation was significantly elevated with alcohol-induced cirrhosis (alcohol cirrhosis=0.54±0.0023, healthy=0.49±0.0015, P =0.0021). Importantly, significant elevations were also observed with cirrhosis induced by hepatitis B (Hep B cirrhosis=0.55±0.0079, healthy=0.49±0.0015, P =0.023) and C (Hep C cirrhosis=0.57±0.0011, healthy=0.49±0.0015, P =1.2 × 10 −9 ). Finally, PCSK9 CpG1 DNA methylation levels were assessed with pyrosequencing in liver tissue samples from liver transplant candidates with alcoholic cirrhosis ( n =50) and healthy controls ( n =47) ( g ). A significantly higher PCSK9 DNA methylation level was observed in alcoholic cirrhosis cases relative to controls (Student’s t -test; alcohol abuse=46.19±1.07, no alcohol abuse=37.63±0.89, P =6.5 × 10 −9 ). In translational models, mouse liver PCSK9 expression was significantly lower in the alcohol exposure group ( h ) (Student’s t -test; alcohol exposure=0.1±0.011, no alcohol exposure=1.02±0.057, P =0.0029). A translational rat model ( i ) was used to assess long-term effects of alcohol on PCSK9 expression in liver. Full size image Materials and methods Subjects and samples Subjects in the discovery and replication stages with alcohol dependence or abuse will be referred to as AUD. Discovery stage genome-wide DNA methylation analysis in brain Methylomic profiling We used a stepwise, multiple-level approach across various tissue types and outcomes for the initial discovery analysis. A graphic representation is shown in Figures 1a–c . Methylomic profiling discovery data set of postmortem prefrontal cortex tissue: Bulk brain-derived DNA methylation profiles generated on Illumina Human Methylation 450 (HM450) bead chip microarrays from 46 postmortem prefrontal cortex (PFC) samples were downloaded from the Gene Expression Omnibus from GSE49393. The sample comprised 23 subjects with AUD (16 males and 7 females) and 23 age-matched healthy controls. Detailed methodology and sample description can be found elsewhere. 13
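As an aside for readers reproducing the replication statistics, the group comparisons above reduce to standard two-sample tests on per-subject methylation values. The sketch below is illustrative only, assuming Welch's t-test in Python; the simulated values and variable names are hypothetical placeholders, not the study data.

```python
# Illustrative two-group comparison of PCSK9 promoter methylation
# (percent methylation from pyrosequencing). All values are simulated
# placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
aud = rng.normal(76.7, 4.0, size=90)       # AUD cases (n = 90)
controls = rng.normal(78.6, 4.0, size=62)  # healthy controls (n = 62)

t, p = stats.ttest_ind(aud, controls, equal_var=False)  # Welch's t-test
print(f"mean AUD = {aud.mean():.2f}, mean control = {controls.mean():.2f}")
print(f"t = {t:.3f}, P = {p:.4f}")
```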
Fresh-frozen sections of Brodmann area 9 (BA9, mainly the dorsolateral PFC of the brain) postmortem brain tissues were obtained from the New South Wales Tissue Resource Centre (NSW TRC) at the University of Sydney, and ethics approval was obtained from the Sydney Local Health Network and the University of Sydney. For each probe on the microarray, a linear model was fitted to determine the association of DNA methylation with alcohol relative to control status, controlling for age, sex and neuronal proportion as estimated by the Cell EpigenoType Specific (CETS) algorithm. 14 For this analysis, alcohol abuse and alcohol dependence were grouped together for case status. Correction for multiple testing was assessed using the false discovery rate (FDR) method. Detailed sample demographics appear in Supplementary Table S1 . Neuronal and non-neuronal postmortem prefrontal cortical tissue National Institute of Child Health and Human Development (NICHD) brain bank sample: Postmortem cortical tissue from major depressive disorder (MDD; n =29) and matched control ( n =29) samples were obtained from the NICHD brain bank, and DNA methylation profiles using HM450 arrays from FACs isolated neuronal and non-neuronal (glial) nuclei were obtained as previously described. 14 For the neuronal and glial fractions separately, a linear model was fitted for each probe on the microarray to determine the association of DNA methylation with alcohol relative to control status, controlling for age, sex and postmortem interval. Alcohol case status was based on reported alcohol abuse in the summary of health history provided by the NICHD brain bank. Correction for multiple testing was assessed using the false discovery rate method. Detailed sample demographics appear in Supplementary Table S2 .
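The per-probe association models described above can be sketched as an ordinary least-squares regression of each probe's beta value on case status plus covariates, followed by Benjamini–Hochberg FDR correction. The code below is a minimal sketch under those assumptions; the input matrices and function name are hypothetical, not part of the published pipeline.

```python
# Minimal sketch of a per-probe EWAS: regress each probe's beta value on
# case status with covariates, then apply FDR correction. Inputs are
# hypothetical placeholders (betas: probes x samples; covariates per sample).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def probe_ewas(betas, case_status, age, sex, neuron_prop):
    design = sm.add_constant(
        np.column_stack([case_status, age, sex, neuron_prop]))
    coefs, pvals = [], []
    for beta in betas:                       # one row per probe
        fit = sm.OLS(beta, design).fit()
        coefs.append(fit.params[1])          # effect of case status
        pvals.append(fit.pvalues[1])
    reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return np.array(coefs), np.array(pvals), qvals, reject
```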
National Institute on Alcohol Abuse and Alcoholism (NIAAA) methylomic profiling discovery data set with resting-state magnetic resonance imaging (MRI) functional connectivity (FC): A total of 68 right-handed individuals with a diagnosis of AUD and 72 healthy controls were recruited to the NIAAA at the National Institutes of Health (NIH), USA. Written informed consent was obtained from all subjects; the study was approved by the Institutional Review Board (IRB) of the NIAAA and was in accordance with the Declaration of Helsinki and the NIH Combined Neuroscience IRB. Participants were compensated for their time. Blood was obtained from the 68 individuals with a diagnosis of AUD. The Structured Clinical Interview for DSM-IV Axis I Disorders (SCID) 15 was administered to all participants. A smoking questionnaire helped to determine the amount and frequency of a participant’s cigarette use. The Alcohol Dependence Scale (ADS) was administered to determine the severity of alcohol dependence (AD). 16 Exclusionary criteria included pregnancy, claustrophobia, and significant neurological or medical diagnoses. Participants were instructed not to consume any alcohol for 24 h and no more than half a cup of caffeinated beverages 12 h prior to each scanning visit. Participants were excluded if they had a positive alcohol breathalyzer test or a positive urine drug screen on the day of the scan. Controls were excluded if they met criteria for any current or past alcohol dependence. All subjects were required to be deemed physically healthy by a clinician. At the time of the MRI, AD participants could not be exhibiting severe symptoms of alcohol withdrawal, as determined by a Clinical Institute Withdrawal Assessment for Alcohol, revised (CIWA-Ar) score of >8. 17 Detailed sample demographics appear in Supplementary Table S3 . MRI data acquisition and pre-processing Whole-brain anatomical images and 5 min of closed-eyes resting-state fMRI were collected using 3 T General Electric and 3 T Siemens MRI scanners. High-resolution T1-weighted 3D structural scans were acquired for each subject using an MPRAGE sequence (128 axial slices, TR=1200 ms, TE=30 ms, 256 × 256 matrix). Resting-state fMRI (rs-fMRI) datasets were collected using a single-shot gradient echo planar imaging pulse sequence with 36 axial slices acquired parallel to the anterior/posterior commissural line (TR=2000 ms, TE=30 ms, flip angle=90°, 3.75 mm × 3.75 mm × 3.8 mm voxels). rs-fMRI data pre-processing Using the Functional MRI of the Brain (FMRIB) Software Library we applied: (1) slice timing correction for interleaved acquisitions using Sinc interpolation with a Hanning windowing kernel; (2) motion correction using FMRIB’s Linear Image Registration Tool; 18 (3) non-brain removal using the Brain Extraction Tool; 19 (4) spatial smoothing using a Gaussian kernel of full width at half maximum 5 mm; (5) grand-mean intensity normalization of the entire 4D data set by a single multiplicative factor (voxel size 4 × 4 × 4 mm 3 ); (6) high-pass temporal filtering (Gaussian-weighted least-squares straight line fitting, with sigma=50 s); and (7) registration to high-resolution structural MNI standard space images using FMRIB’s Linear Image Registration Tool. 18 , 20 rs-fMRI connectivity analysis We applied Probabilistic Independent Component Analysis (PICA) 21 using the Multivariate Exploratory Linear Decomposition into Independent Components (MELODIC) toolbox of the FSL package. A temporal concatenation tool in MELODIC was used to derive group-level components across all subjects. Pre-processed data were whitened and projected into a 36-dimensional subspace using PICA, where the number of dimensions was estimated using the Laplace approximation to the Bayesian evidence of the model order. 21 The whitened observations were decomposed into sets of vectors that describe signal variation across the temporal domain (time-courses) and across the spatial domain (maps) by optimizing for non-Gaussian spatial source distributions using a fixed-point iteration technique. 22 These FC component maps were standardized into z-statistic images via a normalized mixture model fit (thresholded at z >7). 21 Two criteria were used to remove biologically irrelevant components: (1) those representing known artifacts such as motion and high-frequency noise; and (2) those with connectivity patterns not located mainly in gray matter. Networks of interest were identified as anatomically and functionally classical resting-state networks 23 upon visual inspection. The between-subject analysis was carried out using dual regression, a technique that back-reconstructs each group-level component map at the individual subject level. Next, a non-parametric permutation test (FSL’s randomise tool, with 5,000 permutations) that utilizes threshold-free cluster enhancement (TFCE) was used to assess statistically significant differences in FC between the groups. To minimize the potential confounding influence of age, gender, ancestry informative marker (AIM) scores and scanner type on these results, these parameters were used as nuisance covariates for each network of interest. Finally, the resulting statistical maps were thresholded at P <0.05, family-wise error corrected, for the main group effect. The Harvard-Oxford cortical and subcortical atlases incorporated in FSL were used to identify the anatomical regions of the resulting PICA maps. The FSL Cluster tools were used to report information about clusters in the selected maps. The functional connectivity maps were overlaid onto the mean standardized structural T1 1-mm MNI template and visualized using MRIcroGL.
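For readers unfamiliar with temporal-concatenation group ICA, the sketch below approximates the idea using scikit-learn's FastICA rather than MELODIC's probabilistic ICA, so it should be read as an analogy rather than the study pipeline. All data shapes are hypothetical; only the 36-component dimensionality follows the text.

```python
# Illustrative group spatial ICA in the spirit of temporal-concatenation
# MELODIC: subjects are stacked in time, then the stacked data are unmixed
# into spatially independent maps. Data are random placeholders.
import numpy as np
from sklearn.decomposition import FastICA

n_subjects, n_timepoints, n_voxels = 10, 150, 5000
rng = np.random.default_rng(1)
stacked = np.vstack([rng.standard_normal((n_timepoints, n_voxels))
                     for _ in range(n_subjects)])  # (subjects*time) x voxels

ica = FastICA(n_components=36, random_state=0)
spatial_maps = ica.fit_transform(stacked.T)  # voxels x components (sources)
time_courses = ica.mixing_                   # (subjects*time) x components

# z-score each spatial map before thresholding (the text uses |z| > 7)
z_maps = (spatial_maps - spatial_maps.mean(0)) / spatial_maps.std(0)
```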
Peripheral blood Illumina HM450 data processing Following removal of three subject samples with failed bisulfite conversion, raw Illumina HM450 microarray data were processed using the wateRmelon package in R. 24 Raw data were trimmed of probes failing quality assessment and of a known list of 32,323 cross-reactive probes, 25 followed by scale-based data correction for Illumina type I relative to type II probes. Methylated and unmethylated intensity values were then quantile normalized separately prior to the calculation of the β value, defined as β = M /( M + U + 100), where M is the signal intensity of the methylation-detection probe and U is the signal intensity of the non-methylation-detection probe. Values were then adjusted by taking the residuals of a linear model of β values as a function of sodium bisulfite modification batch.
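The beta-value definition and batch residualization above translate directly into a few lines of code. This is a minimal sketch assuming a probes-by-samples layout; the function and variable names are hypothetical, not part of the published pipeline.

```python
# Sketch of the beta-value computation (beta = M / (M + U + 100)) and the
# batch adjustment described above (residuals of beta ~ batch, recentred).
# Array layouts are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def beta_values(meth, unmeth, offset=100.0):
    """Illumina-style beta: methylated / (methylated + unmethylated + 100)."""
    return meth / (meth + unmeth + offset)

def batch_adjust(betas, batch_labels):
    """Replace each probe's betas with residuals of beta ~ batch, recentred."""
    dummies = pd.get_dummies(pd.Series(batch_labels), drop_first=True)
    design = sm.add_constant(dummies.to_numpy(dtype=float))
    adjusted = np.empty_like(betas)
    for i, probe in enumerate(betas):        # betas: probes x samples
        fit = sm.OLS(probe, design).fit()
        adjusted[i] = fit.resid + probe.mean()
    return adjusted
```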
PCSK9 targeted replication Grady Trauma Project blood replication sample The subjects for this study were part of a larger investigation of genetic and environmental factors that predict response to stressful life events in a predominantly African American, urban population of low socioeconomic status. 26 , 27 , 28 Research participants were approached in the waiting rooms of primary care clinics of a large, public hospital while either waiting for their medical appointments or waiting with others who were scheduled for medical appointments. After the subjects provided written informed consent, they participated in a verbal interview and blood draw. This cohort was characterized by high rates of interpersonal violence and psychosocial stress; the majority of subjects reported at least one major trauma during their lifetime, and the number of traumatic experiences in childhood and adulthood predicted psychiatric symptom severity in adulthood. 28 , 29 DNA methylation profiles from n =392 African American individuals were downloaded from the Gene Expression Omnibus from GSE72680. Detailed sample demographics appear in Supplementary Table S5 . NIAAA blood samples of individuals with AUD for targeted plasma replication Subjects ( n =90) with a diagnosis of alcohol dependence according to the Structured Clinical Interview for DSM-IV 15 were recruited at the NIAAA alcohol treatment program. Participants were between 21 and 65 years old. Informed consent was obtained in accordance with the Declaration of Helsinki. Most of the alcohol-dependent participants were recruited through local newspaper advertisements for the alcohol treatment program at the NIAAA at the NIH, Bethesda, MD, USA. The participants consisted of men and women from a broad range of socioeconomic backgrounds, ranging from executives to unemployed individuals. Participants underwent extensive clinical and physical examinations. Participants with a history of seizures, head trauma (defined as a period of unconsciousness exceeding 1 h) or medical conditions requiring chronic medications were excluded from participation. Participants were literate in English and were not suffering from active psychotic symptoms or severe cognitive impairment. Participants were voluntarily admitted to the NIAAA inpatient unit in the NIH Clinical Center between February 2010 and May 2014. Once admitted, participants were detoxified from alcohol and participated in the alcohol treatment program. Detailed sample demographics appear in Supplementary Table S6 . Blood and plasma samples were obtained and DNA extracted using standard methods. Plasma samples were obtained under two IRB-approved protocols. All bloods were drawn around 0800 hours after overnight bed rest. Participants were nil per os (NPO) for ~10 h prior to blood collection. Blood was collected in prechilled EDTA tubes, which were promptly put into wet ice and taken to the laboratory to be processed. Samples were spun in a 4 °C centrifuge (Beckman-Coulter Allegra X-12R) at 1880 g for 10 min; the plasma was then aliquoted into NUNC cryovials and frozen at −80 °C until assayed. The first set of samples ( n =55) came from alcohol treatment-seeking inpatient participants; their blood samples were drawn after alcohol withdrawal, within 5 days of their last drink of alcohol. The second set of samples ( n =35) came from alcohol treatment-seeking inpatients who were also diagnosed with post-traumatic stress disorder; their blood samples were drawn after withdrawal from alcohol and from any other medications, up to 14 days after the participant's last drink. In both sample sets, average drinks per day, drinking days and heavy drinking days were recorded for the 90 days prior to inpatient admission using the Timeline Followback Questionnaire. 30 ADS was also assessed; this score can range from 0 to 47, and scores greater than 27 are classified as substantial or severe AUD. Detailed sample demographics appear in Supplementary Table S6 . Low-density lipoprotein (LDL) cholesterol, high-density lipoprotein cholesterol, total cholesterol, triglycerides and total bilirubin were measured using standard procedures in mg dl −1 . Gamma-glutamyltransferase (GGT) was measured using standard procedures in U l −1 . Samples were collected on the date of admission to the NIAAA treatment program and processed by the NIH Clinical Center Department of Laboratory Medicine. PCSK9 protein concentrations were determined in 90 subjects with AUD using Luminex technology (Luminex, Austin, TX, USA). A Human Magnetic Luminex Screening Assay was performed per the manufacturer’s instructions (catalogue #LXSAHM, R&D Systems, Minneapolis, MN, USA), with plasma dilutions of 1:5. Data were analyzed using Milliplex Analyst software (EMD Millipore Corporation, Merck, Darmstadt, Germany). Samples were assayed in duplicate. The mean coefficient of variation (CV) for all 90 samples was 2.62±2.39 (mean±s.d.); samples were randomized across four 96-well plates, and the CVs for two quality-control samples run on all four plates were 0.069 and 0.065, respectively.
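Duplicate-well quality control of the kind reported above can be computed directly from replicate measurements. The snippet below is an illustrative sketch with simulated values, not the assay data.

```python
# Illustrative intra-assay QC for duplicate Luminex wells: per-sample
# coefficient of variation (CV) from two replicate measurements.
# The data array is a simulated placeholder (n samples x 2 wells).
import numpy as np

rng = np.random.default_rng(2)
duplicates = rng.normal(100.0, 3.0, size=(90, 2))  # hypothetical readings

cv = duplicates.std(axis=1, ddof=1) / duplicates.mean(axis=1)
print(f"mean CV = {cv.mean():.4f} ± {cv.std(ddof=1):.4f} (mean ± s.d.)")
```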
NIAAA AUD and healthy control blood samples for targeted pyrosequencing replication Blood was obtained from 62 healthy controls who participated in studies at NIAAA as previously described. 13 Detailed sample demographics appear in Supplementary Table S6 . Pyrosequencing was performed for these NIAAA healthy controls and the 90 AUD subjects used in the plasma experiment described above, following techniques described by Kaminsky et al. 31 Sodium bisulfite conversion was carried out using the EZ DNA Methylation Gold Kit (Zymo Research, Irvine, CA, USA) according to the manufacturer’s instructions on 500 ng of DNA from the tested human tissues. Nested PCR amplifications were performed with a standard PCR protocol in 25 μl volume reactions containing 3–4 μl of sodium-bisulfite-treated DNA, 0.2 μM primers and master mix containing Taq DNA polymerase (Sigma Aldrich, St. Louis, MO, USA). Primer annealing temperatures for the outside and inside nested PCR were 58.6 and 61.9 °C, respectively. Outside primer sequences were as follows: forward primer 5′--3′, reverse primer 5′--3′. Inside primer sequences were as follows: forward primer 5′--3′, reverse primer 5′--3′, with a biotin modification on the 5′ end of the reverse primer. PCR amplicons were processed for pyrosequencing analysis per the manufacturer’s standard protocol (Qiagen, Germantown, MD, USA) using a PyroMark MD system (Qiagen) with Pyro Q-CpG 1.0.9 software (Qiagen) for CpG methylation quantification. The pyrosequencing primer sequence was 5′--3′. Only data passing internal quality checks for sodium bisulfite conversion efficiency, signal-to-noise ratio, and the observed versus expected match of the predicted pyrogram peak profile using reference peaks were incorporated in subsequent analyses. Data generated derive from one technical replicate. University of Minnesota Liver Tissue Cell Distribution System liver samples Normal human liver and pathologic human liver were obtained through the Liver Tissue Cell Distribution System (LTCDS) (Minneapolis, MN, USA), which was funded by NIH Contract #HSN276200017C. The LTCDS is an NIH service contract to provide human liver tissue from regional centers to scientific investigators. These regional centers have active liver transplant programs. Human subject approval was obtained to provide portions of the resected livers for which the transplant was performed. We obtained liver tissue from transplant candidates with alcoholic cirrhosis ( n =50) and healthy controls ( n =50). Detailed sample demographics appear in Supplementary Table S7 . LTCDS human liver Illumina EPIC data processing From the LTCDS data set, 2 control and 2 liver transplant tissues were excluded owing to space limitations on the array. The remaining human alcohol-induced cirrhotic liver transplant tissues ( n =48) and controls ( n =48) were subjected to Illumina Methylation EPIC DNA methylation microarray analysis. Normalization of binary-format .idat files for red and green channel intensities was carried out in R. Initial background correction was performed using the minfi R package 32 followed by individual red and green channel quantile normalization using the dasen method in wateRmelon. 24 After normalization, the beta value ( β ) was used to estimate the methylation level of each CpG locus from the ratio of intensities between methylated and unmethylated alleles, β = M /( M + U + 100), where M and U represent the methylated and unmethylated intensities, respectively. 33 LTCDS-targeted pyrosequencing replication methods DNA purification and extraction: DNA was extracted using a Maxwell Tissue Purification Kit with a Promega Maxwell 16 Forensic Instrument (Promega, Madison, WI, USA). DNA extraction was performed according to the manufacturer’s instructions.
DNA concentrations were verified to be of adequate concentration (>95 ng μl −1 ). Real-time quantitative PCR RNA was extracted using an Ambion Ribopure RNA Purification Kit (Thermo Fisher Scientific, Rockville, MD, USA) according to the manufacturer’s instructions. One microgram of total RNA was reverse-transcribed using the SuperScript III First-Strand Synthesis SuperMix for qRT-PCR kit (Invitrogen, Grand Island, NY, USA). Real-time quantitative PCR was run on a ViiA 7 Real-Time PCR System using TaqMan Gene Expression Assays (Thermo Fisher). Gene expression was studied with TaqMan PCR assays (PCSK9; Hs00545399_m1). The 18S gene was used as an endogenous control. Data were analyzed initially using ViiA 7 Software (Applied Biosystems, Foster City, CA, USA). Gene expression was analyzed in liver tissue from transplant candidates with alcoholic cirrhosis ( n =24) and healthy controls ( n =24). The data analysis was performed using GraphPad Prism 6.0 (GraphPad Software, San Diego, CA, USA). Sodium bisulfite pyrosequencing Pyrosequencing was performed as detailed above. Methylation data were excluded for three healthy control samples due to experimental error ( n =47). University of Florida liver sample The samples used in this analysis were derived from cirrhotic ( n =66) and normal liver ( n =34) tissues obtained from the University of Florida Shands Hospital by surgical resection, as reported previously by Hlady et al. 34 Normal livers were derived from surgical patients with benign liver lesions. Data were downloaded from the Gene Expression Omnibus from GSE60753. Sample demographics appear in Supplementary Table S8 . Translational animal models Mouse model Mouse brain, blood and liver tissues were analyzed for methylation, mRNA and protein expression using standard methodologies. Tissue was derived from mice that underwent the ‘NIAAA model’ of chronic and binge ethanol feeding. Briefly, this model uses 10 days of ad libitum oral feeding of the Lieber-DeCarli ethanol liquid diet plus a single-binge ethanol feeding. Samples were collected 8 h after the single-binge feeding. Detailed information can be found elsewhere. 35 DNA from tissues was extracted using the Maxwell Tissue Purification Kit as described above. Real-time quantitative PCR Total RNA was extracted using an RNeasy mini kit (Qiagen). One microgram of total RNA was reverse-transcribed using the SuperScript III First-Strand Synthesis SuperMix for qRT-PCR kit (Invitrogen). Real-time quantitative PCR was run on a ViiA 7 Real-Time PCR System using TaqMan Gene Expression Assays (Thermo Fisher). Gene expression was studied with TaqMan PCR assays (PCSK9; Mm01263610_m1). The 18S gene was used as an endogenous control. Data were analyzed initially using ViiA 7 Software (Applied Biosystems). The data analysis was performed using GraphPad Prism 6.0 (GraphPad Software).
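The TaqMan expression analyses above imply relative quantification against the 18S endogenous control; a standard way to express this is the 2^−ΔΔCt method. The sketch below assumes that method (the text does not state the exact formula used), with hypothetical Ct values rather than study data.

```python
# Illustrative relative-quantification (2^-ddCt) calculation for TaqMan
# assays with 18S as the endogenous control. Ct values are hypothetical
# placeholders, not study data.
import numpy as np

def relative_expression(ct_target, ct_18s, ct_target_ref, ct_18s_ref):
    """Fold change of target versus the mean of a reference (control) group."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_18s)            # normalize to 18S
    d_ct_ref = np.mean(np.asarray(ct_target_ref) - np.asarray(ct_18s_ref))
    return 2.0 ** -(d_ct - d_ct_ref)                             # 2^-ddCt

fold = relative_expression(ct_target=[26.1, 26.4], ct_18s=[12.0, 12.1],
                           ct_target_ref=[24.0, 24.2], ct_18s_ref=[12.0, 12.2])
print(fold)  # values < 1 indicate lower PCSK9 expression than controls
```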
Western blot analysis Liver homogenates were prepared in RIPA buffer (50 m M Tris; 1% NP40; 0.25% deoxycholic acid sodium salt; 150 m M NaCl; 1 m M EGTA) containing 1 m M Na 3 VO 4 and a protease inhibitor cocktail (Sigma, St. Louis, MO, USA). The protein concentrations were quantified using a detergent-compatible protein assay kit (Bio-Rad Laboratories, Hercules, CA, USA) according to the manufacturer’s instructions. Aliquots of 50 μg of protein extract were denatured in Laemmli buffer containing 5% β-mercaptoethanol, and the samples were loaded and separated by gel electrophoresis on a 7% Bis-Tris gel (Invitrogen). Samples were incubated with a primary antibody at 4 °C overnight under shaking conditions. Immunoreactive bands were visualized on nitrocellulose membranes using alkaline-phosphatase-linked anti-rabbit antibodies and the ECF detection system with a PhosphorImager (GE Healthcare, Piscataway, NJ, USA). The anti-PCSK9 antibody (ab125251) and the beta-actin antibody were purchased from Abcam (Cambridge, MA, USA). Rat model Adult male Wistar rats (Charles River) received pre-vapor operant conditioning for 3 weeks and were first trained to self-administer alcohol using a modified sucrose-fading procedure in which 10% ethanol was added to a sweet solution. Sweeteners were gradually removed from the solution. Upon completion, animals could self-administer alcohol (a 10% ethanol and water solution) on a schedule of reinforcement. The case rats were then given chronic intermittent exposure to alcohol in vapor chambers. Ethanol exposure lasted 14 h and was followed by 10 h off. During exposure, blood alcohol levels ranged between 150 and 250 mg%. This cycle was repeated for 20 days. Nondependent rats were not exposed to alcohol vapor. DNA purification and extraction DNA was extracted using a Maxwell Tissue Purification Kit as described above. Real-time quantitative PCR RNA was extracted using an Ambion Ribopure RNA Purification Kit (Thermo Fisher Scientific) according to the manufacturer’s instructions. One microgram of total RNA was reverse-transcribed using the SuperScript III First-Strand Synthesis SuperMix for qRT-PCR kit (Invitrogen). Real-time quantitative PCR was run on a ViiA 7 Real-Time PCR System using TaqMan Gene Expression Assays (Thermo Fisher). Gene expression was studied with TaqMan PCR assays (PCSK9; Rn01416753_m1). The 18S gene was used as an endogenous control. Data were analyzed initially using ViiA 7 Software (Applied Biosystems). The data analysis was performed using GraphPad Prism 6.0 (GraphPad Software). Results Discovery strategy For the discovery stage of our analysis, using the Illumina HumanMethylation 450K array platform, we conducted epigenome-wide association analyses of postmortem bulk brain tissue from the PFC of individuals with AUD ( n =23) and healthy controls ( n =23) (Figure 2a and Supplementary Table S1 ). In this phase, postmortem cortical brain tissue was also sorted into neuronal and non-neuronal cells from individuals with depression and alcohol use history ( n =29) and healthy controls ( n =29; Figures 2b and c and Supplementary Table S2 ). In addition, we performed an EWAS on neuroimaging network activation as a function of alcohol use with peripheral blood from individuals with AUD who previously underwent resting-state functional connectivity (RSFC) analysis 36 ( n =68) (Supplementary Table S3 ). These three EWAS yielded various methylation sites that were associated with disease phenotypes (Figure 2 ). To identify epigenetic consequences of systemic alcohol exposure further, we next conducted an overrepresentation analysis of functional brain imaging-associated peripheral blood DNA methylation with brain tissue-specific alcohol use associations. A total of 17 probes corresponding to 12 genes were significantly associated across the salience network (SN), executive control network (ECN), and visual and motor networks. These probes were also significantly associated with alcohol status in the bulk PFC cohort. The most significant gene-associated probe, cg01444643 (hg38, chr1: 55039175; PCSK9 CpG1 ), was located in the promoter of the PCSK9 gene ( β =−9.911±3.034, F =3.073, d.f.=5/43, P =0.0021, Supplementary Table S4 ).
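The overrepresentation analysis above is not described in formulaic detail; one common realization is a hypergeometric/Fisher's exact enrichment test on the overlap between two probe sets. The sketch below assumes that formulation, with placeholder counts rather than the study's actual tallies.

```python
# Sketch of a probe-set overrepresentation test in the spirit of the
# analysis above: does the overlap between network-associated and
# alcohol-associated probes exceed chance? Counts are hypothetical.
from scipy import stats

n_array = 450_000          # probes tested on the array
n_network = 1_200          # probes associated with an imaging network
n_alcohol = 900            # probes associated with alcohol status
n_overlap = 17             # probes in both sets

table = [[n_overlap, n_network - n_overlap],
         [n_alcohol - n_overlap, n_array - n_network - n_alcohol + n_overlap]]
odds_ratio, p = stats.fisher_exact(table, alternative="greater")
print(f"OR = {odds_ratio:.2f}, P = {p:.3g}")
```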
We then performed the same overrepresentation analysis to assess for imaging network correlating peripheral blood DNA methylation and alcohol abuse associations in neuronal and non-neuronal nuclei from another postmortem brain cohort (Supplementary Table S2 ). In neurons, overrepresentation was observed in a visual network pathway different from that identified in the bulk PFC sample. Interestingly, only two networks were overrepresented in the non-neuronal fraction, which corresponded to the same SN and ECN networks identified in the bulk PFC analysis. A total of 86 loci were significantly associated across both the SN and ECN networks in peripheral blood as well as with alcohol abuse in non-neuronal nuclei. Of these loci only cg01444643 corresponding to PCSK9 ( β =−0.026±0.013, F =6.23, d.f.=7/51, P =0.048) and cg17845617 corresponding to Phosphatidylinositol-4-Phosphate 5-Kinase ( PIP5K1C ) were consistently identified in the National Institute of Child Health and Human Development (NICHD) and bulk brain cohorts (Supplementary Table S4 ). As the direction of association was consistent across discovery datasets for PCSK9 , we targeted this locus for in-depth follow up. Figure 2 Probe-wise association of alcohol use in the prefrontal cortex. ( a ) Volcano plot depicting the negative natural log of the p-value of association of alcohol abuse ( y axis) as a function of the beta value ( x axis) from a linear model adjusting for age, sex and neuronal proportion as estimated by the Cell EpigenoType Specific (CETS) algorithm. Data derived from frontal cortex bulk tissue from GEO dataset GSE49393. Three loci were significant after false discovery rate (FDR)-based correction for multiple testing including cg00393248 ( MYLK4 ; β =9.32±1.516, F =10.061, d.f.=5/43, P =2.23 × 10 −7 , FDR P =0.038), cg19608003 ( SLC44A4 ; β =−13.468±2.011, F =11.879, d.f.=5/43, P =3.54 × 10 −8 , FDR P =0.015), and cg19955284 ( β =−19.036±3.122, F =9.902, d.f.=5/43, P =2.64 × 10 −7 , FDR P =0.038). ( b ) Volcano plot depicting the negative natural log of the P -value of association of alcohol abuse in FACs isolated NeuN positive neuronal nuclei from the NICHD cohort ( y axis) as a function of the beta value ( x axis) from a linear model adjusting for age, sex, race and body mass index. In neurons, only cg03982998 ( LOC100130331 ; β =0.087±0.013, F =10.487, d.f.=7/51, P =2.3 × 10 −8 , FDR P =0.0078) and cg06395265 ( ZSWIM1 ; β =−0.049±0.008, F =8.967, d.f.=7/51, P =2.36 × 10 −7 FDR P =0.041) were significant after correction for multiple testing. ( c ) Volcano plot depicting the negative natural log of the P -value of association of alcohol abuse in FACs isolated NeuN negative non-neuronal nuclei from the NICHD cohort ( y axis) as a function of the beta value ( x axis) from a linear model adjusting for age, sex, race and PMI. ( d ) A table depicting functional connectivity networks significantly over-represented among AUD associated probes from GEO dataset GSE49393 and isolated neuronal and glial nuclei from the NICHD cohort. Structures analyzed included the anterior cingulate cortex (ACC), orbitofrontal cortex (OFC), insula, amygdala, visual cortex, motor cortex, frontal pole and medial prefrontal cortex (mPFC). 
Full size image PCSK9 methylation association with phenotype To further investigate the association of PCSK9 methylation with alcohol use, we first analyzed peripheral blood DNA from human subjects who participated in the Grady Trauma Project (GTP) ( n =392; Supplementary Table S5 ) and found that lower methylation of cg01444643/PCSK9 CpG1 was significantly associated with both the binary SCID 37 lifetime alcohol abuse item ( P =0.049) and the continuous Kreek–McHugh–Schluger–Kellogg (KMSK) Lifetime Alcohol scale 38 ( P =0.027; Figure 1d ). In a second cohort, we assessed PCSK9 promoter methylation with pyrosequencing in individuals with AUD ( n =90) and healthy volunteers ( n =62; Figure 1e and Supplementary Table S6 ). Results showed significantly lower DNA methylation at PCSK9 CpG1 (Student’s t -test, PCSK9 CpG1 : alcohol abuse=76.68±0.05, no alcohol abuse=78.6±0.064, P =0.0075). PCSK9 methylation cross-tissue validation Although PCSK9 is mainly expressed in the liver, our EWAS approach also identified PCSK9 methylation associations in brain and blood cohorts. However, it remained unknown to what degree methylation is conserved across tissues. Since blood, brain and liver tissue were not available from the same individuals, we used a mouse model to assess PCSK9 DNA methylation across tissues using sodium bisulfite pyrosequencing of the syntenic PCSK9 CpG (chr4: 106464510). Brain DNA methylation exhibited a non-significant trend for correlation with that of blood (Rho=0.51, P =0.064) and a significant correlation with liver methylation (Rho=0.62, P =0.021). Blood and liver methylation exhibited a non-significant trend for correlation in the mouse (Figures 3a–c ), while blood methylation was significantly positively correlated with liver methylation in human samples (Figure 3c and Supplementary Figure 1 ). These observations suggest that PCSK9 DNA methylation is relatively conserved across tissues, warranting further investigation of peripheral tissues as biomarkers of alcohol exposure. Figure 3 Cross-tissue-specific PCSK9 methylation and association with gene expression and plasma levels. ( a – c ) Cross-tissue correlations: ( a ) Mouse syntenic PCSK9 DNA methylation in brain ( y axis) as a function of blood ( x axis; Rho=0.51, P =0.064). ( b ) A scatterplot of mouse syntenic PCSK9 DNA methylation in brain ( y axis) as a function of liver ( x axis; Rho=0.62, P =0.021). ( c ) A scatterplot of mouse syntenic PCSK9 DNA methylation in blood ( y axis) as a function of liver ( x axis; Rho=0.49, P =0.087). ( d – f ) Methylation-to-expression correlations: ( d ) Mouse syntenic PCSK9 DNA methylation in blood ( y axis) as a function of liver-specific PCSK9 gene expression ( x axis; Rho=0.67, P =0.0086). ( e ) Human PCSK9 DNA methylation in blood ( y axis) as a function of liver-specific PCSK9 gene expression from healthy controls ( n =47; x axis; Rho=0.41, P =0.046). ( f ) PCSK9 CpG1 DNA methylation ( x axis) as a function of plasma levels of PCSK9 ( y axis) in individuals with AUD ( n =90; Rho=0.31, P =0.0027). Full size image Functional relevance of PCSK9 methylation on expression Next, we assessed the functional relevance of peripherally measured PCSK9 DNA methylation for PCSK9 levels in plasma and gene expression in liver. Using our mouse model, where multiple tissue types could be obtained from the same animal, we observed a significant positive correlation of syntenic PCSK9 DNA methylation and liver gene expression (Figure 3d ), which was consistent with observations in humans (Figure 3e ).
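The cross-tissue and methylation-to-expression relationships above are reported as Spearman rank correlations (Rho). A minimal sketch with simulated tissue pairs follows; the values are placeholders, not the study measurements.

```python
# Minimal sketch of the cross-tissue rank correlations reported above:
# Spearman's Rho between per-animal methylation values in two tissues.
# The arrays are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
liver = rng.uniform(30, 60, size=14)          # % methylation, tissue 1
brain = liver + rng.normal(0, 5, size=14)     # correlated tissue 2

rho, p = stats.spearmanr(brain, liver)
print(f"Rho = {rho:.2f}, P = {p:.3f}")
```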
Furthermore, in humans, we observed a significant correlation of PCSK9 CpG1 methylation with PCSK9 plasma levels in AUD participants (Figure 3f ). Effect of alcohol on PCSK9 methylation and expression Since we consistently observed a decrease in PCSK9 DNA methylation with alcohol exposure, we next assessed liver PCSK9 gene expression and protein levels in the translational mouse model and in humans. Alcohol exposure was associated with significantly lower gene expression and protein levels of PCSK9 in the mouse model (Figures 4a–c ). In a cohort of human AUD liver transplant cases ( n =50) and healthy controls ( n =47; Supplementary Table S7 ), a significantly lower PCSK9 gene expression level was observed in alcoholic cirrhosis cases relative to controls (Figure 4e , Student’s t -test; alcohol exposure=0.3517±0.125, no alcohol exposure=0.9992±0.1388, P < 0.01). Interestingly, we observed a marked increase in methylation in the human liver transplant cases (Figure 4d ), which was contrary to our prediction of low methylation leading to low expression. The high methylation and low expression observed in liver tissue may be due to the direct toxic effects of alcohol. Therefore, a portion of the observed decrease in liver PCSK9 expression may be accounted for by hepatocyte loss/toxicity in the assayed tissue. In fact, we found that mouse PCSK9 expression was negatively correlated with liver aspartate aminotransferase (AST) values (Supplementary Figure 2 ). Furthermore, EWAS analysis of end-stage liver disease and control samples showed that estimated hepatocyte proportions were positively associated with human liver PCSK9 expression (Supplementary Figure 3 ). Independent of acute or chronic alcohol-induced cell loss dynamics, the possibility remains that continued chronic heavy alcohol exposure may result in an increase in PCSK9 expression with time, as we observed in a chronic rat model of alcohol exposure (Figure 5 and Supplementary Figure 4 ), suggesting a need for a cell-type-controlled longitudinal assessment of alcohol’s effects on liver PCSK9 expression. Figure 4 Alcohol exposure leads to lower PCSK9 expression in mice ( a – c ) and humans ( d , e ). ( a ) PCSK9 mRNA expression in liver as a function of alcohol exposure status from mice that underwent the ‘NIAAA model of chronic and binge ethanol feeding’ 35 for 10-day chronic-plus-binge ethanol exposure. PCSK9 expression was significantly lower in the alcohol-exposed group (Student’s t -test; alcohol exposure=0.2443±0.06553, no alcohol exposure=1±0.07283, **** P <0.0001). ( b ) PCSK9 protein expression in mouse liver was significantly higher in the control group compared to the case group that underwent the ‘NIAAA model of chronic and binge ethanol feeding’ 35 (Student’s t -test; alcohol exposure=0.2372±0.01829, no alcohol exposure=0.4867±0.0321, ** P <0.01). ( c ) Western blot of mouse liver PCSK9 levels as assessed by anti-PCSK9 antibody ab125251. ( d ) Methylation analysis of PCSK9 CpG1 in human alcohol-induced end-stage liver disease bulk tissue shows a marked increase in methylation, which might be due to general toxic effects of alcohol and end-stage organ disease (Student’s t -test; alcohol exposure=46.19±1.07, no alcohol exposure=37.63±0.8867, **** P <0.0001). ( e ) mRNA expression analysis of PCSK9 in human alcohol-induced end-stage liver disease bulk tissue reveals significantly decreased PCSK9 expression (Student’s t -test; alcohol exposure=0.3517±0.125, no alcohol exposure=0.9992±0.1388, ** P <0.01).
Full size image Figure 5 Model for PCSK9 interaction with alcohol. The model shown describes different stages of alcohol exposure and the corresponding PCSK9 methylation and expression findings. Consistent with mild use is our finding of alcohol exposure leading to hypomethylation (discovery data sets, Figures 1a–e ; plasma data set, Figure 3f ) with lower expression initially. Chronic alcohol use eventually leads to higher methylation and higher expression (rat data set, Supplementary Figure 4 ), whereas alcohol liver toxicity (acute, in the NIAAA mouse model, or chronic, in the liver transplant cases) leads to high methylation with ultimately low protein expression, consistent with end-stage liver disease (Figure 4 and Supplementary Figures 2 and 5 ). Low expression and high methylation might also be affected by changes in the cell-type composition of the tissue, as shown in end-stage liver disease tissue, which varies greatly from healthy liver tissue on a global scale (Supplementary Figures 3 and 5 ). The early, mild stage might reflect direct effects on transcription factor binding in the promoter region (Supplementary Figure 7 ), while later stages might be driven by liver toxicity effects and tissue composition changes (Supplementary Figure 2 ). Full size image PCSK9 DNA methylation measured in diseased liver tissue must be interpreted with caution, as liver PCSK9 DNA methylation varies in a cell-type-specific manner. PCSK9 CpG1 methylation exhibits significantly lower levels in mature hepatocytes relative to immature hepatocytes and other liver tissues (Supplementary Figure 5 ). This suggests that liver damage, and the significantly lower estimated hepatocyte proportions in the liver transplant cases (Supplementary Figure 3c ), may result in an observed higher PCSK9 DNA methylation relative to healthy controls. Indeed, we observed a significantly higher liver PCSK9 DNA methylation level in alcoholic cirrhosis cases (Figure 4d : Student’s t -test; alcohol abuse=46.19±1.07, no alcohol abuse=37.63±0.89, P =6.5 × 10 −9 ). Within AUD subjects, a significant association of PCSK9 CpG1 methylation with AST values was observed, further supporting this hypothesis (Rho=0.43, P =0.0405). Significantly higher cirrhosis-associated PCSK9 CpG1 DNA methylation was replicated in a second liver cohort derived from normal liver or cirrhotic liver arising from chronic hepatitis B or C viral infection, or alcoholism (Supplementary Tables S8 and S9 ). It is important to note that significantly higher PCSK9 DNA methylation was observed across all cirrhosis categories (Supplementary Table S9 ), supporting the assessment that liver damage confounds PCSK9 DNA methylation measurement in the liver. Biomarker for alcohol use/liver damage Finally, by building a statistical prediction model trained on PCSK9 DNA methylation status in AUD subjects and controls, we generated predictions of high versus low plasma PCSK9 levels from peripheral blood DNA methylation in an independent test set (Supplementary Figure 6 ), documenting the possible use of PCSK9 methylation as a biomarker for alcohol exposure. This finding is important as plasma PCSK9 levels have been shown to act as predictors of treatment response to statin therapy. 39
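The text does not specify the classifier behind the prediction model just described. The sketch below assumes a simple logistic regression on blood methylation features with a held-out test set, purely to illustrate the train-on-cases-and-controls, evaluate-on-independent-data design; all arrays are simulated placeholders.

```python
# Hedged sketch of the prediction model described above: train a
# classifier on blood methylation features to label high vs low plasma
# PCSK9, then evaluate on an independent test set. The logistic-regression
# choice and all data are illustrative assumptions, not the study pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X_train = rng.normal(size=(120, 5))                     # CpG features
y_train = (X_train[:, 0] + rng.normal(size=120)) > 0    # high vs low PCSK9
X_test = rng.normal(size=(40, 5))
y_test = (X_test[:, 0] + rng.normal(size=40)) > 0

clf = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"independent test-set AUC = {auc:.2f}")
```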
Discussion We used a cross-tissue, cross-phenotypic and translational approach to establish that alcohol exposure leads to methylation variation in the promoter region of PCSK9 , affecting PCSK9 expression, which might contribute to the regulation of low-density lipoprotein cholesterol (LDL-C). 40 , 41 , 42 Our finding is intriguing, given the direct action of alcohol on the liver, which might modulate epigenetic regulation of PCSK9 expression via direct interference with transcription or alteration of transcription factor binding, such as that of sterol regulatory element-binding protein-2 (SREBP-2) and hepatocyte nuclear factor-1α (HNF1α; Supplementary Figure 7 ). This mechanism could help explain the strong epidemiological evidence for the relationship between cardiovascular outcomes and alcohol use. Alcohol consumption and total mortality in cardiovascular disease (CVD) patients have been shown to have a J-shaped relationship, 43 such that light to moderate drinking is associated with a reduction in CVD mortality risk while heavy drinking increases CVD mortality risk. PCSK9 methylation alteration may contribute to the downturn of this J-curve. The eventual upturn of risk in later stages of the disease might be due to chronic effects of alcohol on the liver and subsequent metabolic effects (Figure 5 ). Our results and model suggest that PCSK9 CpG1 could serve as an important biomarker for monitoring the effects of alcohol exposure. Given the FDA approval of monoclonal antibodies as PCSK9 inhibitors in 2015 for patients with heterozygous familial hypercholesterolemia (HeFH) or clinical atherosclerotic cardiovascular disease, 44 as well as strong recent findings of RNA interference (RNAi) compounds significantly influencing PCSK9 levels, 45 our finding suggests that the interaction of alcohol and PCSK9 regulation should be further studied in clinical populations. Although statins are the standard of care for hypercholesterolemia, not all patients are able to tolerate statins, including patients with alcoholic liver disease. Understanding the relationship between alcohol and PCSK9 regulation may help determine an optimal treatment window for PCSK9 inhibitors in individuals with AUD (Figure 5 ). However, given that alcohol dependence is most common in individuals with low socioeconomic status, 46 access to currently expensive PCSK9 inhibitors may be difficult. Interestingly, we identified methylomic variation of PCSK9 through an investigation of brain tissue and alcohol-associated neuroimaging endophenotypes. Although the role of PCSK9 in the brain is not well understood, studies have suggested both direct and indirect effects on brain function. PCSK9 is directly involved in the degradation of the low-density lipoprotein receptor (LDL-R) as well as of other apolipoprotein E-binding receptors 47 , 48 and plays a protective role in nervous system development 49 by mediating cell differentiation. 47 , 50 Indirectly, reports link its function both to decreasing neurite outgrowth through interference with LDL-R neurite induction 51 and to mediating neuronal apoptosis 52 through a mechanism downstream of oxidized LDL. 53 PCSK9 has also been implicated in Alzheimer’s pathogenesis 53 , 54 , 55 , 56 , 57 through its role in apoptosis 53 , 58 , 59 , 60 and in the disposal of non-acetylated β-site amyloid precursor protein (APP)-cleaving enzyme 1 (BACE1), the rate-limiting enzyme in the generation of the Alzheimer’s disease amyloid β-peptide. 55 This is intriguing given recent findings that PCSK9 levels were significantly increased in cerebrospinal fluid in patients with Alzheimer’s disease. 30 Thus, future studies could investigate the effects of PCSK9 inhibitors in the prevention of cognitive decline in Alzheimer’s disease and vascular dementia.
It is likely that PCSK9 inhibitors will have pleiotropic effects that are unrelated to cholesterol regulation, such as effects on inflammation and immune function, 61 that might be relevant to brain physiology. Although the biological implications of PCSK9 regulation for brain function have not been fully characterized, an increased risk of neurocognitive adverse events in patients receiving PCSK9 inhibitors was recently reported. 62 However, new data from a large clinical trial showed no correlation between PCSK9 inhibitor treatment and neurocognitive side effects. 63 One potential explanation for the divergent findings of these studies might be unaccounted-for differences in alcohol exposure or other clinical characteristics. While our data support a dose-dependent effect of alcohol exposure on PCSK9 levels (Figure 5 ), an important limitation of our study is the lack of longitudinal data and of controlled dose escalation of alcohol exposure. In addition, our sample cohorts might have been biased, as AUD is highly prevalent in disadvantaged populations of low socioeconomic status 46 that often carry various other metabolic risk factors. Another potential confounding factor may have been the presence of rare genetic PCSK9 variants influencing PCSK9 methylation patterns and plasma levels. Although PCSK9 was initially identified through gain-of-function mutations affecting cholesterol metabolism in families with a history of familial hypercholesterolemia, 42 loss-of-function PCSK9 missense mutations identified in certain ethnic populations were associated with lowered plasma LDL-C levels and significant protection from CVD. 48 , 64 , 65 The presence of these rare PCSK9 variants may have affected the PCSK9 expression levels measured in human plasma samples; however, given the low frequencies of such variants, it is unlikely that this had a major impact on our sample. There are several technical limitations that should be carefully taken into account when interpreting EWAS results. We used array-based assessment of genome-wide methylation patterns in our study, which could be affected by common or rare single-nucleotide polymorphisms. While we were unable to explore to what degree underlying genetic variation might have contributed to our findings, the role of genome–epigenome interaction is complex and warrants further study. 66 To address some of these concerns, we conducted comprehensive sequential replication analyses in independent cohorts, as well as biological validation of our main finding. The field of epigenetics is rapidly expanding, and it is likely that future higher-resolution capture will provide more comprehensive coverage of CpG sites. In addition, as the costs of whole-epigenome next-generation sequencing become more affordable, it might be feasible to assess DNA methylation variation more completely, including 5-hydroxymethylcytosine variation. 67 Epigenetic regulation in AUD is complex and most likely involves multiple genes. In this paper, we focused on biological validation of PCSK9 . However, other intriguing candidates include PIP5K1C , which has been implicated in pain signaling, 68 and Hepatocellular Carcinoma Associated Transcript 5 ( HTA/HCCAT5 ) (Supplementary Table S4 ). These genes were consistently associated with disease phenotypes in our cross-tissue and cross-phenotypic discovery analysis of genome-wide methylomic variation in AUD.
Our findings could be biased by the cell-type specificity of PCSK9 expression (Supplementary Figures 3 and 5 ), suggesting a need for a cell-type-controlled assessment of alcohol’s effects on PCSK9 expression in brain, liver and blood. While the availability of brain tissue remains a challenge for most psychiatric phenotypes, we showed strong correlation of PCSK9 methylation across tissues (Figure 3 ), further supporting the idea that peripheral biomarkers might feasibly reflect methylation status in other tissues. Future studies in AUD might take advantage of the possibility of liver biopsies to investigate cell-specific epigenetic changes longitudinally. In summary, we provide evidence that alcohol exposure alters PCSK9 methylation and expression. Our data may shed light on novel mechanisms by which CVD risk can be altered. In addition, PCSK9 methylation might be an important biomarker that can track alcohol exposure and might predict treatment response, efficacy and dosing regimens for individuals undergoing PCSK9 inhibitor therapy. | In an analysis of the epigenomes of people and mice, researchers at Johns Hopkins Medicine and the National Institutes of Health report that drinking alcohol may induce changes to a cholesterol-regulating gene. The findings suggest that these changes to the gene, known as PCSK9, may explain at least some of the differences in how cholesterol is processed in people who drink alcohol, or may affect those taking a relatively new class of PCSK9 cholesterol-lowering drugs designed to reduce LDL cholesterol, commonly known as bad cholesterol. Epigenomics is the study of potentially heritable changes in gene expression caused by "environmental" or other processes that do not involve changes to the underlying DNA itself. In a report on the study, published Aug. 29 in Molecular Psychiatry, the researchers caution that their analysis looked at and found only associations between alcohol consumption and epigenetic changes, not cause-and-effect, and that more studies are needed to demonstrate any direct link. "Small amounts of alcohol are well known to be seemingly protective against heart disease in some studies, whereas heavy, chronic alcohol use can have detrimental effects on the liver as well as on the cardiovascular system," says Zachary Kaminsky, Ph.D., assistant professor of psychiatry and behavioral sciences. "Regulation of PCSK9 seems to correlate with this pattern and may be a significant underlying factor behind the variations in the relationship between cholesterol and cardiovascular disease when it comes to alcohol use." For their study, the researchers sought to measure how environment, in this case alcohol use, might lead to changes in which genes are expressed—turned on or off—when alcohol is consumed. They did so by examining information from so-called DNA "chips" known as microarrays that can reveal which genes have chemical methyl groups added across the whole genome. These chips assayed about 450,000 methylation sites at a time. Such "methylation" affects the level of gene expression. The researchers used several different sets of data. One set of data had DNA from the brains of 23 deceased people with documented alcohol dependence or abuse as compared to 23 healthy controls, all subjects whose DNA was gathered at The University of Sydney in Australia. Another set of data compared DNA from blood samples of 68 people who had documented alcohol dependence with 72 healthy controls recruited by the U.S.
National Institute on Alcohol Abuse and Alcoholism (NIAAA). And a third set of data from the U.S. National Institute of Child Health and Human Development compared brain DNA samples from 29 people who had major depression with 29 nondepressed controls, some of whom were known to abuse alcohol. When the investigators cross-compared epigenetic data from the three sets of data to find out what changes occurred in common in all three data sets and what changes did not, the common factor highlighted the gene PCSK9. The researchers then looked at DNA from blood samples from a different NIAAA study, which enrolled people seeking treatment for alcohol dependence, and measured their PCSK9 levels. This group was composed of 90 people with documented alcohol dependence. The researchers found the higher the methylation at the PCSK9 gene, the higher the PCSK9 level in blood. In another set of experiments, the researchers fed mice alcohol in their diets for 10 days along with a single-binge feeding—which is similar to what we see in people with alcohol use disorder—and compared their epigenetic profiles to mice not fed alcohol. They took brain, blood and liver samples and analyzed the DNA for methyl groups on the PCSK9 gene and PCSK9 protein levels in liver. They found that in mice fed alcohol, methylation of the PCSK9 gene increased in all the tissues measured. But the mice given alcohol had lower PCSK9 protein levels in the liver. Kaminsky says results from the liver samples puzzled them. These samples had many more methyl groups on PCSK9 but not the increased protein levels that would be expected. They looked back in human liver samples from people with alcohol dependence who underwent a liver transplant and noticed a similar pattern: more methylation on PCSK9 and unexpectedly lower PCSK9 protein levels. In samples from people who abused alcohol, the researchers detected that PCSK9 gene expression was only a third of the level in people who didn't abuse alcohol. "Given that alcohol is metabolized by the liver and can cause liver damage if used in large amounts over long periods of time, this result made immediate sense—the liver cells were dying, which is why we didn't see the high levels of PCSK9 protein as would be expected," says lead author of the study Falk Lohoff, M.D., of the NIAAA. The researchers confirmed this by looking at a set of human tissue samples from patients with end-stage liver disease. Kaminsky and Lohoff concluded that PCSK9 regulation by alcohol seems to be dynamic, with small amounts of alcohol leading to lower PCSK9 methylation and gene expression, while chronic heavy alcohol use leads to higher methylation and higher gene expression, with ultimately low PCSK9 protein levels due to liver damage. In people, PCSK9 is found at its highest levels in liver, but is also found in other tissues, such as brain and blood. PCSK9 binds to the "bad cholesterol" receptors and blocks uptake and breakdown of bad cholesterol by cells, leading to accumulation in the bloodstream, where it then presumably clogs arteries. A new class of cholesterol-lowering drugs, for those who either can't tolerate statins or for whom statins don't work, reduces PCSK9 protein levels and helps take bad cholesterol out of the bloodstream. The drugs are expensive and carry some side effects. "So far, the safety and interaction of PCSK9 inhibitors and alcohol use hasn't been studied, but this is an important area of research given how common alcohol use is," says Lohoff. 
"Our finding of a PCSK9-alcohol link is intriguing, since PCSK9 inhibitors might be particularly useful in lowering bad cholesterol for people who have high PCSK9 levels due to drinking." Also, he says, PCSK9 inhibitors aren't metabolized by the liver, which is often damaged in individuals with heavy chronic alcohol use, meaning they wouldn't put an extra strain on the liver as some other medications might. Among the limitations of their study, the researchers say, is that they only looked at tissue DNA samples at one point of time. Because of this, they don't know if the epigenetic changes they observed are stable over time or reversible. Additional authors on the study include Ilenna Jones of Johns Hopkins University; Jill Sorcher, Allison Rosen, Kelsey Mauro, Rebecca Fanelli, Reza Momenan, Colin Hodgkinson, Melanie Schwandt, David George, Andrew Holmes, Zhou Zhou, Ming-Jiang Xu, Bin Gao, Hui Sun, Monte Phillips, Christine Muench and George Koob of the NIAAA; and Leandro Vendruscolo of the National Institute on Drug Abuse. | 10.1038/MP.2017.168 |
Space | How the universe's brightest galaxies were born | Nature, nature.com/articles/doi:10.1038/nature15383 Journal information: Nature | http://nature.com/articles/doi:10.1038/nature15383 | https://phys.org/news/2015-09-universe-brightest-galaxies-born.html | Abstract Submillimetre-bright galaxies at high redshift are the most luminous, heavily star-forming galaxies in the Universe 1 and are characterized by prodigious emission in the far-infrared, with a flux of at least five millijanskys at a wavelength of 850 micrometres. They reside in haloes with masses about 10 13 times that of the Sun 2 , have low gas fractions compared to main-sequence disks at a comparable redshift 3 , trace complex environments 4 , 5 and are not easily observable at optical wavelengths 6 . Their physical origin remains unclear. Simulations have been able to form galaxies with the requisite luminosities, but have otherwise been unable to simultaneously match the stellar masses, star formation rates, gas fractions and environments 7 , 8 , 9 , 10 . Here we report a cosmological hydrodynamic galaxy formation simulation that is able to form a submillimetre galaxy that simultaneously satisfies the broad range of observed physical constraints. We find that groups of galaxies residing in massive dark matter haloes have increasing rates of star formation that peak at collective rates of about 500–1,000 solar masses per year at redshifts of two to three, by which time the interstellar medium is sufficiently enriched with metals that the region may be observed as a submillimetre-selected system. The intense star formation rates are fuelled in part by the infall of a reservoir gas supply enabled by stellar feedback at earlier times, not through major mergers. With a lifetime of nearly a billion years, our simulations show that the submillimetre-bright phase of high-redshift galaxies is prolonged and associated with significant mass buildup in early-Universe proto-clusters, and that many submillimetre-bright galaxies are composed of numerous unresolved components (for which there is some observational evidence 11 ). Main We conducted our cosmological hydrodynamic galaxy formation simulations using the new hydrodynamic code GIZMO 12 , which includes a model for the impact of stellar radiative and thermal pressure on the multiphase interstellar medium (ISM). This feedback both regulates the star formation rate (SFR), and shapes the structure in the ISM. Informed by clustering measurements of observed submillimetre galaxies (SMGs) 2 , we focus on a massive halo (with a dark matter mass of M DM ≈ 10 13 M ⊙ at z = 2, where M ⊙ is the solar mass and z is the redshift) with baryonic particle mass M bary ≈ 10 5 M ⊙ as the host of our ‘main galaxy’, and run the simulation to z = 2. The only condition of the tracked galaxy that is pre-selected to match the physical properties of observed SMGs is the chosen halo mass. We combine this with a new dust radiation transport package, POWDERDAY, that simulates the traverse of stellar photons through the dusty ISM of the galaxy, allowing us to robustly translate our hydrodynamic simulation into observable measures. We simulate the radiative transfer from a 200 kpc region around the main galaxy. This simulation represents the first cosmological model of a galaxy this massive to be explicitly coupled with dust radiative transfer calculations. The two codes and the simulation set-up are fully described in Methods. We define two distinct regions in the simulations. 
The ‘submillimetre emission region’ is the 200 kpc region surrounding the central galaxy in the halo of interest. This is the region where all of the modelled 850 µm emission comes from, and is what relates most directly to observations. The ‘submillimetre galaxy’ refers to the central galaxy in the halo. Physical quantities from the submillimetre galaxy are most applicable to high-resolution observations, as well as to placing these models in the context of other theoretical galaxy formation models. As we will show, the submillimetre emission from the region is generally dominated by the central submillimetre galaxy, though the contribution from lower-mass galaxies is often non-negligible. We track the submillimetre properties of the galaxies within the region from z ≈ 6. The SFRs of galaxies in the region rise from this redshift towards later times (z ≈ 2), owing to accretion of gas from the intergalactic medium (Fig. 1). As stars form, stellar feedback-driven galactic winds generate outflows and fountains, allowing recycled gas to be available for star formation at later times (Extended Data Fig. 1). This phenomenon shapes a star formation history that is still rising at z ≈ 2, in contrast to galaxy formation models with more traditional implementations of subresolution feedback, which peak at z ≈ 3–6 for galaxies of this mass 13, 14, 15. Mergers and global instabilities drive short-term variability in the global SFR, while outflows and infall driven by the feedback model can affect features in the star formation history in a somewhat cyclical ‘saw-tooth’ pattern. Figure 1: Evolution of physical and observable properties of the submillimetre emission region and the central galaxy. In each panel, the properties of the 200 kpc submillimetre emission region are shown with thick solid lines, while those of the central galaxy are given by thin dashed lines. a, Stellar and dust mass; b, SFR; c, predicted observed 850 µm flux density; d, specific SFR, sSFR (SFR/M*). The SFR is averaged on 50 Myr timescales, and includes a correction factor of 0.7 for mass loss. Locations of major galaxy mergers (>1:3) are noted by green vertical ticks on the top axis of b. The light purple shaded region in c shows when the galaxy would be detectable as an SMG with SCUBA (S₈₅₀ > 5 mJy). The pink and purple shaded regions in d show the rough ranges for the z = 2 main-sequence (MS) and starburst regimes; the grey region in d denotes below the main sequence. At its earliest stages (z ≈ 4–6), the integrated SFR from the galaxies in the region varies in the approximate range 100–300 M⊙ yr⁻¹, with a significant stellar mass, (0.5–1) × 10¹¹ M⊙, already in place, comparable to some high-redshift detections 16. Feedback from massive stars enriches the ISM with metals, and the dust content simultaneously rises. By z ≈ 3, the combination of gas accumulation and substantial metal enrichment drives an increase in the dust mass by a factor of ∼50, with masses approaching ∼1 × 10⁹ M⊙. Radiation from the delayed peak in the SFR interacting with this substantive dust reservoir drives the observed 850 µm flux density to detectable values of >5 mJy. The galaxies associated with the main halo enter a long-lived submillimetre-luminous phase, with a lifetime of ∼0.75 Gyr.
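As a rough plausibility check of how such a long duty cycle translates into an observable space density (the arithmetic is discussed later in the text), the sketch below multiplies an assumed halo abundance by the fraction of the relevant cosmic epoch spent submillimetre-luminous. The halo abundance and redshift window here are illustrative round numbers of our own choosing, not values taken from the paper.

```python
# Hypothetical back-of-envelope estimate of the comoving SMG abundance.
t_duty = 0.75    # Gyr, submillimetre-luminous lifetime from the simulation
t_window = 1.1   # Gyr, assumed cosmic time elapsed between z ~ 3 and z ~ 2
n_halo = 2.2e-5  # h^3 Mpc^-3, assumed abundance of ~1e13 Msun haloes (illustrative)

n_smg = n_halo * min(t_duty / t_window, 1.0)
print(f"expected SMG abundance ~ {n_smg:.1e} h^3 Mpc^-3")  # ~1.5e-5; observed ~1e-5
```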
While our main model is only run to z = 2 owing to computational restrictions for models of this resolution, tests with lower-resolution models reveal that at later times (z < 1.5), a declining SFR due to inefficient accretion as well as an exhausted gas supply drives a drop in the submillimetre flux density (for more details, see Methods). The star formation history of galaxies residing in haloes with M_DM ≈ 10¹³ M⊙ (at z = 2), as controlled by the underlying stellar feedback, provides a physical explanation for the peak in the observed SMG redshift distribution at z = 2–3 (ref. 17). During the submillimetre-luminous phase, the emitting region is almost always occupied by multiple detectable galaxies. In Fig. 2, we present gas surface density projections of six arbitrarily chosen snapshots during the evolution of the submillimetre-luminous phase (z = 2–3). The panels are 250 kpc on a side; for reference, the full-width at half-maximum (FWHM) of the Submillimetre Common-User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope, the first instrument to detect SMGs, is ∼125 kpc at z ≈ 2. Multiple clumps of gas falling into the central galaxy are nearly always present. The observed flux density from the region is typically dominated by the central galaxy, with (on average) ∼30% arising from emission from subhaloes (Extended Data Fig. 2). The submillimetre flux density of the central galaxy rises dramatically between z ≈ 2–3, reaching a peak value of ∼20 mJy. Owing to contributions from subhaloes surrounding the central galaxy, the flux from the overall 200 kpc region can exceed this, peaking at ∼30 mJy. Similarly extreme systems have recently been detected with the Herschel Space Observatory and the South Pole Telescope 4, 18, 19. Figure 2: Surface density projection maps of the 250 kpc region around the central submillimetre galaxy for redshifts z ≈ 2–3. The submillimetre emission region probed in surveys typically encompasses a central galaxy in a massive halo that is undergoing a protracted bombardment phase by numerous subhaloes. Some of the brightest SMGs arise from numerous galaxies within the beam in a rich environment (bottom right panel). The colour coding denotes the gas column density (N_H), with the colour bar on the right. While the central galaxy is being bombarded by subhaloes over a range of mass ratios during the submillimetre-luminous phase, major galaxy mergers akin to local prototypical analogues such as Arp 220 or NGC 6240 do not drive the onset of the long-lived submillimetre-luminous phase in the central galaxy. In Fig. 1, we highlight when the galaxy undergoes a major merger with mass ratio ≥1:3. While major mergers are common at early times (and indeed drive some short-lived bursts in star formation), the bulk of the submillimetre-luminous phase at later times (z ≈ 2–3) occurs nearly a gigayear after the last major merger. The ratio of the SFR to its integral over cosmic time (the specific SFR) of the overall emitting region is generally on the main sequence of galaxy formation at z ≈ 2 (defined as the main locus of points on the SFR–stellar mass (M*) relation), although the central galaxy can have values comparable both to main-sequence galaxies between z ≈ 2–3 and to outliers. One consequence of a model in which SMGs typically lie on the main sequence of star formation is that the gas surface densities show a broad range, ∼10²–10⁴ M⊙ pc⁻² (Extended Data Fig.
3), as well as diverse gas spatial extents (Fig. 3). This is manifested observationally in the broad swath occupied by SMGs in the Kennicutt–Schmidt star formation relation 1. The spatial extent and surface density of the gas are to be contrasted, however, with local merger-driven ultraluminous infrared galaxies, which exhibit typical FWHM radii of ∼100−500 pc (ref. 20). Idealized galaxy merger simulations with initial conditions designed to form SMGs further underscore this contrast, as they also result in compact morphologies during final coalescence, and can be inefficient producers of submillimetre radiation owing to increased dust temperatures 8. Figure 3: Gas and stellar radius distribution for the central submillimetre galaxy. The orange histogram denotes the half-mass radius of the stars, while the blue shows the gas. The gas in the central galaxy is more extended than the (subkiloparsec) scales expected from major mergers, although still sufficiently compact that it will remain unresolved even with approximately arcsecond resolution. The ordinate is weighted by the time the galaxy spends in the bin, and the overall normalization of the distribution is arbitrary. The central submillimetre galaxy is amongst the most massive and highly star-forming of galaxies at this epoch. The stellar masses are diverse, in the range ∼(1–5) × 10¹¹ M⊙, comparable to recent measurements of this population 21, and consistent with constraints from abundance matching techniques 22. The molecular gas fractions of the central galaxy (f_gas ≡ M_H2/(M_H2 + M*)) decline with stellar mass, and range from ∼40% at lower stellar masses to ≲10% at the highest masses. This is in agreement with observations 23, although it is dependent on the conversion from carbon monoxide (¹²CO) luminosity to H₂ gas mass. We note that these predictions are quantitatively different from those produced by previous cosmological efforts in this field, with some predicted gas fractions exceeding f_gas = 0.75 (refs 7, 9) and median stellar masses as low as ∼10¹⁰ M⊙ (ref. 7). We present plots of the gas fractions and calculated spectral energy distributions (SEDs) of our model SMG in the context of observations in Extended Data Figs 4 and 5. The modelled gas distributions within the central galaxy, which range from ∼1 kpc to 8 kpc, compare well with recent dust maps observed using the Atacama Large Millimetre Array 24. The stellar masses, gas fractions and lifetimes are in agreement with some previous lower-resolution cosmological efforts 10, although the predicted SFR and luminosity from this model are substantially larger. The SFR of the group of galaxies in the region peaks at ∼1,500 M⊙ yr⁻¹. Importantly, up to half of the total infrared luminosity can come from older stars with ages t_age > 0.1 Gyr. Using standard conversions 25, the estimated SFR from the integrated infrared SED (3–1,100 µm) can exceed ∼3,000 M⊙ yr⁻¹ (Extended Data Fig. 6), and hence infrared-based SFR derivations of dusty galaxies at high z may over-estimate the true SFR by a factor of ∼2. Indeed, the contribution of satellite galaxies to the global SFR, along with the contribution of old stars to the infrared luminosity, may relieve some tensions between the inferred SFRs of submillimetre galaxies and the massive galaxies modelled in cosmological hydrodynamic simulations 10.
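To make the over-estimate concrete, here is a small sketch of the kind of "standard conversion" from infrared luminosity to SFR that the text alludes to. We use the widely cited Kennicutt (1998) calibration for the 8–1,000 µm luminosity; the paper integrates 3–1,100 µm and does not state which calibration it adopts, so the constant and the input luminosity below are assumptions for illustration.

```python
L_SUN = 3.846e33  # erg s^-1

def sfr_from_lir(L_ir_in_Lsun):
    """Kennicutt (1998)-style calibration: SFR [Msun/yr] = 4.5e-44 * L_IR [erg/s]."""
    return 4.5e-44 * L_ir_in_Lsun * L_SUN

L_ir = 1.8e13                # Lsun, assumed infrared luminosity of the emitting region
sfr_ir = sfr_from_lir(L_ir)  # ~3,100 Msun/yr inferred from the IR SED alone
sfr_true = sfr_ir / 2.0      # if old (>0.1 Gyr) stars power ~half of L_IR
print(f"{sfr_ir:.0f} -> {sfr_true:.0f} Msun/yr")
```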
The end-product of the central submillimetre galaxy at z ≈ 2 is a galaxy with a stellar mass of ∼(4–5) × 10¹¹ M⊙ that is distributed over a compact region of ∼1–5 kpc, and gas that is distributed similarly (Fig. 3). This is similar in extent and mass to the observed z ≈ 2 compact quiescent galaxy population, which has a mean half-light radius of R_e ≈ 1.5 kpc, a stellar mass of M* > 10¹¹ M⊙ and ages t_age ≈ 0.5–1 Gyr (ref. 26), suggesting a plausible connection between the galaxy populations. Indeed, a calculation of the stellar velocity dispersion along three orthogonal sightlines of the central galaxy during the submillimetre-luminous phase results in σ* ≈ 600–700 km s⁻¹, comparable to measurements of high-z compact quiescents. A large sample of simulated SMGs would allow for a robust analysis of the expected abundances of SMGs and compact quiescents at z ≈ 2. Our picture for the formation of SMGs suggests that they are not transient events, but rather natural long-lived phases in the evolution of massive haloes. The ∼0.75 Gyr duty cycle combined with the comoving abundance 27 of dark matter haloes of this mass results in an expected abundance of our model SMGs of ∼1.5 × 10⁻⁵ h³ Mpc⁻³, comparable to the ∼10⁻⁵ h³ Mpc⁻³ observed for SMGs 28. While modelling the full number counts involves convolving the typical duty cycle as a function of halo mass with halo mass functions over a range of redshifts, the approximate abundances implied by this model are encouraging. This model suggests that galaxies that form in haloes of mass M_DM ≈ 10¹⁴ M⊙ at z = 0 will represent typical SMGs near the peak of their redshift distribution. Lower-mass galaxy models do not achieve the requisite SFR and metal enrichment to generate submillimetre-luminous galaxies (see Methods). More extreme SMGs that are being detected at z = 5–6 (refs 29, 30) may form in even more massive (and rarer) haloes than those considered here. Methods Cosmological hydrodynamic zoom simulations We utilize a newly developed version of TreeSPH that employs a pressure–entropy formulation of smoothed particle hydrodynamics (SPH) 31 that obviates many of the potential discrepancies noted between grid-based codes, traditional SPH codes and moving-mesh algorithms 32, 33, 34. In particular, we employ the hydrodynamic code GIZMO 12 in P-SPH mode, which conserves momentum, energy, angular momentum and entropy, and includes newly developed algorithms to treat the artificial viscosity, entropy diffusion and time-stepping 31, 35. The gravity solver is a modified version of the GADGET-3 solver 36, and an updated softening kernel to better represent the potential of the SPH smoothing kernel is included 37. The simulations are fully cosmological zoom-in calculations of the evolution of individual galaxies. A 144 Mpc³ cosmological volume was simulated at low resolution down to redshift z = 0 with dark matter only. The halo of interest was identified, and re-simulated at much higher resolution with baryons included. The initial conditions were generated with the MUSIC code 38. We simulate four zoom galaxies—one is our main galaxy, and the other three are at varying resolutions and masses for the purposes of testing. The main galaxy of interest to this study resides in a dark matter halo of mass M_DM = 3 × 10¹³ M⊙ at z = 2. The initial baryonic particle masses in the high-resolution region were 2.7 × 10⁵ M⊙, and the minimum baryonic/stellar/dark matter force softening lengths were 9/21/142 proper pc at z = 2.
The physical properties of all of the modelled galaxies are presented in Extended Data Table 1. The baryonic physics implemented into GIZMO is developed on the basis of extensive tests studying idealized simulations of both isolated disks and galaxy mergers 39, 40, 41, 42, 43, 44. The gas cools using a cooling curve updated relative to standard implementations 45 in SPH codes, which includes both atomic and molecular line emission 46. The modelled ISM is multiphase. The neutral ISM is broken into an atomic and a molecular component following algorithms that scale the molecular fraction with column density and gas-phase metallicity 47, 48. Star formation occurs in molecular gas above a threshold density (here, this is set to n_thresh = 10 cm⁻³). Star formation is further restricted to gas that is locally self-gravitating, that is, with virial parameter α ≡ β′‖∇ ⊗ v‖²/(Gρ) < 1, where α is the usual virial parameter, β′ ≈ 1/2, v is the gas velocity, G is the gravitational constant and ρ is the density of the gas. This follows from studies 49 that show that the predicted spatial distribution of star formation in galaxies is more realistic when using a gas self-gravitating criterion compared to a variety of other algorithms (including a fixed density threshold, a pure molecular-gas law, a temperature threshold, a Jeans criterion, a cooling-time criterion and a converging-flow criterion). The SFR follows a volumetric relation, ρ̇_* = ρ_mol/t_ff, where ρ̇_* is the star formation rate density, ρ_mol the molecular gas density, and t_ff the local gas free-fall time. In other words, stars are allowed to form with 100% efficiency per free-fall time. The star formation is subsequently self-regulated by stellar feedback, resulting in a time-averaged efficiency on galaxy scales of ε_ff ≈ 0.005–0.1 (ref. 39). Once stars have formed, they can impact the ISM via various feedback mechanisms. Assuming a Kroupa 50 stellar initial mass function, and using STARBURST99 51 for luminosity, mass-return and supernova rate calculations as a function of stellar age and metallicity, we include the following forms of stellar feedback. Radiation momentum deposition. At each timestep, the gas near young stars is affected by a momentum flux of the form ṗ_rad = (1 − e^(−τ_UV/optical))(1 + τ_IR)L_incident/c, where L_incident is the incident luminosity, τ_UV/optical is the optical depth to UV/optical photons, τ_IR = Σ_gas κ_IR, Σ_gas is the column of gas and κ_IR = 5(Z/Z⊙) cm² g⁻¹. Supernovae and stellar winds. We utilize tabulated type-I and type-II supernova rates 51, 52; if a supernova occurs during a timestep, thermal energy and radial momentum are injected within a smoothing length of the star. Gas and metal return are included as well. Stellar winds are similarly included, with energy, wind momentum, mass and metals deposited within a smoothing length. Photoheating of H ii regions. The production rate of ionizing radiation from stars determines the extent of H ii regions (allowing for overlapping regions). These regions are heated to 10⁴ K if the gas is below that threshold. We utilize models TL37 SR and TL37 HR in Extended Data Table 1 to test the convergence properties of our simulations. One model is run with the same baryonic mass resolution as our main model (standard resolution; SR), and one at a factor of ∼8 higher resolution (high resolution; HR). In Extended Data Fig. 7, we show the modelled duty cycle above a given flux density as a function of flux density for these two models.
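The star-formation rules just described translate into only a few lines of logic. The sketch below is our schematic reading of them; the virial-parameter expression mirrors the reconstruction above, and the function and variable names are ours, not the GIZMO source code.

```python
import numpy as np

G_CGS = 6.674e-8  # gravitational constant, cgs

def sfr_density(rho, f_mol, grad_v_norm2, n_H, n_thresh=10.0, beta_prime=0.5):
    """Schematic star-formation rate density (g cm^-3 s^-1) for one gas element."""
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho))  # local free-fall time
    alpha = beta_prime * grad_v_norm2 / (G_CGS * rho)   # virial parameter (our reconstruction)
    if (n_H > n_thresh) and (alpha < 1.0):              # dense, molecular, self-gravitating
        return f_mol * rho / t_ff                       # rho_dot_* = rho_mol / t_ff
    return 0.0
```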
We see that the shortest-lived (≲200 Myr) emission spikes present in the standard resolution model may not be converged in the highest resolution model. Notably, emission with longer duty cycles is either converged, or underpredicted in our standard resolution model, suggesting that the relatively long-lived submillimetre-luminous phase is robust. We show the M*–z relation for the central galaxy in Extended Data Fig. 8 as compared to observational constraints 22. The central galaxy has a stellar mass a factor of ∼2 greater than the observed median stellar mass for comparable-mass haloes at this epoch. The model galaxy may represent an outlier in the M*–z relation at these redshifts. Indeed, the thickness of the observational constraints shows the uncertainty, not the range, in possible values. Alternatively, it is possible that the inclusion of feedback from an active galactic nucleus (AGN) could impact the stellar mass buildup in the galaxy, although the level to which black hole growth can impact star formation near the submillimetre-luminous phase is unclear. Some models have shown that AGN can grow efficiently in the absence of major mergers 53, 54, while other models and observations suggest that mergers may be necessary to grow massive black holes 55, 56, 57. The last major merger occurs ∼1 Gyr before the submillimetre-luminous phase. Tests with our low-resolution model (m13m14) show that without AGN feedback, residual star formation drives a factor of ∼2 increase in stellar mass at late times (z ≈ 0−1). Finally, we note that a higher mass resolution model could potentially also result in decreased final stellar masses. In our convergence tests, the final M* (at z = 2) of the HR run is ∼60% that of the SR run. Dust radiative transfer calculations To calculate the inferred observational properties of our simulated galaxies, we developed a dust radiative transfer package, POWDERDAY. In short, POWDERDAY takes hydrodynamic simulations of evolving galaxies, projects the gas properties onto an adaptive mesh and calculates the radiative transfer from the stellar sources through the dusty ISM until an equilibrium dust temperature is achieved. In detail, we identify galaxies using SKID to locate bound groups of baryonic particles 58, 59, and track their progenitors back in time 60, 61. Galaxies and haloes are required to contain at least 64 particles each in order to be identified. We cut out a 200 kpc (side length) region around the galaxy of interest, and subdivide the domain into an adaptive grid with an octree memory structure. Formally, we begin with one cell encompassing the entire 8 × 10⁶ kpc³ radiative transfer region. The cells then recursively subdivide into octs until each cell contains no more than a threshold number of gas particles (we employ n_subdivide,thresh = 64, although experiments with n_subdivide,thresh = 32 show converged results). The physical properties of the gas particles are projected onto the octree using a spline smoothing kernel 62. The spectral energy distributions of stars are calculated on the fly with the Flexible Stellar Population Synthesis code, FSPS 63, 64, through PYTHON-FSPS, a set of PYTHON hooks for FSPS. The SEDs are calculated as simple stellar populations with ages and metallicities determined by the hydrodynamic simulation, and assuming a Kroupa IMF. The radiative transfer happens in a Monte Carlo fashion using the three-dimensional dust radiative transfer solver, HYPERION 65.
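For concreteness, the recursive octree refinement described above can be sketched as follows, before the description of the temperature iteration continues in the next paragraph. The data layout and function are hypothetical stand-ins of ours, not POWDERDAY's actual API.

```python
import numpy as np

def build_octree(positions, center, half_size, n_thresh=64, leaves=None):
    """Recursively split a cubic cell until each leaf holds <= n_thresh particles."""
    if leaves is None:
        leaves = []
    if len(positions) <= n_thresh:
        leaves.append((center, half_size, positions))  # leaf cell: project SPH kernel here
        return leaves
    for octant in range(8):                            # split into eight child octs
        sign = np.array([1 if (octant >> k) & 1 else -1 for k in range(3)])
        c_child = center + sign * (half_size / 2.0)
        lo, hi = c_child - half_size / 2.0, c_child + half_size / 2.0
        mask = np.all((positions >= lo) & (positions < hi), axis=1)
        build_octree(positions[mask], c_child, half_size / 2.0, n_thresh, leaves)
    return leaves

# e.g. for the 200 kpc region: build_octree(gas_xyz, center=np.zeros(3), half_size=100.0)
```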
The code uses an iterative methodology to determine the radiative equilibrium temperature 66, and we determine convergence when the energy absorbed by 99% of the cells has changed by less than 1% between iterations. We assume a dust grain-size distribution comparable to that of the Milky Way 67, with R_V ≡ A_V/E(B − V) = 3.15, where A_V is the visual extinction and E(B − V) is the difference between the B- and V-band extinctions. The dust emissivities are updated to include an approximation for polycyclic aromatic hydrocarbons (PAHs) alongside thermal emission 68. We assume a constant dust-to-metals ratio of 0.4, motivated by both Milky Way and extragalactic observational constraints 69, 70, 71. The underlying HYPERION code has passed the standard benchmarks for codes of this type 72, and we found that POWDERDAY compares well against other publicly available dust radiative transfer codes 73, 74, 75 in test starburst SPH galaxy merger simulations. Parameter choices In Extended Data Fig. 9, we present a number of tests of our parameter choices for the radiative transfer calculations. We show the predicted 850 µm light curve from our lowest-resolution model (m13m14) using fiducial parameters, as well as three parameter-choice variations. We first ask whether our chosen radiative transfer grid size affects our principal results. Our fiducial model is a 200 kpc (on a side) box cut out of the global cosmological simulation centred on the halo of interest. This size was chosen to reflect a rough average of the (sub)millimetre beam sizes typically used to detect SMGs. For example, assuming Planck 2013 cosmological parameters 76, the Submillimetre Common-User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope (JCMT) has a 15″ full-width at half-maximum (FWHM) beam at 850 µm. At z = 2 this corresponds to ∼128 kpc. At the same redshift, the beam of AzTEC and LABOCA at 1 mm on the JCMT corresponds to ∼163 kpc (19″); the South Pole Telescope (SPT) has a beamsize of 540 kpc at 1.4 mm (63″); and Herschel’s SPIRE instrument ranges from 154 kpc to 308 kpc (250–500 µm; 18″–36″). Because a few notable beam sizes (of particular relevance, the SCUBA beam) are smaller than our assumed box size of 200 kpc, we have run an additional model with a box length of 100 kpc (and all other parameters exactly the same). We highlight the resultant 850 µm light curve from this model in the top right panel of Extended Data Fig. 9. When comparing to our fiducial model, it is apparent that our results are robust to the highest-resolution beams that have been used for SMG surveys at single-dish facilities to date. We additionally investigate whether our inclusion of PAHs in our model makes any difference to the calculated submillimetre-wave flux density of our model galaxy. This is presented in the bottom left panel of Extended Data Fig. 9. Again, we note minimal impact on the submillimetre SED of our model. Finally, we ensure that our results are converged with the number of photons emitted. We fiducially run 10⁷ photons per grid (roughly 100 per cell). In the bottom right panel of Extended Data Fig. 9, we show the results from a run with 10⁸ photons per grid, and show that the results are robust against this parameter choice. Relation to other models Historically, both the methods used and the physical models for SMG formation in numerical simulations are quite varied. Here, we summarize these methods and results, and place our own model into this context.
Broadly, there are three classes of SMG formation models: cosmological semi-analytic models (SAMs), idealized non-cosmological simulations and cosmological hydrodynamic models. The present model falls into the last category. Our model is the first self-consistent cosmological simulation with baryons and bona fide radiative transfer to form a submillimetre galaxy with physical properties comparable to those observed. The initial forays into this field were typically with SAMs. This is because SAMs are computationally inexpensive, and allow for a large search of physical parameter space relatively easily. SAMs either utilize analytic halo merger trees, or directly simulate them, and then employ analytic prescriptions to describe the central galaxies. The Durham SAM 7, 77 couples galaxies formed in a semi-analytic model with dust radiative transfer. These simulations model galaxies that have axisymmetric geometries consisting of a disk and a bulge. Young stellar populations are assumed to still be enveloped in their birth clouds, and thus experience additional attenuation. This model suggests that roughly 22% of SMGs originate from major mergers and the remainder from minor mergers, and that the stellar IMF is flat during the starburst. The typical lifetime for the submillimetre-luminous phase is ∼100 Myr (a factor of ∼7.5 lower than found in our work), galaxies are extremely gas rich (f_gas ≈ 75%), and stellar masses are a factor of ∼10 lower than predicted by our model (M* ≈ 2 × 10¹⁰ M⊙). While the stellar masses of SMGs are debated 21, 78, 79, the gas fractions appear to be uniformly lower in observations 4, 18, 23, 80, 81, and a flat stellar IMF is probably ruled out by CO dynamical mass measurements 82. As an alternative to cosmological simulations, a number of studies have explored SMG formation in idealized simulations 8, 83, 84, 85, 86. These studies evolve hydrodynamic models of idealized disks and mergers over a range of merger mass ratios, and combine these with dust radiative transfer simulations 74. These models infer halo masses and stellar masses for SMGs comparable to those modelled here. This said, in the idealized galaxy models, ∼30%–70% of the SMGs (flux dependent) originate in merger-driven starbursts, substantially higher than what is found for our model. Some studies 8 have noted that binary mergers that cause SMGs may break up into multiples at high resolution owing to the contribution to the total flux of individual inspiralling disks. Because idealized simulations are non-cosmological in nature, comparing the multiplicity inferred from these to our models is difficult: the major-merger multiplicity can only be two when considering galaxies at the same redshift. On the other hand, Extended Data Fig. 2 suggests that potentially larger multiplicity can be observed for physically associated clumps. To fully capture the cosmic environment of SMGs during their formation, as well as their baryonic structure and morphology, cosmological hydrodynamic simulations are probably the best tool. Thus far, cosmological hydrodynamic simulations used to simulate SMGs have not employed direct radiative transfer models 10. As such, inferring when a galaxy is an SMG in cosmological simulations has necessitated the use of parameterized emission models, such as assumed grey-body emission laws 9, or SFR thresholds 10.
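To illustrate what such a parameterized grey-body law looks like in practice, here is a minimal optically thin modified-blackbody sketch, L_ν ∝ M_dust κ_ν B_ν(T_dust). The opacity normalization, emissivity index β and temperature below are assumed illustrative values (literature choices vary by factors of a few), not parameters from any of the cited models.

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16  # Planck constant, light speed, Boltzmann (cgs)

def greybody_lum_nu(nu, M_dust, T_dust=35.0, beta=2.0, kappa_850=0.77):
    """Optically thin dust luminosity L_nu (erg s^-1 Hz^-1); M_dust in grams."""
    kappa = kappa_850 * (nu / (c / 0.085)) ** beta  # power-law opacity, pivot at 850 um
    B_nu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T_dust))
    return 4 * np.pi * M_dust * kappa * B_nu

M_dust = 1e9 * 1.989e33                    # ~1e9 Msun of dust, as in the simulation
print(greybody_lum_nu(c / 0.085, M_dust))  # rest-frame 850 um output, erg s^-1 Hz^-1
```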
The physical properties for SMGs derived from the most extensive of these studies 10 (that is, M * , M DM and f gas ) are similar to the model presented here, although with roughly a factor of ∼ 3 difference in SFR. Code availability We have made POWDERDAY available at , and GIZMO available at . | The brightest galaxies in our universe are fuelled by what their gravity sucks in, not through explosive mergers of star systems as scientists previously argued, researchers said Wednesday. In what may be the most complete explanation yet of how these enormous collections of stars and dust came to be, scientists found the galaxies pulled in hydrogen gas and then used it to pump out the equivalent of up to 2,000 Suns per year, according to a study in Nature. By comparison, our own galaxy—the Milky Way—turns out stars at the rate of about one "Sun" per year. The brilliant light put off by these so-called submillimetre galaxies (SMGs)—named for the part of the electromagnetic spectrum they use—is all but invisible to the naked eye. "The massive galaxy grows via [pulling in] gas from intergalactic space and forms stars at a steady but large rate for nearly a billion years," study co-author Desika Narayanan of Haverford College in the United States told AFP. These galaxies date from the early days of our roughly 14-billion-year-old universe, but researchers have only known about them for a couple of decades. Their brightness, which gives off 1,000 times the light of the Milky Way, is due mostly to their prolific output of stars. Movie shows rotating view of extreme infrared-luminous starburst region in the early Universe, just a few billion years after the Big Bang. The model suggests that extreme infrared-luminous regions observed by submillimetre-wave telescopes are often comprised of groups of galaxies in the early Universe that will grow to be massive clusters in the present day. Credit: Robert Thompson (NCSA) Scientists disagree on how the SMGs were born, but one of the favoured explanations is that they result from massive galaxies slamming together and exploding into an intense burst of star-making. But Narayanan says this theory can't account for all the qualities of super bright galaxies, especially their relatively large sizes, since mergers tend to make rather compact galaxies. Enigmatic galaxy To test their own gravity-based explanation, Narayanan and colleagues used super computers to simulate the creation of an SMG. Image shows distribution of galaxies across the infrared luminous region, at a given instance in time. The colours denote the gas density. The model suggests that extreme infrared-luminous regions observed by submillimetre-wave telescopes are often comprised of groups of galaxies in the early Universe (just a few billion years after the Big Bang) that will grow to be massive clusters of galaxies at the present day. Credit: Robert Thompson (NCSA) They found that the galaxies grew by pulling in gas that was then used to make stars which—because they were new—radiated exceptional amounts of light. Mergers did not have a significant impact, even if SMGs can include clusters of galaxies which bump up their brightness, the study concluded. "The galaxies collectively contribute to the local luminosity and render the system extremely bright," Narayanan said. The simulations were so complex that it took thousands of networked computers more than a month to do only a part of the calculations. 
The team also ran models to track how light would move through these newly-created galaxies to see if the simulated outcome would resemble the real thing. "These results provide one of the most viable models thus far for one of the most enigmatic (features of space) that we know about," Narayanan said. The researchers were astounded to discover that, according to their calculations, submillimetre galaxies remain super bright for almost a billion years. Usually, intensely luminous phenomena in the universe burn out relatively quickly—a mere tens of millions of years. | nature.com/articles/doi:10.1038/nature15383 |
Physics | New design results in compact, highly efficient frequency comb | Quanyong Lu et al, High efficiency quantum cascade laser frequency comb, Scientific Reports (2017). DOI: 10.1038/srep43806 Journal information: Scientific Reports | http://dx.doi.org/10.1038/srep43806 | https://phys.org/news/2017-03-results-compact-highly-efficient-frequency.html | Abstract An efficient mid-infrared frequency comb source is of great interest for high-speed, high-resolution spectroscopy and metrology. Here we demonstrate a mid-IR quantum cascade laser frequency comb with a high power output and a narrow beatnote linewidth at room temperature. The active region was designed with strong coupling between the injector and the upper lasing level for high internal quantum efficiency and a broadband gain. The group velocity dispersion was engineered for efficient, broadband mode-locking via four-wave mixing. The comb device exhibits a narrow intermode beatnote linewidth of 50.5 Hz and a maximum wall-plug efficiency of 6.5% with a spectral coverage of 110 cm⁻¹ at λ ~ 8 μm. The efficiency is improved by a factor of 6 compared with previous demonstrations. The high power efficiency and narrow beatnote linewidth will greatly expand the applications of quantum cascade laser frequency combs, including high-precision remote sensing and spectroscopy. Introduction Optical frequency combs 1, emitting a broad spectrum of discrete, evenly spaced narrow lines with well-defined phase, have become attractive laser sources for a variety of applications. In particular, they provide a unique combination of large wavelength coverage and high spectral resolution, and therefore allow for simultaneous, precise, and rapid spectroscopy of wide wavelength regions of interest 2, 3. This is of great importance to the mid-infrared (mid-IR) wavelength regime, in which many strong fundamental ro-vibrational molecular transitions take place. High-power frequency comb sources in the mid-IR range will revolutionize trace-gas analysis in environmental or medical monitoring applications 4. Mode-locked lasers 5 can directly emit frequency comb output and have been the dominant comb sources in the near-infrared (near-IR) spectral region. However, there are few comb counterparts in the mid-IR range. Difference frequency generation (DFG) 6 has been able to transform near-IR frequency combs into the mid-IR range by mixing them with a continuous-wave laser inside a nonlinear crystal, with nW-level power per comb tooth 7, 8. The power was further boosted up to the μW level by using an optical parametric oscillator 9 technique, which provides optical gain via parametric amplification 10. Microresonator-based frequency combs 11 have been able to produce mW power per mode in the mid-IR range by taking advantage of ultrahigh-Q microresonator designs and powerful pumping sources 12. Nevertheless, all of these techniques require external pumping sources and expensive optical components. Quantum cascade laser (QCL) frequency combs, on the other hand, have been demonstrated as promising semiconductor frequency comb sources in the mid-IR and THz realm in the past few years 13, 14, 15, 16. Since the frequency comb is directly generated inside the QCL without any extra optical components, QCL frequency combs are monolithic and chip-based comb sources offering great promise for high-speed, high-resolution spectroscopy.
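The defining property that makes a comb a comb, here as in the near-IR, is that every mode frequency is fixed by just two numbers, f_n = f_ceo + n·f_rep. A tiny numerical illustration follows; the values are round, assumed numbers of the same order as this device, and the offset frequency in particular is made up for illustration.

```python
import numpy as np

f_rep = 11.2e9             # Hz, mode spacing (cavity round-trip frequency)
f_ceo = 2.0e9              # Hz, assumed carrier-envelope offset (illustrative)
n = np.arange(3340, 3630)  # ~290 integer mode numbers near 37.5 THz (lambda ~ 8 um)
f_n = f_ceo + n * f_rep
print(f_n.size, np.allclose(np.diff(f_n), f_rep))  # 290 lines, all spaced by f_rep
```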
Mid-IR and THz spectroscopy experiments using QCL frequency combs have also recently been demonstrated with high precision and low noise 17, 18. Higher power and a uniform power distribution among the comb modes will always benefit applications, with less scanning time and higher sensitivity. Currently, a demonstrated QCL frequency comb at λ ~ 9 μm emits an average power per mode of about 0.6 mW at room temperature (20 °C) 15. The uniformity of the power distribution is further improved via Gires–Tournois interferometer (GTI)-coated QCL combs, but the average power on each mode remains about the same, 0.5 mW at −6 °C 19. Results and Discussion Here we report high-power room temperature QCL frequency combs at λ ~ 8.0 μm. A highly efficient strain-balanced active region was designed with broadband gain and low dispersion. The QCL frequency comb exhibits a very narrow intermode beatnote linewidth of 50.5 Hz, a high wall-plug efficiency (WPE) of 6.5%, a broad spectral coverage of 110 cm⁻¹ for 290 modes and a significantly improved average power per mode of ~3 mW. In this work, the QCL structure is based on a dual-core active region with strongly coupled, strain-balanced emitter designs at λ ~ 7.5 and 8.5 μm. Figure 1(a) shows the band structure of the longer-wavelength active region design at λ ~ 8.5 μm. Strong coupling between the upper lasing level 2 and the injector ground level 2′ was engineered by using a relatively thin injection barrier, which increases not only the quantum efficiency of the laser but also the gain bandwidth 20. Each structure was designed with a similar upper-level lifetime of ~0.6 ps and was engineered to minimize cross-absorption between the two cores. Given the same doping concentration of 2.5 × 10¹⁶ cm⁻³, the same maximum current density is expected for each active core. This is important to obtain a flat-top gain and a uniform power distribution for frequency comb operation. Figure 1(b) shows the simulated gain spectrum calculated at a current density of 2.0 kA/cm² following the description in ref. 20. The contribution of the gain from each active core is adjusted according to its optical confinement factor within the laser waveguide. The dispersions induced by the gain from single cores and dual cores are calculated through the Kramers–Kronig relation, as plotted in Fig. 1(b). Between wavelengths of 7.5 and 8.5 μm, gain-induced group velocity dispersions (GVDs) of less than 350 fs²/mm are achieved with the dual-core design. On account of the QCL material dispersion of −840 fs²/mm and the waveguide dispersion of 1000 fs²/mm at λ ~ 8.0 μm, a total GVD of 510 fs²/mm is estimated, which is sufficiently low for mode locking of dispersed cavity modes into evenly spaced comb modes via four-wave mixing. Figure 1 (a) Band structure of a strain-balanced active region designed at λ ~ 8.5 μm for the frequency comb. The layer sequence in nm, starting from the injection barrier, is 2.8 / 2.5/ 0.9 / 3.5 / 3.0/ 0.9 / 3.2 / 2.4/ 1.6 / 2.8 / 1.7/ 1.3 / 2.6 / 1.6/ 1.3 / 2.2 / 1.6 / 1.4 / 2.0 / 1.6 / 1.5 / 3.1/ 1.7 / 3.2/ 2.1 / 3.2. The barriers are in bold font, the wells are in normal font, the Ga0.47In0.53As insertions are in italic, and the underlined layers are doped to n = 1.7 × 10¹⁷ cm⁻³. The layer sequence for the design at λ ~ 7.5 μm is similar to ref. 20, with the injection barrier increased by 0.2 nm.
(b) Calculated gain and GVD spectra for the single-core design (dashed lines) and the dual-core design (solid lines) at a current density of 2.0 kA/cm². A Fourier transform technique was first performed to evaluate the amount of residual GVD inside a 4-mm-long high-reflection (HR) coated QCL frequency comb 21 (see Methods section). The time-domain interferogram taken by the Fourier transform infrared (FTIR) spectrometer is essentially the Fourier transform of the spectrum in the frequency domain. Due to the facet reflectivity, the sub-threshold spontaneous emission exhibits resonant cavity effects, resulting in "bursts" in the interferogram (Fig. 2(a)). Burst 1, as labelled in Fig. 2(a), corresponds to the moving mirror position at which the interferometer optical path difference matches a single round-trip optical distance within the laser cavity. The Fourier transform of this burst represents the phase spectrum and the amplified spontaneous emission (ASE) spectrum in the cavity. The GVD, defined as the group delay dispersion per unit length, is presented in Fig. 2(b); it is obtained by taking the second derivative of the relative phase and dividing by the total travelling distance inside the cavity. Clearly, the HR-coated comb device exhibits net positive GVD in the lasing spectral range. As the current increases towards threshold, the net GVD decreases to ~460 fs²/mm at 1270 cm⁻¹, which is sufficiently low for the modes to be efficiently coupled by four-wave mixing 15, 22. In addition, the modal gain is calculated by using the ASE spectra transformed from bursts 1 and 3, as shown in Fig. 2(c). A broad flat-top gain with a full width at half maximum (FWHM) of 350 cm⁻¹ also verifies the broad-gain active region design described above. Figure 2 (a) Interferogram generated by an FTIR when measuring the subthreshold spectrum of an HR-coated comb device. Individual bursts in the interferogram are labelled sequentially. The resolution is 0.11 cm⁻¹. (b) Measurement of the GVD and (c) gain of the HR-coated QCL comb as a function of current. High CW power operation at room temperature was obtained for this frequency comb device, as shown by the optical power–current–voltage (P–I–V) characterization in Fig. 3(a). The device emits a CW power of up to 880 mW with a threshold current density of 2.05 kA/cm². In pulsed-mode operation, the device emits up to 1.7 W with a threshold current density of 1.76 kA/cm². The slope efficiency and WPE are 2 W/A and 10.3% in pulsed-mode operation, and 1.7 W/A and 6.5% in CW operation at room temperature. This compares with WPEs of less than 1% in previous QCL frequency comb demonstrations 13, 15, 19. The spectral measurements were performed on a Bruker FTIR spectrometer with a liquid-nitrogen-cooled mercury-cadmium-telluride (MCT) detector in rapid-scan mode. The emission spectra shown in Fig. 3(b) reveal that the device operates in single mode in the lower current range, and exhibits a broad lasing spectrum at higher currents with a coverage of up to 110 cm⁻¹. The power-per-mode distribution at a current of 1.06 A is plotted on a logarithmic scale to further assess the uniformity of the emission, as shown in Fig. 4. The power distribution is much more uniform than in the previous demonstration 9, with over 1 mW of power for 77% of all the modes, and a high average power per mode of about 3 mW. The intermode spacings among the 290 frequency comb modes are rather constant, ~0.38 cm⁻¹.
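As a consistency check on the quoted mode spacing, the spacing of a Fabry–Pérot comb is set by the cavity optical length: Δν = 1/(2n_gL) in wavenumbers, or c/(2n_gL) in hertz. The group index below is an assumed, typical value for InP-based QCL waveguides rather than a number quoted in the paper; with it, the 4 mm cavity reproduces both the ~0.38 cm⁻¹ spacing and the ~11.2 GHz beatnote reported here.

```python
c = 2.998e10  # speed of light, cm/s
L = 0.4       # cm, cavity length (4 mm)
n_g = 3.35    # assumed group index for the InP-based waveguide

f_rep = c / (2 * n_g * L)  # beatnote / repetition frequency, Hz
fsr = 1.0 / (2 * n_g * L)  # mode spacing in wavenumbers, cm^-1
print(f"{f_rep / 1e9:.2f} GHz, {fsr:.3f} cm^-1")  # ~11.19 GHz and ~0.373 cm^-1
```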
Figure 3 (a) P–I–V characterization of a 4-mm-long HR-coated QCL comb device in CW operation at 293 K. (b) Lasing spectra at different currents changing from 0.75 to 1.1 A. The resolution is 0.125 cm⁻¹. Figure 4: Power-per-mode distribution at a current of 1.06 A. The inset is the intermode spacing among different modes. To assess frequency comb operation via phase locking of adjacent laser modes, the linewidth of the intermode beating was measured at the round-trip frequency with a high-speed quantum well infrared photodetector (QWIP) and a spectrum analyzer (Agilent E4407B). A high-stability current driver (Wavelength Electronics QCL2000 LAB) was used for low-noise testing. Figure 5(a) plots the beatnote spectra at different currents. The spectra were acquired using a spectrum analyzer with a span range of 3 MHz and a resolution bandwidth (RBW) of 10 kHz. The beatnote spectra stay fairly narrow in the current range of 780–938 mA, with a full width at half maximum (FWHM) limited by the RBW. To further evaluate the linewidth, high-resolution scans were performed with an RBW of 30 Hz and a span range of 100 kHz. Extremely narrow beatnote linewidths of 50.5 Hz at 11.18096 GHz and 305 Hz at 11.1711 GHz were observed at currents of 800 and 938 mA, respectively (Fig. 5(c and d)). The corresponding powers and spectral ranges are 400 mW and 610 mW, and 75 cm⁻¹ and 95 cm⁻¹, respectively. This narrow linewidth indicates that a well-defined phase is established among the frequency comb modes. A closer look at the beatnote linewidth and frequency as functions of current is presented in Fig. 5(b). The ratio of the frequency comb dynamic current range ΔI_f to the entire laser dynamic range ΔI, from threshold to roll-over current, is estimated to be 25%, which is much wider than the previous demonstrations with about 10% comb dynamic range 13, 15. At higher currents above 940 mA, the FWHM of the beatnote spectrum increases from 15 kHz at 950 mA to 52.5 kHz at 1060 mA with an output power of 770 mW. Near the roll-over current of 1083 mA, the beatnote spectrum broadens to 400 kHz with a flat top, which indicates that the device operates in the high-phase-noise regime 19. The intermode beat frequency shifts as a function of current, with a tuning rate of 70 kHz/mA. This is because both the repetition frequency f_rep and the carrier-envelope offset frequency f_ceo are related to the group index n_g, and are sensitive to current-induced temperature changes. To realize a "ruggedized" broadband frequency comb for real-world applications, the comb spectrum and power will be stabilized by phase-locking the device to a stabilized reference frequency comb source 23, or by adjusting the operating temperature using the measured beatnote frequency shift as feedback. Figure 5 (a) Beatnote spectra at different currents at 293 K. (b) Beatnote linewidth and frequency as functions of current. Beatnote spectra at currents of (c) 800 mA and (d) 938 mA at 293 K. Intermode beat spectroscopy was performed to examine whether all the laser modes participate in the comb operation and to gain insight into the respective phase and coherence properties of the comb modes 13, 15. After the laser light was guided into the Michelson interferometer of the FTIR and refocused onto the QWIP, beatnote spectra were recorded for each mirror position during a step scan.
Figure 6(a) shows the beatnote interferogram at a current of 938 mA acquired with a resolution bandwidth of 30 kHz, a scan range of 2 MHz and a step-scan resolution of 4 cm⁻¹. The peak beatnote powers measured by the spectrum analyzer and the biased QWIP currents at different mirror positions are plotted together in Fig. 6(b). Like the previous demonstrations, a minimum of the intermode beat interferogram is observed near the zero-time-delay position, indicating a well-defined phase relation between the modes. The Fourier-transformed intermode beat spectrum is almost identical to the intensity spectrum, as shown in Fig. 6(c). Nearly the entire lasing spectrum, with a spectral bandwidth of 95 cm⁻¹, contributes to the intermode beating and frequency comb formation. To further increase the spectral bandwidth, even up to the octave range for mid-IR frequency comb sources, a broader-gain design based on a balanced-gain heterogeneous active structure 24 with a double-chirped mirror for dispersion engineering 14 will be investigated in the next research stage. Figure 6 (a) Intermode beat spectroscopy taken at 938 mA. (b) Intermode beat interferogram (red) and intensity interferogram (black) measured with intermode beat spectroscopy. (c) Fourier-transformed intermode beat spectrum (red) and intensity spectrum (black). Conclusions In conclusion, we demonstrate a frequency comb source based on a dispersion-compensated quantum cascade laser at λ ~ 8 μm with a high power output of up to 880 mW for ~290 modes and a spectral coverage of 110 cm⁻¹. The wall-plug efficiency is 6.5%, enhanced by a factor of 6 compared with previous results. Extremely narrow beatnote linewidths of less than 305 Hz are identified over a wide current range of 25% of the total laser dynamic current range. The demonstrated monolithic, high-power-efficiency frequency comb source will find wide applications, especially in remote spectroscopy and sensing, where high power output is most desired. Methods Growth and fabrication The QCL structure presented in this work is based on the strain-balanced Al0.63In0.37As/Ga0.35In0.65As/Ga0.47In0.53As material system grown by gas-source molecular beam epitaxy (MBE) on an n-InP substrate. The growth started with a 2-μm InP buffer layer (Si, ~2 × 10¹⁶ cm⁻³). The laser core consisted of a dual-core strain-balanced single-phonon resonance (SPR) structure, with 20 stages for each wavelength design. The average doping of the active region is ~2.5 × 10¹⁶ cm⁻³. The MBE growth ended with a 30-nm-thick InP cladding layer (Si, ~2 × 10¹⁶ cm⁻³). Metal-organic chemical vapor deposition (MOCVD) was then used for the growth of a 4.5-μm-thick InP cladding layer (Si, ~2–5 × 10¹⁶ cm⁻³) and a 0.5-μm-thick InP cap layer (Si, ~5 × 10¹⁸ cm⁻³). The wafer was processed into a standard buried-ridge geometry with a ridge width of 7 μm. A 4-mm-long QCL frequency comb device was high-reflection coated with Y₂O₃/Au (500/100 nm) and epi-down mounted on a diamond submount for characterization. Testing was done on a thermoelectric cooler (TEC) stage at room temperature. For continuous-wave (CW) measurements, the optical power was measured with a calibrated thermopile detector placed directly in front of the laser facet. Gain and GVD measurement The gain and GVD spectra are acquired by using a Fourier transform technique 21. The characterization is done by measuring the spontaneous emission of the QCL device under subthreshold CW operation with an FTIR.
The emitted beam is split into two by the FTIR beam-splitter, reflected back by the two mirrors, and recombined onto the MCT detector. The electric fields of the two beams can be expressed as E₁ = E₀exp[i(ωt − ωz₁/c + φ₁)] (1) and E₂ = E₀exp[i(ωt − ωz₂/c + φ₂)] (2). Here z₁ and z₂ are the beam travelling distances, φ₁ and φ₂ are their phases, and c is the speed of light in vacuum. Hence the total intensity of the two light beams detected by the MCT detector is I(Δz) ∝ 2E₀²[1 + cos(ωΔz/c + Δφ)] (3). Here Δz/c is the time delay between the two beams, and Δφ is their relative phase. The Fourier transform of equation (3) for burst 1 labelled in Fig. 2(a) generates the ASE spectrum and the relative phase spectrum after a single round trip inside the cavity. The GVD is therefore deduced by taking the second derivative of the relative phase with respect to angular frequency and dividing by the round-trip travelling distance D: GVD = (1/D)d²Δφ/dω² (4). The ratio of two adjacent ASE spectra follows the relation 25 S_{k+1}(ν)/S_k(ν) = R₁R₂exp[2(gΓ − α_w)L] (5), where gΓ is the modal gain, R₁ and R₂ are the facet reflectivities, α_w is the waveguide loss, and L is the cavity length. Therefore, the modal gain is calculated by using the ASE spectra transformed from bursts 1 and 3 to average out some of the noise: gΓ = α_w − ln(R₁R₂)/(2L) + ln[S₃(ν)/S₁(ν)]/(4L) (6). Additional Information How to cite this article: Lu, Q. et al. High efficiency quantum cascade laser frequency comb. Sci. Rep. 7, 43806; doi: 10.1038/srep43806 (2017). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | Northwestern University researchers have designed a quantum cascade laser (QCL) frequency comb that is dramatically more efficient than previous iterations. Led by Manijeh Razeghi, researchers in Northwestern's Center for Quantum Devices theoretically designed and experimentally synthesized a new, strain-engineered emitter material. Made with the new material, the compact QCL frequency comb is one order of magnitude more efficient and emits more than four times the output power of all previous demonstrations. Razeghi's QCL frequency comb operates in the infrared spectral region, which is useful for detecting many different kinds of chemicals, including industrial emissions, explosives, and chemical warfare agents. "We are seeing the beginning of a true revolution in compact gas sensor technology," said Razeghi, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Imagine a handheld system that can detect trace amounts of hazardous chemicals in a fraction of a second." Supported by the National Science Foundation, Department of Homeland Security, Naval Air Systems Command, and NASA, the research was published online today in Scientific Reports. A revolutionary player in fundamental science, a frequency comb is a light source that emits a spectrum containing a series of discrete, equally spaced frequency lines. The exact spacing of frequencies is key to manipulating light for various applications and has led to new technologies in diverse fields, including medicine, communications, and astronomy. Today, frequency combs span vast frequencies of light from terahertz to visible to extreme ultraviolet. "Since the direct frequency comb was generated by using a mode-locked femtosecond laser in the 1990s, various techniques have been used to produce frequency combs," Razeghi said. "But each of these techniques requires multiple optical components. This is neither compact nor convenient." Razeghi's work has made it possible to generate a frequency comb from a single optoelectronic component just a few millimeters in length.
The resulting QCL frequency comb is incredibly compact and emits more than 300 equally spaced frequency lines, spanning a range of 130 inverse centimeters (cm⁻¹). "The system is based on a mass-producible component with no moving parts," Razeghi said, "which is attractive in terms of both cost and durability." Razeghi's group is currently looking for ways to further increase the spectral range of its QCL frequency combs. This includes searching for ways to make a chip-scale, room-temperature, terahertz frequency comb, which would enable new applications in non-destructive package evaluation and biomedical imaging. | 10.1038/srep43806 |
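The burst-phase GVD analysis described in the paper's Methods above reduces to a few numpy operations. This sketch is our illustration of that procedure; the windowing choice, variable names and unit handling are ours, not the authors' analysis code.

```python
import numpy as np

def gvd_from_interferogram(signal, tau, burst_slice, D):
    """GVD from burst 1: signal vs optical time delay tau (s); D = round-trip distance."""
    w = np.zeros_like(signal)
    w[burst_slice] = np.hanning(burst_slice.stop - burst_slice.start)  # isolate the burst
    spec = np.fft.rfft(signal * w)                  # complex ASE spectrum of the burst
    omega = 2 * np.pi * np.fft.rfftfreq(signal.size, d=tau[1] - tau[0])
    phase = np.unwrap(np.angle(spec))               # relative phase vs angular frequency
    return np.gradient(np.gradient(phase, omega), omega) / D  # GVD = phi''(omega) / D
```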
Biology | When strains of E.coli play rock-paper-scissors, it's not the strongest that survives | Michael J. Liao et al. Survival of the weakest in non-transitive asymmetric interactions among strains of E. coli, Nature Communications (2020). DOI: 10.1038/s41467-020-19963-8 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-19963-8 | https://phys.org/news/2020-12-strains-ecoli-rock-paper-scissors-strongest-survives.html | Abstract Hierarchical organization in ecology, whereby interactions are nested in a manner that leads to a dominant species, naturally results in the exclusion of all but the dominant competitor. Alternatively, non-hierarchical competitive dynamics, such as cyclical interactions, can sustain biodiversity. Here, we designed a simple microbial community with three strains of E. coli that cyclically interact through (i) the inhibition of protein production, (ii) the digestion of genomic DNA, and (iii) the disruption of the cell membrane. We find that intrinsic differences in these three major mechanisms of bacterial warfare lead to an unbalanced community that is dominated by the weakest strain. We also use a computational model to describe how the relative toxin strengths, initial fractional occupancies, and spatial patterns affect the maintenance of biodiversity. The engineering of active warfare between microbial species establishes a framework for exploration of the underlying principles that drive complex ecological interactions. Introduction Inter-species interactions form a complex web that drives ecological dynamics 1, 2, 3, 4, 5. Competition, in particular, has been hypothesized as a driving force for the evolution and maintenance of biodiversity within various ecosystems 6, 7, 8. As opposed to a hierarchical competitive structure, previous theoretical studies have shown that species diversity may be promoted by cyclical non-transitive interactions, which describe interactions where there is no single best competitor, but rather the network of species competition resembles a loop 9, 10, 11, 12, 13, 14, 15. The most simplified example of this system can be described as a basic game of rock–paper–scissors. In this system, rock beats scissors, scissors cuts paper, and paper beats rock, resulting in cyclic competition with no hierarchical organization. Ecologies based on this type of interaction have been observed in various natural settings, ranging from desert lizards 12 and coastal reef invertebrates 16 to bacterial communities 11, 17, 18. Of particular interest within this field of ecology is to better understand the underlying mechanisms that contribute to the stability of these ecosystems 17, 18, 19, 20. While the application of theory in ecology is still limited, the principles of a non-transitive ecology were first demonstrated experimentally using three isolated bacterial strains consisting of a toxin-producing, a toxin-sensitive, and a toxin-resistant strain 17. In this natural rock–paper–scissors ecology, the toxin producer could kill the toxin-sensitive strain, the toxin-sensitive strain could outgrow the toxin-resistant strain, and likewise the toxin-resistant strain could outgrow the toxin-producing strain. This simple, non-transitive triplet demonstrated that when interactions among the strains remained local, cyclic competition could maintain species diversity.
However, because this study only focused on a short observation duration of 7 days, the ability of non-transitive competition to maintain biodiversity over an ecologically relevant duration remains unclear. Additionally, because this previous study did not provide a characterization of the relative competitive advantages of each strain, it is difficult to relate this system with other non-transitive ecologies. Intuitively, one would expect competitive asymmetry within such an ecology. For example, a toxin-producing strain might outcompete a toxin-sensitive strain at a faster rate than a toxin-sensitive strain could outgrow a toxin-resistant strain. Due to this potential asymmetry of competitive advantages within this system, it is unlikely that coexistence would be maintained over a longer experimental duration. In this work, we investigate how asymmetric competitive advantages affect a three strain non-transitive ecology, expanding upon the previous experimental work using rationally engineered and well-characterized strains of E. coli . Specifically, we engineer three strains of E. coli to produce and release three different toxins that kill other members of the same species that lack the production of the corresponding immunity protein. In order to create a one directional rock–paper–scissors competition dynamic, each strain is engineered to produce an additional second immunity protein, providing immunity to one other strain in the ecology. The competitive relationships between each of the strain pairs are characterized in pairwise competition in order to establish relative competitive advantages and identify competitive asymmetry within the system. Using this characterized model microbial system, we explore the outcome of non-transitive competition in a solid growth medium environment (agar) where competitive interactions remain local. We find that given equal starting fractions among the three strains, the strain with the weakest competitive advantage consistently dominates the ecosystem. Additionally, we develop a computational model to investigate the relationship between asymmetric competition and parameter space. In particular, we examine the effect of varying the relative competitive strengths among the three strains and find that within a certain parameter space, a steady-state three-strain equilibrium can be reached by the ecology, albeit at different fractional occupancies. Finally, we demonstrate that there are initial conditions such as different patterns of initial distributions that can contribute to, or prevent, the establishment of steady-state coexistence. Results We created a synthetic ecology of three bacterial populations which compete with each other through cyclic non-transitive interactions 21 . In order to create this three-strain ecology where each E. coli strain possesses a competitive advantage over another, we used a class of naturally occurring peptide toxins called colicins. These peptides are lethal to certain E. coli strains and provide the producing strain with the ability to inhibit growth of competing strains 22 . Among colicins, the mechanisms of action fall within three major categories: disruption of protein production, degradation of DNA, and disruption of the cell membrane 22 . In order to mimic this natural competition diversity, we designed each strain to produce a different colicin that acts through each of the three mechanisms. 
Additionally, in order to establish a cyclic rock–paper–scissors competitive relationship, each strain was given a second immunity to the toxin produced by the targeted strain (Fig. 1 a). In this system, Strain Red (R) contains a plasmid producing colicin E3 and the E3 immunity protein, a fluorescence reporter protein, a lysis protein (for toxin release), and a secondary E7 immunity protein. Strain Green (G) contains the same plasmid structure expressing colicin E7, the E7 immunity protein, and a secondary Col V immunity protein. Strain Blue (B) contains the wild-type Col V operon consisting of the toxin ( cvaC ), immunity ( cvi ), as well as export proteins ( cvaA and cvaB ) 23 , and a secondary E3 immunity protein (Supplementary Fig. 1 ). Because the three colicins act through different mechanisms, inadvertent cross-immunity due to structural or functional homology between related colicins was minimized. Fig. 1: Cyclic non-transitive dynamics enable biodiversity. a Diagram of the engineered ecology strains including one toxin and two immunity genes. Each toxin produced targets a different essential biological component of E. coli cells. b The three strains were combined at an equal ratio and immediately plated on agar. Each day corresponds to a passage obtained through replica plating. Image stills were obtained by fluorescence imaging. This experiment was executed one time at this density, but it was repeated at multiple densities as shown in Supplementary Fig. 2 . Full size image The first experiment we carried out to investigate our three-strain ecology consisted of mixing the strains, from liquid cultures, at an equal ratio and plating them on a static agar plate environment. The choice to carry out all our experiments on solid media was based on the previously reported observation that local interactions and dispersal promote diversity within a community 17 . After incubation, the bacterial lawn on the agar plate was passaged by replica plating every 24 h to guarantee a constant source of fresh nutrients and uncolonized space to invade (Fig. 1 b). We tested a range of initial plating densities spanning a 10,000-fold change (Supplementary Fig. 2 a). We found that the duration of coexistence was inversely proportional to the initial plating densities (Supplementary Fig. 2 b). This outcome is attributed to the ability of the lower-density cells to establish larger colonies due to the higher availability of free space surrounding them. In this scenario, cells at the colony boundary are able to shield cells in the interior from toxin exposure. Interestingly, we observed one scenario (1–1000 dilution) in which all three strains coexisted for the entire duration of the 30-day experiment (Supplementary Fig. 2 c and Supplementary Movie 1 ). In this scenario, Strain B was nearly able to completely colonize the agar plate; however, a small patch of Strain G cells managed to survive the initial attack from Strain R. As a result, Strain G was able to invade into space colonized by Strain B, and Strain R could invade into the newly acquired area of Strain G. Nevertheless, in the four other scenarios we tested, the system always converged to a single winner, Strain B (Supplementary Movie 2 ). We wondered whether the survival of Strain G in the 1–1000 dilution was an outlier that occurred because of random distributions in the initial plating, or whether coexistence of this system of three strains was an expected outcome under these conditions.
Therefore, we designed an experimental protocol that guaranteed placement of the three strains in individual colonies equally spaced from each other, like the intersections of a grid. In order to achieve precise control of our experimental conditions, we used a liquid handling robot to array the three strains into equally spaced grids of varying initial densities. After arraying the cells into grid format on agar plates, the strains were grown for 24 h, then passaged every subsequent 24 h by replica plating (Fig. 2 a). The results of this experiment were in agreement with our previous observations. As before, the initial density was inversely related to the duration of coexistence of the three strains, and Strain B was able to take over the agar plate (Fig. 2 b and Supplementary Fig. 3 ). Because competition within microbial communities may be affected by differences in growth rate, we first measured the growth rate of each RPS strain and found that the strains had similar growth rates (Supplementary Fig. 4 a). Therefore, we hypothesized that the reason Strain B was consistently winning was due to asymmetry in the potency of the toxins produced by the three competing strains. In order to determine the source of this asymmetry, we characterized the interactions between each pair of competing strains using the same grid pattern arrangement (Fig. 2 d). We found that Strain R produced the most lethal toxin, as it was able to completely eliminate Strain G prior to day 5. The next-strongest toxin, produced by Strain G, was able to completely eliminate Strain B by day 6. Finally, we found Strain B to be the weakest toxin producer, taking 7 days to completely eliminate Strain R (Fig. 2 e). The resulting ranking of relative toxin strengths was also demonstrated in kill curve experiments conducted in liquid culture (Supplementary Fig. 4 b, c). Therefore, we confirmed the asymmetry in the system, whereby Strain R was shown to be the strongest and Strain B the weakest (Fig. 2 f). Following this observation, our hypothesis was that since Strain R was the strongest toxin-producing strain, it was quickly and completely eliminating Strain G. As a result, when Strain B and Strain R were the only two left competing, Strain B was eventually able to win due to its immunity and toxicity to Strain R. In this first hypothesis, Strain B would win due to its position as the enemy of the strongest strain (Supplementary Fig. 5 a). However, we also considered the counterintuitive possibility that Strain B was consistently winning because it was producing the weakest toxin (Supplementary Fig. 5 b). In order to test this theory, we created a new three-strain ecology (RPS-2) in which we reversed the order of the rock–paper–scissors competitive relationship by swapping the secondary immunity genes for each strain. With these new constructs, Strain R2 now attacked Strain B2, Strain B2 attacked Strain G2, and Strain G2 attacked Strain R2. Although we reversed the order of competition by reversing the secondary immunity, the relative toxin strengths remained the same (Fig. 3 a and Supplementary Fig. 6 ). Therefore, while Strain B2 remained the weakest strain, the enemy of the strongest then became Strain G2. If our initial hypothesis of the winner being the enemy of the strongest were correct, we would expect Strain G2 to eventually take over the plate.
Alternatively, if "predominance of the weakest" were the driving force, then we would expect Strain B2 to win in RPS-2 as well (Supplementary Fig. 5 b). Simultaneously, we developed a lattice-based computational model to simulate competition of the three strains in a 150 × 150 square grid (Fig. 3 b). With the parameter values established in our kill curve experiments, simulations of our computational model agreed with our previous experimental results, demonstrating that Strain B should outcompete the other strains (Supplementary Movie 2 ). The simulations also confirmed our observation regarding the relationship between duration of coexistence and initial density (Supplementary Fig. 7 ). Furthermore, when simulating the RPS-2 system, the model predicted that the eventual winner of the RPS-2 ecology would still be Strain B, confirming the idea of "predominance of the weakest" (Fig. 3 c and Supplementary Movie 4 ), an outcome that had been predicted by previous theoretical work 24 , 25 , 26 , 27 , 28 , 29 . As predicted, we found experimentally that the final winner of the RPS-2 ecology was indeed Strain B2 (Fig. 3 d, Supplementary Fig. 8 and Supplementary Movie 5 ). This outcome occurs because the weakest strain allows its target (Strain R in RPS-1 and Strain G2 in RPS-2) to rapidly expand and eliminate the third strain in the system (Strain G in RPS-1 and Strain R in RPS-2). When only two strains are left, the "weakest" strain is then able to slowly take over due to the rock–paper–scissors dynamics (Supplementary Fig. 9 ). While we were enthusiastic to experimentally demonstrate the theory of "predominance of the weakest", we still wondered about the scenario we had observed in which none of the three strains was fully eliminated and the community seemed to have reached a stable coexistence. We therefore hypothesized that there must be alternative possible outcomes that arise with some probability for a given set of toxin parameters. Fig. 2: Characterization of the ecology and causes of asymmetry. a Schematic diagram of the method used to create an ordered grid of RGB colonies on an agar plate through the use of a liquid handling robot. On the right, images correspond to passages of the lowest-density grid (384). Passaging of plates was done by replica plating. This experiment was executed one time at this density and it was repeated at a higher density as shown in Supplementary Fig. 3 . b Quantification of strain coexistence as a function of time for three different initial densities plated in grid format. c Quantification of strain coexistence as a function of time for five different initial densities plated in a random distribution. d Paired competition on agar plates starting from an initial grid at 1536 density. e Quantification of time to takeover for each competing pair. f Diagram illustrating asymmetry in toxin strength in the RPS ecology. Strain R produces strong inhibition of Strain G, Strain G produces intermediate inhibition of Strain B, and Strain B produces weak inhibition of Strain R. Full size image Fig. 3: Characterization of reversed RPS ecology and computational model. a Diagrams of RPS (RPS-1) and RPS reversed (RPS-2) in comparison. The numbers represent the relative toxin strength obtained from kill curve experiments. Source data are provided as a Source Data file. b Pseudocode illustrates the rules behind the lattice model simulations. c Comparison of model simulations of RPS-1 (left) to model simulations of RPS-2 (right) with a grid of density corresponding to 1536.
d Comparison of experimental results of RPS-1 (left) to experimental results of RPS-2 (right) with density 1536. All lattice simulation plots were generated on a square lattice grid and subsequently trimmed into circular shapes to ease the comparison with experimental results. This experiment was executed one time at this density and it was repeated at a lower density as shown in Supplementary Figs. 3 and 8 . Full size image To further explore this concept, we investigated the relationship between toxin strength and the system steady state. We used our computational model to do a parameter sweep of toxin production strengths. For each combination of parameters, we ran 100 simulations. We found that "predominance of the weakest" was not always the most probable outcome. When keeping the toxin strength of both the weakest and strongest toxins fixed while varying the toxin strength of the intermediate strain, the most probable steady-state outcome was dependent on the intermediate toxin strength (Fig. 4 a). The model shows that if the toxin strength of the intermediate strain (Strain G) is close in value to the toxin production strength of the weakest toxin producer (Strain B), the most probable outcome is that the intermediate strain (Strain G) wins (Fig. 4 b). On the other hand, when the toxin production strength of the intermediate strain falls within an intermediate range, the model predicts the possibility of stable coexistence between the three strains at different fractional occupancies (Fig. 4 c and Supplementary Movie 6 ). Finally, if the toxin production strength of the intermediate strain is close in value to the strength of the strongest toxin producer (Strain R), then the weakest strain (Strain B) always wins as predicted by the hypothesis of "predominance of the weakest" (Fig. 4 d). Interestingly, we did not observe any case in which the strongest strain (Strain R) won. Our experimentally derived parameter values fall within the parameter space that predicts two possible outcomes. The most probable scenario is that Strain B wins, but there is also a small probability of the establishment of coexistence. In agreement with the model, we observed that on the plate that established coexistence, the three strains maintained different fractional occupancies, with Strain B having the highest fractional occupancy. These results highlight the value of the computational model for examining a wide variety of scenarios that would be too difficult or time-consuming to cover experimentally. To this end, we wanted to investigate how different initial fractional occupancies would affect the steady-state outcome. We found that, in grid format, the final outcome is highly influenced by the abundance of the weak strain, where an inverse relationship exists between the starting fractional occupancy of the weakest strain and its probability of takeover (Supplementary Fig. 10 ). We also explored how different patterns of initial strain distribution would affect the steady-state outcome of the competition in relation to toxin strength. Using the computational model, we simulated multiple scenarios with common spatial patterns, such as stripes, isolated clusters, and concentric circles (Fig. 4 e). We found that stripes led to similar results compared to the previously discussed case with the grid pattern in which the weakest strain is able to mostly dominate the system (Supplementary Movie 2 ).
In contrast, separating the strains into three different clusters resulted in a significant increase in the parameter ranges that could lead to coexistence (Supplementary Movie 2 ). As before, in both scenarios we did not observe any case in which the strongest strain (Strain R) could win. On the other hand, when initially distributed in the pattern of concentric rings, it was possible for the strongest strain (Strain R) to take over the plate (Supplementary Movie 2 ). This only occurs when the strongest strain is placed in the inner circle, shielded from the weakest strain (Strain B). This way, if the intermediate strain in the middle manages to kill off the outer strain before Strain R reaches it, then the strongest strain can win. As expected, we also demonstrate that this outcome is dependent upon the ratio between the ring dimensions and the rate of takeover of each strain (Supplementary Movie 2 ). Fig. 4: Model simulations exploring spatial patterns and parameter space. a In this simulation the starting condition is an ordered array of alternating strains. The bar plots show the outcome of 100 trials for multiple parameters of Pb (Probability of death of strain B). For all simulations, the probability of death of strain R (0.1) and strain G (0.5) are kept constant. b Time series representing the possible scenario where strain G wins (left-hand side of bar plot in part a ). Below, the corresponding image stills at multiple time points. c Time series representing the possible scenario where strains R, G and B coexist (middle of the bar plot in part a ). Below, the corresponding image stills at multiple time points. d Time series representing the possible scenario where strain B wins (right-hand side of bar plot in part a ). Below, the corresponding image stills at multiple time points. e We generated the same bar plots described in part a with different geometries as initial conditions. The bar plots show the outcome of 100 trials for multiple parameters of Pb (Probability of death of strain B). For all simulations, the probability of death of strain R (0.1) and strain G (0.5) are kept constant. Full size image Discussion Bacterial communities occupy a myriad of diverse niche environments, playing important roles in processes ranging from nutrient recycling 30 , 31 to the regulation of human health 32 , 33 , 34 . While the ability to engineer robust bacterial communities could lead to major advancements in fields such as recycling, sustainability, and healthcare, the mechanisms underlying species diversity and stability are not yet well understood. Although cooperative interactions between the species comprising such communities contribute to ecological stability 35 , 36 , 37 , it is generally accepted that competition is the guiding force 19 , 32 , 38 , 39 , 40 . However, due to the complexity of natural ecologies, these types of competitive interactions are difficult to isolate or quantify in nature. Our study demonstrates the feasibility of using an engineered synthetic ecology to simplify complex community relationships in order to study underlying mechanisms that may lead to community stability and the maintenance of diversity. Due to the immense species diversity and wide range of competitive strategies organisms employ in nature 19 , 40 , 41 , we hypothesized that natural competitive dynamics are likely to be unbalanced 42 , 43 .
Unlike a perfectly balanced game of rock–paper–scissors in which each of the three species could kill each other at an equal rate, we focus on characterizing an asymmetric system in which the relative competitive advantages among each predator–prey pair vary. While previous studies focused on the coexistence of non-transitive communities over a relatively short timeframe 17 , they did not allow the system to reach steady state, in which case it is likely that only one species would remain. Here, we demonstrate that intransitivity fails to promote biodiversity over a long time horizon when the relative competitive advantages are imbalanced 44 . Therefore, we believe that an asymmetric non-transitive ecology is a useful base model to study complex interactions among competing bacterial species. Using our three-strain ecology we experimentally demonstrate that a uniformly distributed asymmetric game of rock–paper–scissors is most likely to be won by the weakest species 24 , 25 , 26 , 27 , 28 , 29 . Interestingly, we show that an asymmetric ecology can develop steady-state coexistence, and that the relative toxin strengths among the three species dictate the extent of the coexistence space. Counterintuitively, under the same initial conditions of uniform distribution, our models predict that the producer of the most lethal toxin never wins. As opposed to pairwise competition, where the producer of the strongest toxin has a competitive advantage 6 , the producer of the strongest toxin in non-transitive communities is at an evolutionary disadvantage. This could be an important selective force against continuous evolution of ever-more lethal warfare chemicals in microorganisms, resulting in increased diversity of chemicals that are constrained to a specific toxin strength parameter space. The role that toxin-mediated competition plays in community stability may also explain the observed relative abundance of membrane-targeting, DNA-targeting, or ribosome-targeting bacterial toxins among bacterial communities 45 . Finally, we also observe that the steady-state outcomes of the system can be altered by changing the initial strain distribution patterns. For example, we find that separate blocks in a triangular conformation can greatly expand the parameter space for coexistence, supporting the idea that spatially separated niches are more likely to sustain biodiversity 46 , 47 , 48 . On the other hand, strains initially distributed in concentric circles can enable the strongest toxin producer to win. These results demonstrate that many factors need to be considered if the goal is to design stable synthetic ecologies in an environment where interactions are local 6 . Overall, this study provides a mathematical model and engineering framework to study competitive interactions, gain mechanistic insight, and, ultimately, predictive power that can be used as a guide to design stable communities. Methods Strains and plasmids Our strains were cultured in lysogeny broth (LB) medium with 50 μg ml −1 spectinomycin for the WT and TP strains, in a 37 °C shaking incubator. The plasmids used in this study are described in Supplementary Fig. 1 . The colicin E3, Im3, colicin E7, Im E7, and E1 lysis genes were taken from previously used plasmids 21 using PCR and assembled with Gibson assembly 49 . The 4.5 kb colicin V expression cassette comprising CvaA , CvaB , CvaC , and cvi was isolated from the pHK11 plasmid from the wild type colicin V strain ZK503 by PCR 23 .
All plasmids were transformed into DH5 α (Thermofisher) chemically competent E. coli and verified by Sanger sequencing before transformation into E. coli strain MG1655. The strains used in this study are described in Supplementary Table 1 . The gene sequences used in this study are described in Supplementary Table 2 . Growth rate For growth rate experiments, the appropriate E. coli strains were seeded from a −80 °C glycerol stock into 2 ml LB with the appropriate antibiotics and incubated in a 37 °C shaking incubator. After cells reached an OD600 of 0.1, 1 ml culture was added to a 125 ml Erlenmeyer flask containing 25 ml fresh medium with appropriate antibiotics and left shaking at 270 rpm. Once the samples reached an OD600 of 0.1, samples were taken every 20 min and measured at OD600 using a DU 740 Life Science UV/Vis spectrophotometer. Toxin validation To prepare colicin lysate, the colicin E3 E. coli strain was seeded from a −80 °C glycerol stock into 2 ml LB and incubated in a 37 °C shaking incubator. After cells reached an OD600 between 0.4 and 0.6, 1 ml of the grown culture was collected in a 2 ml Eppendorf tube and two cycles of incubation at 98 °C for 5 min followed by 10 min at −80 °C were performed. The resulting medium was then filtered and collected using a 0.22 μm syringe filter. For toxin co-culture experiments, wild type MG1655 E. coli strains were seeded from a −80 °C glycerol stock into 2 ml LB and incubated in a 37 °C shaking incubator. After cells reached an OD600 between 0.2 and 0.4, 5 μl culture was added to 200 μl fresh medium in a standard Falcon tissue culture 96-well flat-bottom plate. Additionally, 5 μl of the purified colicin lysate was added to each well. Cultures were grown at 37 °C shaking for 19 h and their absorbance at 600 nm was measured every 5 min with a Tecan Infinite M200 Pro. Plate passage experiments Each E. coli strain was seeded from a −80 °C glycerol stock into 5 ml LB with 50 μg ml −1 spectinomycin. After growth for 8–12 h at 37 °C in a shaking incubator, the culture was diluted 100-fold into 25 ml of the same medium in a 50 ml Erlenmeyer flask and grown until reaching an OD of 0.4 (Plastibrand 1.5 ml cuvettes were used). Strains were then diluted 1:10, 1:100, 1:1000, and 1:10,000. 20 μl of each strain was then plated into separate regions of 100 × 15 mm Petri dishes containing LB agar and 50 μg ml −1 spectinomycin. The strains were then mixed using 10 glass beads and incubated at 37 °C for 24 h. Plates were passaged every 24 h by replica plating onto a fresh agar plate containing the appropriate antibiotics. Liquid handling robot As before, each E. coli strain was seeded from a −80 °C glycerol stock into 5 ml LB with 50 μg ml −1 spectinomycin. After growth for 8–12 h at 37 °C in a shaking incubator, the culture was diluted 100-fold into 25 ml of the same medium in a 50 ml Erlenmeyer flask and grown until reaching an OD of 0.75. Upon reaching this OD, 5% glycerol was added to each culture and 45 μl of each prepared culture was then transferred into a single 384-well Labcyte plate. To plate the grid array, we used a Labcyte Echo liquid handling robot to transfer a 2.5 nl volume from the source well on the Labcyte well plate onto SBS-format PlusPlates containing 42 ml LB agar and 50 μg ml −1 spectinomycin. The transferred cells were arrayed according to 384, 1536, and 6144 well plate formats in order to create varying densities. The cells were then incubated at 37 °C for 24 h.
Colonies were transferred by replica plating from the SBS-format PlusPlates onto a standard 100 × 15 mm Petri dish containing LB agar and 50 μg ml −1 spectinomycin. Plates were passaged every 24 h by replica plating onto a fresh agar plate containing the appropriate antibiotics. Kill curve To prepare colicin lysate for each strain, the appropriate strains were seeded from a −80 °C glycerol stock into 5 ml LB and incubated in a 37 °C shaking incubator overnight. The overnight cultures were then passaged 1:100 into 2 ml LB and incubated in a 37 °C shaking incubator until reaching an OD600 of 1.0. 1 ml of the culture was then collected in a 2 ml Eppendorf tube, centrifuged at 21,130 rcf for 10 min and the supernatant was passed through a 0.22 μm syringe filter. For the kill curve assay, the appropriate strains were grown in a 37 °C shaking incubator to an OD600 between 0.3 and 0.4 in 5 ml LB in a 25 ml flask. 500 μl of the corresponding colicin lysate was then added to the flask. CFU measurements were taken by plating serial dilutions ( n = 3) and counting colonies. The first time point was taken immediately before each colicin was added, then every 10 min afterwards. For analysis, we took the difference in total live-cell count across the five time points (40 min) showing the maximum change, and divided it by the CFU count at the initial time point to calculate the percent cell death. The relative toxin strengths were then found by taking the ratios between the percent cell deaths for each strain pair. Image processing For plate imaging, a Syngene PXi fluorescent imager was used. Strains producing sfGFP were captured using the Blue LED Module for excitation and the SW032 emission filter, and strains producing mKate2 were captured using the Red LED Module excitation source and the Filt 705 emission filter. Images were processed using ImageJ. Images were converted to 8-bit and background subtracted. In order to assign the false color blue, we inverted the fluorescence values of the sfGFP image using the math function v = − v + v (mean). We then took the difference from the mKate2 image in order to create a mask of the negative space from both GFP and RFP. This image was assigned as the "Blue" channel for composite images. Modeling We developed a lattice-based model in Matlab to simulate the competition dynamics between the three strains R, G and B. The model was based on similar principles previously described in the literature 17 . The lattice is a 150 × 150 regular square lattice with zero boundary conditions. This means that the edges of the lattice are set to zero (absorbing boundary conditions), simulating the physical boundary of the Petri dish that prevents cells from expanding beyond it, as well as the effect of disregarding cells beyond the boundary of the replica plating. Therefore, the four edges of the square are kept at a value of zero (no cells can grow/expand in that direction) and the grid is updated only for the internal pixels (pixels 2 to N − 1 on all sides). The simulations in Figs. 2 and 3 were obtained by starting the lattice with a grid array of alternating strains. The remaining lattice points are classified as "empty space". The probability of death for each competing strain is associated with the relative potency of its enemy's toxin. pR, pG, and pB refer to the maximum probability of death of strain R, G, and B, respectively.
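The update rules applied at each pass over the lattice are described in the following paragraph; to make them concrete, here is a minimal Python sketch of the model (the authors' implementation is in Matlab and is linked below under Code availability). The kill relation shown, the example value for pB, the synchronous update, and the way the 0.05 baseline combines with the toxin term are our assumptions where the text leaves room.

import numpy as np

EMPTY, R, G, B = 0, 1, 2, 3
KILLED_BY = {R: B, G: R, B: G}    # RPS-1: B's toxin kills R, R's kills G, G's kills B
P_MAX = {R: 0.1, G: 0.5, B: 0.3}  # pR, pG, pB; pB = 0.3 is an arbitrary example value
P_BASELINE = 0.05                 # random death plus replica-plating loss
N = 150
rng = np.random.default_rng(0)

def step(grid):
    """One pass over the lattice; edge pixels stay 0 (absorbing boundary)."""
    new = grid.copy()  # synchronous update for simplicity
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            nbrs = grid[i - 1:i + 2, j - 1:j + 2].ravel().tolist()
            nbrs.pop(4)  # drop the focal pixel, keeping its 8 neighbours
            s = grid[i, j]
            if s == EMPTY:
                # Growth: fill in proportion to neighbour occupancy.
                p = [nbrs.count(k) / 8.0 for k in (R, G, B)]
                new[i, j] = rng.choice([R, G, B, EMPTY], p=p + [1.0 - sum(p)])
            else:
                enemies = nbrs.count(KILLED_BY[s])
                # Thresholded kill: half the maximum probability when at most
                # half surrounded, the full maximum when more than half.
                p_kill = P_MAX[s] if enemies > 4 else (P_MAX[s] / 2.0 if enemies else 0.0)
                # Treating the baseline as a floor on the death probability
                # is our assumption; it could also be additive.
                if rng.random() < max(p_kill, P_BASELINE):
                    new[i, j] = EMPTY
    return new

# Start from a coarse grid of alternating R, G, B colonies and iterate.
grid = np.zeros((N, N), dtype=int)
sites = [(i, j) for i in range(5, N - 5, 10) for j in range(5, N - 5, 10)]
for k, (i, j) in enumerate(sites):
    grid[i, j] = (R, G, B)[k % 3]
for _ in range(200):
    grid = step(grid)
print({"R": int((grid == R).sum()), "G": int((grid == G).sum()), "B": int((grid == B).sum())})

Sweeping pB while holding pR and pG fixed, as in Fig. 4, then amounts to rerunning this loop across a range of P_MAX[B] values and tallying the winners over repeated trials.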
For each time loop, the lattice array is scanned pixel by pixel (ignoring boundary pixels, which have a fixed value of 0) and is updated according to two main rules as shown in Fig. 3b . If the pixel considered is empty, the algorithm takes into account the relative occupancy of the eight neighboring pixels for the three strains R, G, or B. Three probabilities are calculated as the sum of locations occupied by each strain divided by the total neighboring locations (8). Finally, the Matlab function randsample is used to choose how to fill the spot according to the previously calculated probabilities. This process simulates expansion due to growth. On the other hand, if a given location is full, the strain is killed with a thresholded probability that is dependent on the number of surrounding enemies present. If the number of enemies is 4 or fewer (corresponding to being at most half surrounded), the probability of death is capped at half the maximum probability of death for the given strain. On the other hand, if the number of surrounding enemies is greater than 4, the probability of death corresponds to the maximum probability associated with the given strain. In addition, we set a baseline probability of death of 0.05, which cumulatively represents the random death of cells and their removal due to the replica plate passaging. For the simulations shown in Fig. 4 , each simulation was run for t = 10,000 and each parameter set was simulated 100 times. Each time point corresponds to a reproduction event (around 25 min). Therefore, the time chosen to investigate steady-state dynamics corresponds to about 170 days, which we established to be a long enough interval both computationally and biologically. In terms of spatial parameters, the distance between two consecutive pixels represents roughly 1 mm on the agar plate. Therefore, the entire grid represents a square with a side of about 10 cm. The relationship between grid densities on the agar plates and in the model is illustrated in Supplementary Fig. 7 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Authors can confirm that all relevant data are included in the paper and/or its supplementary information files. In addition, Source Data are provided with this paper. Plasmids and bacterial strains are available from the corresponding author upon request. Code availability All code is available on github at the following link . | Bacteria are all around us—not just in bathrooms or on kitchen counters, but also inside our bodies, including in tumors, where microbiota often flourish. These 'small ecologies' can hold the key to cancer drug therapies and learning more about them can help develop new life-saving treatments. What happens when different strains of bacteria are present in the same system? Do they co-exist? Do the strongest survive? In a microbial game of rock-paper-scissors, researchers at the University of California San Diego's BioCircuits Institute uncovered a surprising answer. Their findings, titled "Survival of the weakest in non-transitive asymmetric interactions among strains of E. coli," appeared in a recent edition of Nature Communications. The research team consisted of Professor of Bioengineering and Molecular Biology Jeff Hasty; Michael Liao and Arianna Miano, both bioengineering graduate students; and Chloe Nguyen, a bioengineering undergraduate. They engineered three strains of E.
coli (Escherichia coli) so that each strain produced a toxin that could kill one other strain, just like a game of rock-paper-scissors. When asked how the experiment came about, Hasty commented, "In synthetic biology, complex gene circuits are typically characterized in bacteria that are growing in well-mixed liquid cultures. However, many applications involve cells that are restricted to grow on a surface. We wanted to understand the behavior of small engineered ecologies when the interacting species are growing in an environment that is closer to how bacteria are likely to colonize the human body." The researchers mixed the three populations together and let them grow on a dish for several weeks. When they checked back they noticed that, across multiple experiments, the same population would take over the entire surface—and it wasn't the strongest (the strain with the most potent toxin). Curious about the possible reasons for this outcome, they devised an experiment to unveil the hidden dynamics at play. A computer model of three strains of E.coli, placed in clusters, to see which strain will dominate. Research conducted by UC San Diego's BioCircuits Institute. Credit: BioCircuits Institute/UC San Diego There were two hypotheses: either the medium population (called "the enemy of the strongest" as the strain that would attack the strongest) would win or the weakest population would win. Their experiment showed that, surprisingly, the second hypothesis was true: the weakest population consistently took over the plate. Going back to the rock-paper-scissors analogy, if we assume the 'rock' strain of E.coli has the strongest toxin, it will quickly kill the 'scissors' strain. Since the scissors strain was the only one able to kill the 'paper' strain, the paper strain now has no enemies. It's free to eat away at the rock strain slowly over a period of time, while the rock strain is unable to defend itself. To make sense of the mechanism behind this phenomenon, the researchers also developed a mathematical model that could simulate fights between the three populations by starting from a wide variety of patterns and densities. The model was able to show how the bacteria behaved in multiple scenarios with common spatial patterns such as stripes, isolated clusters and concentric circles. Only when the strains were initially distributed in the pattern of concentric rings with the strongest in the middle was it possible for the strongest strain to take over the plate. It is estimated that microbes outnumber human cells 10 to 1 in the human body and several diseases have been attributed to imbalances within various microbiomes. Imbalances within the gut microbiome have been linked to several metabolic and inflammatory disorders, cancer and even depression. The ability to engineer balanced ecosystems that can coexist for long periods of time may enable exciting new possibilities for synthetic biologists and new healthcare treatments. The research that Hasty's group is conducting may help lay the foundation to one day engineer healthy synthetic microbiomes that can be used to deliver active compounds to treat various metabolic disorders or diseases and tumors. Vice Chancellor for Research Sandra Brown said, "Bringing together molecular biology and bioengineering has allowed discovery with the potential to improve the health of people around the world. This is a discovery that may never have occurred if they weren't working collaboratively.
This is another testament to the power of UC San Diego's multidisciplinary research." | 10.1038/s41467-020-19963-8 |
Medicine | Loneliness and isolation linked to heightened risk of heart disease / stroke | Loneliness and social isolation as risk factors for coronary heart disease and stroke: systematic review and meta-analysis of longitudinal observational studies, DOI: 10.1136/heartjnl-2015-308790 Editorial: Loneliness and social isolation as risk factors for CVD: implications for evidence based patient care and scientific enquiry, doi: 10.1136/heartjnl-2015-309242 Journal information: Heart | http://dx.doi.org/10.1136/heartjnl-2015-308790 | https://medicalxpress.com/news/2016-04-loneliness-isolation-linked-heightened-heart.html | Abstract Background The influence of social relationships on morbidity is widely accepted, but the size of the risk to cardiovascular health is unclear. Objective We undertook a systematic review and meta-analysis to investigate the association between loneliness or social isolation and incident coronary heart disease (CHD) and stroke. Methods Sixteen electronic databases were systematically searched for longitudinal studies set in high-income countries and published up until May 2015. Two independent reviewers screened studies for inclusion and extracted data. We assessed quality using a component approach and pooled data for analysis using random effects models. Results Of the 35 925 records retrieved, 23 papers met inclusion criteria for the narrative review. They reported data from 16 longitudinal datasets, for a total of 4628 CHD and 3002 stroke events recorded over follow-up periods ranging from 3 to 21 years. Reports of 11 CHD studies and 8 stroke studies provided data suitable for meta-analysis. Poor social relationships were associated with a 29% increase in risk of incident CHD (pooled relative risk: 1.29, 95% CI 1.04 to 1.59) and a 32% increase in risk of stroke (pooled relative risk: 1.32, 95% CI 1.04 to 1.68). Subgroup analyses did not identify any differences by gender. Conclusions Our findings suggest that deficiencies in social relationships are associated with an increased risk of developing CHD and stroke. Future studies are needed to investigate whether interventions targeting loneliness and social isolation can help to prevent two of the leading causes of death and disability in high-income countries. Study registration number CRD42014010225. Introduction Adults who have few social contacts (ie, who are socially isolated) or feel unhappy about their social relationships (ie, who are lonely) are at increased risk of premature mortality.
1 The influence of social relationships on mortality is comparable with well-established risk factors, including physical activity and obesity. 2 Yet, compared with our understanding of these risk factors, we know much less about the implications of loneliness and social isolation for disease aetiology. Researchers have identified three main pathways through which social relationships may affect health: behavioural, psychological and physiological mechanisms. 3 , 4 Health-risk behaviours associated with loneliness and social isolation include physical inactivity and smoking. 5 Loneliness is linked to lower self-esteem and limited use of active coping methods, 6 while social isolation predicts decline in self-efficacy. 7 Feeling lonely or being socially isolated is associated with defective immune functioning and higher blood pressure. 8 , 9 This evidence suggests that loneliness and social isolation may be important risk factors for developing disease, and that addressing them would benefit public health and well-being. The aim of this study was to investigate the size of the association between deficiencies in social relationships and incident coronary heart disease (CHD) or stroke, the two greatest causes of burden of disease in high-income countries. 10 We conducted a systematic review to answer the following primary question: are deficiencies in social relationships associated with developing CHD and stroke in high-income countries? Our secondary objectives included investigating whether loneliness or social isolation was differentially associated with incident heart disease and stroke, and whether the association between social relationships and disease incidence varied according to age, gender, marital status, socioeconomic position, ethnicity and health. Methods This study followed the Centre for Reviews and Dissemination's Guidance for undertaking reviews in healthcare. 11 A protocol was registered with the International Prospective Register of Systematic Reviews (registration number: CRD42014010225). 12 Study eligibility criteria To meet inclusion criteria, studies had to investigate new CHD and/or stroke diagnosis at the individual level as a function of loneliness and/or social isolation. We defined CHD as encompassing the diagnoses listed under codes I20–I25 of the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), and stroke as ICD-10 codes I60–I69. We excluded studies where CHD or stroke diagnosis was not the first instance of diagnosis among participants, except where analyses controlled for previous events. We applied no other exclusion criteria regarding study population. Measures of social relationships met inclusion criteria for loneliness if they were consistent with its definition as a subjective negative feeling associated with someone's perception that their relationships with others are deficient. 13 Measures of social isolation had to be consistent with its definition as a more objective measure of the absence of relationships, ties or contact with others. 14 We focused on longitudinal studies in order to investigate the temporal relationships between loneliness or isolation and subsequent disease. Our purpose was to clarify the public health challenge posed by deficiencies in social relationships in high-income countries, 15 so we excluded all other settings. We applied no language, publication type or date restrictions to inclusion.
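As a concrete illustration of the outcome definition above (a hypothetical helper of ours, not part of the review), mapping ICD-10 codes to the two outcome groups amounts to a simple range check:

from typing import Optional

def classify_outcome(icd10_code: str) -> Optional[str]:
    """Map an ICD-10 code to the review's outcome groups:
    I20-I25 -> CHD, I60-I69 -> stroke, anything else -> None."""
    code = icd10_code.strip().upper()
    if len(code) < 3 or code[0] != "I" or not code[1:3].isdigit():
        return None
    n = int(code[1:3])
    if 20 <= n <= 25:
        return "CHD"
    if 60 <= n <= 69:
        return "stroke"
    return None

assert classify_outcome("I21.4") == "CHD"     # acute myocardial infarction
assert classify_outcome("I63.9") == "stroke"  # cerebral infarction
assert classify_outcome("J45") is None        # outside both ranges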
Search strategy and selection criteria We searched 16 electronic databases for published and grey literature up until May 2015: MEDLINE, EMBASE, CINAHL Plus, PsycINFO, ASSIA, Web of Science, Cochrane Library, Social Policy and Practice, National Database of Ageing Research, Open Grey, HMIC, ETHOS, NDLTD, NHS Evidence, SCIE and National Institute for Health and Care Excellence (NICE). Thesaurus and free text terms (eg, loneliness, social isolation, social relationships, social support, social network) were combined with filters for observational study designs and tailored to each database. The search strategy included no health terms, as it aimed to capture all disease outcomes, rather than focus on CHD and stroke. For the full electronic strategy used to search MEDLINE, see online supplementary appendix 1. To complement the electronic search, we screened reference lists, searched for citations in Scopus (the largest database of abstracts and citations) and contacted topic experts identified through the UK Campaign to End Loneliness' Research Hub. After removing duplicates, two researchers independently screened titles and abstracts before assessing full records using a standardised screening sheet. Additional information was sought from authors when necessary (3 (60%) responded). When authors did not reply, we searched for information from related publications to inform our decision. Data extraction and quality assessment Data were extracted into a standardised form by one researcher, and checked by a second. Study authors were contacted to obtain missing data. Based on the Agency for Healthcare Research and Quality framework and taxonomy of threats to validity and precision, 16 we selected the following domains as relevant for assessing studies: sampling bias, non-response bias, missing data, differential loss to follow-up, information error with regard to exposure and outcome measure, detection bias, confounding and study size. We identified age, gender and socioeconomic status as potential confounders (ie, factors correlated with exposure, predictive of outcome and not on the causal pathway). 17 , 18 No studies were excluded due to quality; instead, subgroup and sensitivity analyses were performed, to test the stability of findings according to internal validity. Quantitative synthesis We hypothesised that social relationships were associated with disease incidence, and that this association may differ according to the dimension of relationships measured, and individual-level and contextual-level factors. A preliminary synthesis was developed by grouping study characteristics and results according to their measure of relationships. The majority of papers reported relative hazards of new diagnosis, comparing people with higher versus lower levels of loneliness or social isolation. Since incidence of disease was low (<10%) in the three studies reporting ORs, these estimates were approximated to relative risks. 19 Where the lonely or isolated group was used as the reference, results were transformed to allow comparison across studies. Patterns identified in the preliminary synthesis were formally investigated. Only papers for which an effect estimate and SE or CI were available (reported in the paper or provided by contacted authors), or could be calculated, contributed to this stage of the analysis.
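To make the synthesis steps concrete (the selection rules and pooling model are described in the next paragraph), here is a minimal Python sketch with made-up study numbers rather than the review's data. It recovers log relative risks and SEs from reported CIs, quantifies heterogeneity with Cochran's Q and the I² statistic, and pools with a DerSimonian–Laird random-effects model, the standard random-effects estimator in software such as RevMan. On the OR-to-RR approximation above: RR = OR / (1 − p₀ + p₀ × OR) for baseline risk p₀, so with incidence below 10% the two are close.

import math

# Hypothetical studies: (relative risk, lower 95% CI, upper 95% CI).
studies = [(1.5, 1.1, 2.0), (1.1, 0.8, 1.5), (1.4, 1.0, 2.0)]

log_rr = [math.log(rr) for rr, lo, hi in studies]
# SE on the log scale recovered from the CI: (ln(hi) - ln(lo)) / (2 * 1.96).
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w = [1.0 / s**2 for s in se]  # fixed-effect (inverse-variance) weights

# Cochran's Q and I^2 quantify between-study heterogeneity.
fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau^2,
# then random-effects weights and the pooled estimate.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1.0 / (s**2 + tau2) for s in se]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se_p = math.sqrt(1.0 / sum(w_re))
print(f"I^2 = {i2:.0f}%, pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_p):.2f} to {math.exp(pooled + 1.96 * se_p):.2f})")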
Where several papers reported results from the same cohort, we privileged the findings with the longest follow-up time. If a study included multiple measures of exposure and/or outcome, we selected the result relating to the most comprehensive measure. Where a study used statistical controls to calculate an effect size, we extracted data from the most complex model to minimise risk of confounding. All effect sizes were transformed to the natural log for analyses. Using Revman V.5.3 (Review Manager (RevMan) Version 5.3 [program]. Copenhagen: The Nordic Cochrane Centre, 2014), CHD and stroke effect estimates were plotted in separate forest plots, and heterogeneity between studies was assessed using the I² statistic. Potential sources of variation were explored with prespecified subgroup analyses. Since heterogeneity could not be explained and removed based on these analyses, but we deemed studies sufficiently similar to warrant aggregation, we combined results using random effects models. This approach allows for between-study variation, and is consistent with our assumption that the effects estimated in the different studies were not identical, since they investigated different dimensions of social relationships and derived from different populations. Finally, sensitivity analyses were performed to test whether our overall results were affected by internal study validity and small-study effects. Contour-enhanced funnel plots for asymmetry were drawn using STATA V.12 (Stata Statistical Software: Release 12 [program]. College Station, TX: StataCorp LP, 2011). The limited number and the heterogeneity of studies did not support the use of tests for funnel plot asymmetry. 20 Results A total of 23 studies based on 16 cohorts were identified for inclusion in the review, after a two-stage process. See figure 1 for a flow diagram of the study selection process. Eleven studies on CHD and eight studies on stroke met inclusion criteria for the quantitative syntheses (ie, studies based on independent samples reporting data from which the natural log of the estimate and its SE could be derived). Figure 1 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram. CHD, coronary heart disease. Table 1 summarises the descriptive characteristics of the evidence included in our review (see online supplementary appendix 2 for individual study characteristics). Table 1 Characteristics of the included evidence Assessment of loneliness and social isolation Prevalence of loneliness or social isolation ranged from 2.8% 40 to 77.2%. 31 Three papers measured loneliness, 21 , 30 , 42 18 measured social isolation 22–43 and two papers used a measure combining both dimensions. 34 , 35 The three papers on loneliness used different tools: a direct question asking about loneliness feelings during the day, 30 a question on feelings of loneliness in the past week 42 and a 13-item tool encompassing the perceived availability, adequacy or accessibility of social relationships. 21 Across the 18 studies on social isolation, 11 tools were used: six studies used the Berkman–Syme Social Network Index, 44 two studies used the 10-item Lubben Social Network Scale 45 and the remainder used nine different tools on the availability and/or frequency of contacts.
One cohort study used a measure combining social isolation and loneliness, the 11-item Duke Social Support Index, which asks about frequency of interaction and satisfaction with relationships. 46 Loneliness and social isolation were predominantly treated as categorical variables; two studies analysed them as continuous variables. 29 , 42 Only one study reported results based on measuring social relationships more than once. 42 Ascertainment of CHD and stroke A total of 4628 CHD and 3002 stroke events were recorded across the 23 papers. Eighteen studies measured incident CHD and 10 measured stroke (five studies reported on both outcomes). Diagnosis was ascertained from medical records, death certificates or national registers in all but four studies. The others used self-report, 34 , 35 or telephone interviews with a nurse or physician. 33 Two studies verified self-reported events against medical records. 29 , 36 , 38 The majority of studies with a measure of CHD focused on myocardial infarction and/or CHD death (11/18). Four studies included angina pectoris within their measure of CHD and two presented results for angina separately. The remit of the CHD measure was unclear in one study. 43 Study validity Figure 2 summarises risk of bias across the studies included in our review (see online supplementary appendix 3 for details of criteria). For many of the instruments assessing social relationships, information on reliability and validity was limited (online supplementary appendix 4 displays detailed information on the validity and reliability of tools). Four cohorts (six articles) relied on subjects reporting new diagnosis for all or part of the outcomes measured, and were judged to be at greater risk of misclassification (see online supplementary appendix 2 for details of outcome assessment). Limited information on attrition and blinding of outcome assessment meant that susceptibility to differential loss to follow-up and detection bias was unclear. We note that the multiplicity of risk factors investigated and the differential length of follow-up suggest that outcome assessment is unlikely to have been influenced by knowledge of baseline information on social relationships. Figure 2 Internal validity. NA, not applicable. The results reported in 12 papers were at lower risk of confounding, that is, analyses controlled or accounted for age, gender and socioeconomic status. 21 , 22 , 27 , 28 , 30 , 33 , 36 , 37 , 39 , 40 , 42 , 43 Four studies presented results from univariate analyses, 31 , 34 , 35 , 41 with a further study adjusting for age only. 26 The remaining eight reports did not control for socioeconomic status, although in the case of the Health Professionals Follow-up Study the relative socioeconomic homogeneity of the sample may limit the impact of this omission. 22 , 24 Loneliness, social isolation and CHD Across 11 studies (3794 events; one study did not report numbers) based on independent samples, the average relative risk of new CHD when comparing high versus low loneliness or social isolation was 1.29 (95% CI 1.04 to 1.59; see figure 3 ).
We found evidence of heterogeneity within this comparison (I²=66%, χ²=29.16, df=10, p=0.001) and explored whether this could be explained by social relationship domain (loneliness vs social isolation), gender, risk of confounding and higher risk of bias due to exposure measurement error. We found no evidence that effects differed across subgroups (see online supplementary appendix 5). We were not able to explore other potential sources of heterogeneity due to limited information and study numbers. Figure 3 Forest plot of studies investigating incident CHD. CHD, coronary heart disease. Social isolation and stroke Across nine independent study samples (2577 events; one study did not report numbers), the average relative risk of stroke incidence was 1.32 (95% CI 1.04 to 1.68; see figure 4 ). Following confirmation of heterogeneity (I²=53%, χ²=17.07, df=8, p=0.03), we performed subgroup analyses according to risk of confounding and risk of bias due to outcome measurement error (there were too few studies to perform any other analyses). There was no evidence of effects differing according to subgroup (see online supplementary appendix 6); we had insufficient information to explore other potential sources of heterogeneity. Figure 4 Forest plot of studies investigating incident stroke. Risk of bias across studies To test whether our findings were sensitive to internal study validity, we compared results with and without studies at greater risk of bias. We found no evidence of a difference in the ratio of the relative risks for CHD and stroke according to study validity (see table 2 ). Table 2 Sensitivity analyses Visual assessment of contour-enhanced funnel plots suggested that studies might be missing in areas of statistical significance (see figure 5 A, B). Comparing fixed-effects and random-effects estimates, we found the random-effects estimates to be larger (CHD: relative risk (RR) random-effects: 1.29, 95% CI 1.04 to 1.59, compared with RR fixed-effects: 1.18, 95% CI 1.06 to 1.31; stroke: RR, random-effects: 1.32, 95% CI 1.04 to 1.68, compared with RR fixed-effects: 1.19, 95% CI 1.03 to 1.36). This suggests the presence of small-study effects, which could be due to reporting bias. Although we found no evidence that study quality and true heterogeneity explained small-study effects in our review, these, along with chance, remain possible explanations. Figure 5 (A) Contour-enhanced funnel plot, coronary heart disease studies. (B) Contour-enhanced funnel plot, stroke studies. Additional studies Seven papers with a measure of social isolation were excluded from quantitative synthesis since they did not report data in a format suitable for pooling and/or shared data with other studies. 23 , 25–27 , 29 , 38 , 41 Of the four papers that did not duplicate data from other studies, two reported results based on the Honolulu Heart Program: social isolation appeared to predict CHD but not stroke, in analyses adjusted for age, though the association disappeared in multivariate analysis.
26 , 27 In a univariate analysis of data from the Atherosclerosis Risk in Communities Study (USA), the Lubben Social Network score was not significantly associated with incident CHD among people with prehypertension. 41 A further study found no evidence of an association between social isolation and CHD among men in France and Northern Ireland, 29 although we note that this study controlled for depression, one of the possible pathways through which social isolation might lead to disease. Discussion Summary of findings and comparison with other work Our review found that poor social relationships were associated with a 29% increase in risk of incident CHD and a 32% increase in risk of stroke. This is the first systematic review to focus on the prospective association between loneliness or social isolation and first occurrence of CHD or stroke. Earlier reviews reported that cardiovascular disease (CVD) prognosis is worse among people with poorer social relationships. 1 , 2 Narrative reviews on social support and CHD have described an association with prognosis as well as incidence, but the strength of evidence was low. 47 , 48 A recent review of seven papers linked loneliness and social isolation to occurrence of CHD, 49 but the effect on prognosis and incidence could not be disentangled. We found an association between poor social relationships and incident CVD comparable in size to other recognised psychosocial risk factors, such as anxiety 50 and job strain. 51 Our findings indicate that efforts to reduce the risk of CHD and stroke could benefit from taking both loneliness and social isolation into account, as we found no evidence to suggest that one was more strongly related to disease incidence than the other. This is in line with other research linking subjective and objective isolation to hypertension, a risk factor for both stroke and CHD. 8 , 9 Strengths and limitations Our focus on longitudinal studies allowed us to comment on the direction of the relationship between social relationships and health, and avoid the problem of reverse causation. Pooling results from studies of CHD that measured loneliness and isolation allowed us to answer the broader question of whether deficiencies in social relationships are associated with disease incidence. We anticipated and explored heterogeneity where possible but found no statistical evidence that components of internal validity were associated with effect estimates. Subgroup analyses specified a priori showed no difference between the associations of loneliness and social isolation with CHD incidence, and we found no evidence across studies of differences between men and women. We found insufficient data to explore the relative effects of the quantity and quality of relationships, or study effect modifiers in depth. Seven of the estimates included in our meta-analyses (five CHD, two stroke) were extracted from studies where participants were of higher socioeconomic status and in better health than the target population. The role of deficiencies in social relationships may be greater among individuals under stress, 52 and our results may underestimate the health-damaging implications of loneliness and social isolation among disadvantaged groups. Our review included some data collected from 1965; more recent strategies for CHD prevention may have modified the influence of loneliness and social isolation on disease incidence.
In common with other reviews of observational studies, we cannot infer causality from our findings, nor can we exclude confounding by unmeasured common causes, or reverse causation if deficiencies in social relationships are the result of subclinical disease. Publication bias is a concern in every review, and may lead us to overestimate the 'true' effect of poor social relationships. Conversely, our pooled effects could be a conservative estimate: most of the studies in this review statistically adjusted for factors that are likely to be on the causal pathway, such as depression or health-related behaviour. Implications The main finding of our review, that isolated individuals are at increased risk of developing CHD and stroke, supports public health concerns over the implications of social relationships for health and well-being. Our work suggests that addressing loneliness and social isolation may have an important role in the prevention of two of the leading causes of morbidity in high-income countries. A variety of interventions directed at loneliness and social isolation have been developed, ranging from group initiatives such as educational programmes and social activities, to one-to-one approaches including befriending and cognitive-behavioural therapy. These have primarily focused on secondary prevention, targeting people identified as isolated or lonely, but their effectiveness is unclear. Evaluative research is needed to investigate their impact on a range of health outcomes. Addressing health-damaging behaviours is also likely to be important, with lonely and isolated people more likely to smoke and be physically inactive, for example. 5 Primary prevention strategies, such as promoting social networks or developing resilience, have received limited attention to date. Risk factors for loneliness and social isolation such as gender, socioeconomic position, bereavement and health status are well established 14, 18 and hold the key to identifying people who may benefit from intervention. Our findings suggest that tackling loneliness and isolation may be a valuable addition to CHD and stroke prevention strategies. Health practitioners have an important role to play in acknowledging the importance of social relations to their patients. 53, 54 Key messages What is already known on this subject? People with poorer social relationships are at increased risk of premature death. The implications of social relationships for disease onset are unclear. What might this study add? This systematic review of prospective longitudinal studies found that deficiencies in social relationships are associated with an increased risk of developing coronary heart disease and stroke of around 30%. This association is comparable in size to other recognised psychosocial risk factors, such as anxiety and job strain. How might this impact on clinical practice? Efforts to reduce cardiovascular disease incidence need to consider loneliness and social isolation. Acknowledgments We thank Rocio Rodriguez-Lopez and Melissa Harden for carrying out the electronic literature searches, and Martin Bland and Dan Pope for their advice on meta-analysis software and data analysis. | Loneliness and social isolation are linked to around a 30 per cent increased risk of having a stroke or developing coronary artery disease—the two leading causes of illness and death in high income countries—finds an analysis of the available evidence, published online in the journal Heart.
The size of the effect is comparable to that of other recognised risk factors, such as anxiety and a stressful job, the findings indicate. Loneliness has already been linked to a compromised immune system, high blood pressure, and ultimately, premature death, but it's not clear what impact it might have on heart disease and stroke risk. The researchers trawled 16 research databases for relevant studies, published up to May 2015, and found 23 that were eligible. These studies, which involved more than 181,000 adults, included 4628 coronary heart disease 'events' (heart attacks, angina attacks, death) and 3002 strokes recorded during monitoring periods ranging from three to 21 years. Analysis of the pooled data showed that loneliness/social isolation was associated with a 29% increased risk of a heart or angina attack and a 32% heightened risk of having a stroke. The effect size was comparable to that of other recognised psychosocial risk factors, such as anxiety and job strain, the analysis indicated. This is an observational study, so no firm conclusions can be drawn about cause and effect, added to which the researchers point out that it wasn't possible to exclude the potential impact of other unmeasured factors or reverse causation—whereby those with undiagnosed disease were less sociable, so inflating the findings. Nevertheless, the findings back public health concerns about the importance of social contacts for health and wellbeing, say the researchers. "Our work suggests that addressing loneliness and social isolation may have an important role in the prevention of two of the leading causes of morbidity in high income countries," they write. In a linked editorial, Drs Julianne Holt-Lunstad and Timothy Smith of Brigham Young University, Utah, USA, agree, pointing out that social factors should be included in medical education, individual risk assessment, and in guidelines and policies applied to populations and the delivery of health services. But one of the greatest challenges will be how to design effective interventions to boost social connections, taking account of technology, they say. "With such rapid changes in the way people are interacting socially, empirical research is needed to address several important questions. Does interacting socially via technology reduce or replace face to face social interaction and/or alter social skills?" they ask. "Given projected increases in levels of social isolation and loneliness in Europe and North America, medical science needs to squarely address the ramifications for physical health," they write. "Similar to how cardiologists and other healthcare professionals have taken strong public stances regarding other factors exacerbating [cardiovascular disease], eg smoking, and diets high in saturated fats, further attention to social connections is needed in research and public health surveillance, prevention and intervention efforts," they conclude. | 10.1136/heartjnl-2015-308790 |
Earth | Connecting coastal processes with global systems | Nicholas D. Ward et al. Representing the function and sensitivity of coastal interfaces in Earth system models, Nature Communications (2020). DOI: 10.1038/s41467-020-16236-2 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-16236-2 | https://phys.org/news/2020-05-coastal-global.html | Abstract Between the land and ocean, diverse coastal ecosystems transform, store, and transport material. Across these interfaces, the dynamic exchange of energy and matter is driven by hydrological and hydrodynamic processes such as river and groundwater discharge, tides, waves, and storms. These dynamics regulate ecosystem functions and Earth’s climate, yet global models lack representation of coastal processes and related feedbacks, impeding their predictions of coastal and global responses to change. Here, we assess existing coastal monitoring networks and regional models, existing challenges in these efforts, and recommend a path towards development of global models that more robustly reflect the coastal interface. Introduction The coastal interface, where the land and ocean realms meet (e.g., estuaries, tidal wetlands, tidal rivers, continental shelves, and shorelines), is home to some of the most biologically and geochemically active and diverse systems on Earth 1 . Although this interface only represents a small fraction of the Earth’s surface, it supports a large suite of ecosystem services, including sediment and carbon storage, contaminant removal, storm and flooding buffering, and fisheries production 2 , with a global economic value of more than 25 trillion USD annually 3 . Roughly 40% of the world’s population resides within 100 km of the coast 4 ; much of the world’s energy, national defense, and industrial infrastructure is located along coasts; and shipping of goods and resources, which depends on coastal ports, is responsible for ~90% of international trade 5 . By 2100, up to 630 million people will live on land below annual flood levels under high CO 2 emission scenarios, 2.5 times more than in the present day due to sea-level rise (which expands floodplains), immigration, and urban growth 6 . These close connections between the coastal interface and human societies represent a grand challenge for sustainably managing the resources that coastal ecosystems provide as urban development and human populations along the coasts continue to rise. In addition to its importance for human livelihood, the coastal interface is an active component in the global cycling of carbon and nutrients. However, its global role remains poorly quantified in part due to the diversity of geomorphic settings, ecosystem types, their interconnectivity, and their dynamic behavior across a range of spatiotemporal scales 7 , 8 , 9 , 10 . Processes occurring in the water column and within sediments of tidal rivers, tidal wetlands, estuaries, and continental shelves significantly alter the quantity and quality of material that is both land- and marine-derived, and support the transfer of internally-produced materials across the coastal interface 11 . Further, a wide variety of coastal ecosystem types are demonstrated biogeochemical hotspots, in which process rates are not equivalent to the sum of terrestrial and aquatic contributions 12 , 13 . 
These highly dynamic biogeochemical processes are driven by two-way interactions between aquatic and terrestrial environments along the coast that remain poorly constrained empirically, resulting in limited representation in predictive models. Global Earth system models (ESMs) used to predict how ecosystems interact to affect Earth's climate currently route riverine exports from land directly to the ocean with no processing within the coastal interface (Fig. 1). Inputs from land into the ocean are represented as fluxes that do not interact in the boundary/interface space. The lack of any form of processing that might alter either the quality or quantity of material transport between adjacent systems 14 may severely limit our ability to correctly depict the amount and form of water, energy, and matter entering the oceanic and atmospheric systems, as well as the effects of a wide range of disturbances and stressors with compounding effects such as sea-level rise, storm surge, and eutrophication on coastal ecosystems and infrastructure 15, 16. Local-to-regional scale models do exist for sub-elements of the coastal interface such as marsh and estuarine hydrodynamics, sediment budgets 17, 18 and, more recently, photochemical and microbial processing of organic carbon 19. Thus, there is potential for coupling specific components of these process-rich fine-scale models with global scale ESMs to more accurately depict the coastal interface. Fig. 1: Earth system model representation of the coastal interface. Current Earth system models (ESMs) represent the land and ocean as disconnected systems, with freshwater discharge being the only meaningful connection. Next-generation models should represent land–sea connections by incorporating coastal features such as the tidal rivers, wetlands, estuaries, the continental shelf, and tidal exchange across the coastal terrestrial–aquatic interface. This likely necessitates coupling different models to produce details at the sub-grid scale. We review what is known about the ecological and biogeochemical functions of coastal ecosystems in the context of the attributes and processes that should be represented in ESMs. We then provide recommended approaches for advancing the representation of the coastal interface in ESMs in order to improve predictions of climate and of impacts on the world's economically valuable and densely populated coastal zone. We advocate for an improved mechanistic understanding of coastal interfaces from ecological and functional perspectives, the impact of coastal interfaces on global biogeochemical cycling and climate, and the effect of disturbances on coastal interfaces across a range of spatiotemporal scales. Overview of coastal interfaces Ecosystem-scale interactions This section describes the fundamental ecosystem-scale attributes and interactions that define the coastal interface and should be represented in coupled land–ocean models. Coastal interfaces are transition zones between land and ocean where the magnitude, timing, and spatial pattern of freshwater–seawater mixing determine the nature of biogeochemical gradients (Fig. 2). The primary defining feature of a coastal interface is a sea-to-land gradient in tidal influence on surface water elevation 20.
Hydraulic head gradients may drive the majority of groundwater fluxes and exchange 21, but groundwater also responds to tidal variation, with tidal fluctuations driving a two-way exchange of water and geochemical constituents such as CO 2 and salt between the land, groundwater, and surface waters 22. As such, we broadly define the coastal interface as any region where land, freshwater, and tides interact, or in other words all land surfaces (e.g., wetlands, marshes, floodplains) and water bodies (e.g., tidal rivers, estuaries, lagoons, deltas, and continental shelves) lying between purely inland and marine settings. These settings are complex and diverse by definition (Fig. 2) and encompass watersheds that lie below the head of tides. Fig. 2: Generalizable features of coastal ecosystems. a, b The inland extent of tidal influence on river flow increases with stream order, while the inland intrusion of salinity decreases. Rivers (and groundwater tables) on an active continental margin (e.g., US West Coast) are generally steeper in elevation, reducing how far inland tides permeate. Gradients in vegetation are influenced by these characteristics. c Estuarine environments can be broadly classified by their hydrodynamic properties such as net current velocity due to river flow (Fr f ) and how effectively tides mix a stratified estuary (M); fjords have low freshwater and tidal velocity scales due to their great depth whereas salt wedges have high contributions from rivers and a wide range of tidal contributions (adapted from Geyer and MacCready 23). d Shallow water depositional environments along the coast can be categorized based on the ratio of wave power to tidal power and whether they are regressive (i.e., net land gain; top half of the diagram) or transgressive (i.e., net land loss; bottom half of the diagram) environments. The top half of the diagram shows regressive environments such as deltas and strand plains. The bottom of the diagram shows transgressive environments such as estuaries and barrier lagoons. Open coast tidal flats and shelf environments can be linked to either type of coast with shelf width decreasing during regression (adapted from Steel and Milliken 107). Interactions between fresh groundwater discharge, river discharge, estuarine circulation, and tidal elevation determine the position and length of another defining feature of the majority of coastal interfaces — salinity gradients 23. In the case of the tidally-influenced reaches of rivers with high discharge such as the Amazon River, the landward salinity intrusion is limited and water can remain fresh some distance offshore onto the continental shelf 10. In contrast, smaller rivers experience significant salinity intrusion into river channels, groundwater, and soils 24. The extent of the salinity gradient directly influences terrestrial vegetation distribution along the land-to-sea hydrologic gradient, as well as soil and sediment biogeochemistry and geomorphology in a bi-directional manner. For example, tidal exchange can both deposit marine-derived material onto terrestrial landscapes 25 and export terrigenous material to the sea 26, 27. Tidal influences on coastal ecosystems go beyond effects on salinity distributions to include effects on water velocity, flow direction, and flood frequency with consequences for carbon and nutrient exchanges in tidally affected freshwater wetlands 27, 28.
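The hydrodynamic parameter space of Fig. 2c also hints at how estuaries might be binned into the functional types recommended later in this paper. The sketch below is hypothetical: the threshold values are illustrative assumptions chosen for readability, not regime boundaries published by Geyer and MacCready.

```python
# Hypothetical sketch of binning estuaries by two bulk hydrodynamic
# parameters from Fig. 2c: the freshwater Froude number (fr_f, net
# current velocity due to river flow) and the tidal mixing parameter
# (m, how effectively tides mix a stratified estuary). Thresholds are
# illustrative placeholders, not published regime boundaries.
def estuary_functional_type(fr_f: float, m: float) -> str:
    if fr_f < 1e-3 and m < 0.5:
        return "fjord"           # deep basin: weak river and tidal velocity scales
    if fr_f > 1e-1:
        return "salt wedge"      # strong river influence, any tidal strength
    if m > 1.0:
        return "well-mixed"      # tides homogenize the water column
    return "partially mixed"     # intermediate stratification

print(estuary_functional_type(fr_f=5e-4, m=0.3))  # -> fjord
print(estuary_functional_type(fr_f=0.3, m=1.5))   # -> salt wedge
```

In the published parameter space the regimes overlap rather than meet at sharp lines, so a classifier intended for model use would more plausibly return mixed or probabilistic memberships.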
The critical functions of shoreline stabilization and nutrient, carbon, and water cycling rely on vegetation within the coastal interface 15 . The distribution and productivity of coastal interface vegetation (e.g., algae, succulents, grasses, sedges, rushes, forbs, woody shrubs, and trees) is driven by gradients in flooding, salt and sulfide exposure, nutrient availability, topography, herbivore activity, and soil characteristics such as O 2 availability and redox potential 29 . Plant species diversity generally decreases with increasing salinity and flooding intensity, shifting from ecosystems that have many similarities with upland settings where tidal influence is minimal, to low-diversity communities dominated by halotolerant species such as cordgrass, mangroves, or succulents, and finally ending with perennially submerged aquatic plants such as seagrass 30 . Submerged vascular plants and emergent marshes are at the front line of the coastal interface because changes in their extent can have broad impacts across the whole coastal domain, and perhaps beyond 31 . Functional redundancy in the form of different species that contribute similarly to an ecosystem function is typically thought to be relatively low in such diversity-poor systems where few species can tolerate the harsh and fluctuating conditions, similar to terrestrial diversity-poor grasslands 32 . However, not all functional diversity occurs across species; the monospecific stands that dominate vast coastal wetlands often exhibit great genotypic diversity, which may yield high functional diversity despite low species richness 33 . As wetlands adapt to climate trends, the potential changes in relative representation of plant functional types — how models simplify plant diversity into manageable categories — must be incorporated into predictions of future coastal ecosystem function and adaptation. Understanding and characterizing such responses are critical to accurate representation of plant functionality in coastal interface models. Biogeochemical interactions and cycles This section describes the fundamental biogeochemical functions of coastal ecosystems that are likely the most critical to represent in regional and global scale models. Interactions among hydrology, vegetation, geomorphology, soils, and sediments influence the quantity and composition of carbon, nutrients, and redox-active compounds (e.g., O 2 , SO 4 2− ) within and exchanged by coastal interface ecosystems. Furthermore, many of these processes may be interactive across spatial scales 34 . For this reason, one of the largest challenges in constraining the role of coastal interfaces in global biogeochemical cycles is scaling our quantitative understanding of biotic and abiotic controls on molecular transformations and fluxes gained at the pore (e.g., nm 3 to µm 3 ), core (cm 3 ), or plot (m 2 or m 3 ) scale to estuarine sub-basins (km 2 ), entire estuaries (10–1000 km 2 ), and ultimately to the scale and process resolution of ESMs (100-10,000 km 2 ). The role of coastal ecosystems in the carbon cycle is important both for constraining global carbon budgets and also representing these significant fluxes in ESMs. Inland waterways concentrate material inputs from an entire watershed, which then pass through the coastal interface. 
In the case of organic carbon (OC), the small amount of OC that is mobilized from upland soils to rivers on an area basis (~1–5 g OC m −2 yr −1 globally) translates to 2 orders of magnitude greater loading (~300 g OC m −2 yr −1 ) into coastal interface ecosystems, which are a relatively small focal area (i.e., bottleneck) for inputs coming from the entire watershed 7 . It is currently estimated (via mass balance) that ~5.7 Pg of inorganic and organic C yr −1 is mobilized from upland terrestrial systems through inland waters and wetlands, of which 74% is returned to the atmosphere as CO 2 prior to delivery to the coastal ocean; this total flux is of similar magnitude as anthropogenic CO 2 emissions from fossil fuel burning (7.9 ± 0.5 Pg C yr −1 ), uptake by the ocean (2.4 ± 0.6 Pg C yr −1 ) and terrestrial biosphere (2.7 ± 1.2 Pg C yr −1 ) 10 . CO 2 emissions from tidal rivers have not yet been adequately included in global carbon budgets, but may make a substantial contribution considering the increasing surface area associated with the lower reaches of rivers 35 . Despite their relatively small global surface area (0.07–0.22%), vegetated coastal systems (seagrass, mangroves, and intertidal marshes) sequester 65–215 Tg C yr −1 , globally, which is equivalent to ~10% of the net residual land sink and 50% of carbon burial in marine sediments 36 . These ecosystems are being lost at a rate of 1–7% yr −1 due to human activities such as dredging, filling, eutrophication, and timber harvest 37 . Such habitat losses may also stimulate OC export and decomposition to CO 2 in coastal interface ecosystems 38 . Continental shelves play a similarly active role in global carbon cycling due largely to an abundance of nutrients from upwelling. Although continental shelves represent 7–10% of global ocean area, they contribute to 10–30% of global marine primary productivity; 30–50% and 80% of global inorganic and organic carbon burial in marine sediments, respectively; and up to 50% of the deep ocean OC pool 7 . The extent of OC transformation or loss as it passes through the coastal interface depends on its transport time, path, and exposure to the variety of surfaces (e.g., suspended particles, soil pores, and sediments) within the interface 10 . While allochthonous inputs can influence coastal ecosystem function, local sources of production also export or filter allochthonous transport. For example, processes such as low-tide rainfall can result in elevated mobilization of particulate OC (POC) from intertidal landscapes that can represent a significant fraction of annual POC export in many of these environments 39 . In addition, direct leaching from marsh plants and litter, exudation from roots, and biological production by algae are major local sources of chemically and optically distinctive dissolved organic matter to estuaries and coastal oceans 9 , 27 . Gradients in microbial community composition from rivers to continental shelves are generally controlled by salinity and redox (spatially) and river discharge (temporally) with distinct assemblages present in tropical, temperate, and high-latitude settings 40 , 41 . The hydrologic and geochemical gradients that characterize soils of coastal landscapes, particularly salinity and dynamic redox conditions, exert a strong influence over soil microbial community composition and metabolic functioning 25 . It remains a challenge to differentiate the effects of inundation and water chemistry on microbially driven biogeochemical functions in soils. 
At the pore scale, microbial activity, hydrologic connectivity, and drought legacy interact to regulate ecosystem functions 42. At the core and plot scale, salinity dominates the controls on soil organic carbon (SOC) and is negatively correlated with SOC content 43. Along natural gradients, increasing salinity is correlated with an increase in denitrification 44 and a decrease in methane emissions 45, while increases in salinity tend to decrease both methanotrophy and methanogenesis in previously freshwater environments 46. Variability in the duration of salinity exposure can influence the production of greenhouse gases. For example, long-term soil exposure to seawater decreases microbial CO 2 production 47 while short pulses of seawater exposure increase CO 2 emissions 48. However, rapid changes in salinity gradients could result in unexpected patterns of greenhouse gas emissions at sub-daily scales 46. Other coupled microbial cycles may be less sensitive to salinity. For example, a diverse community of sulfate-reducing bacteria associated with tidal freshwater systems has been shown to be relatively resistant to seawater intrusion 49. However, the full range of time frames (from seconds to years) over which these sensitivities could emerge has not been examined. Challenges for constraining coastal dynamics Hot spots and hot moments Because of their position at the interface of land and water, and thus constant exposure to terrestrial and aquatic fluxes, coastal ecosystems represent hot spots for processing and transformation of energy and matter (Fig. 3). Hot spots are defined as areas that show disproportionately high metabolic rates or carbon stocks relative to the surrounding areas 13, and to their spatial representation. We suggest that hot spots can range from fine scales (e.g., cm 3 , m 2 ) to the scale of entire estuaries (10–1000 km 2 ) and influence local to global scale material budgets depending on the process. Fig. 3: Biogeochemical characteristics of coastal interfaces. a Two-way exchange of water and materials between terrestrial and marine environments drives gradients in geochemical constituents (e.g., ions, carbon, nutrients), plant distribution, and ecosystem functions (e.g., carbon storage, greenhouse gas emissions, sediment accumulation). b Biogeochemical reaction rates generally occur at more rapid timescales (e.g., hours to days) in aquatic systems such as rivers compared to soils and sediments (years to millennia). c Likewise, the residence time of biogeochemical components is short in aquatic environments such as estuaries and the surface ocean compared to the deep ocean and its sediments. d Coastal interface biogeochemistry is complicated by an abundance of hot spots and moments for diverse reactions across scales that can significantly alter expected reaction rates and residence times. It is both feasible and desirable to represent in ESMs those hot spot dynamics that play a significant role in global scale biogeochemical cycles and are empirically understood. For example, mangroves cover 0.1% of the Earth's surface 50 but are among the most productive carbon-sequestering ecosystems on Earth (1023 Mg C ha −1 ) and thus are hot spots for carbon storage and uptake from regional 51 to global scales 52.
More broadly, estuaries could be considered hot spots for productivity, carbon storage 53, and/or decomposition 54 depending on hydrologic factors such as water residence time, estuarine exchange flow patterns, and position of the estuarine turbidity maximum zone 11. For example, ~18 Tg C yr −1 is buried in fjord sediments, globally, which is equivalent to 11% of marine carbon burial rates; much of this OC is terrestrially-derived owing to the steep topography and a short residence time between terrestrial soils and estuarine sediments in these environments 53. This is a feature of landscapes on active margins, whereas lower relief landscapes on passive margins have longer residence times and a greater extent of OC transformation prior to burial 55 (Fig. 2). These examples of depocenters (areas of maximum deposition) for rapid carbon burial are not only relevant to modern-day carbon cycling, but also act as significant carbon sinks over geologic timescales. For example, sustained burial of woody debris in Bengal Fan sediments has occurred over the last 19 million years; this debris is largely of lowland origin 56, suggesting that alterations to the land use and hydrology of coastal interface ecosystems could influence geologically-relevant processes over modern timescales. Coastal ecosystems are sensitive to rapid and disproportionate hydrological and biogeochemical fluctuations with terrestrial, atmospheric, and oceanic origins including extreme precipitation events 57, snow/ice melt 10, accumulation and enhanced dry deposition of atmospheric pollutants 58, extreme high tides, and storm surges 46, 59. Thus, hot moments — short time periods with disproportionately high metabolic rates — may play a prominent, but typically ignored, role in coastal ecosystem biogeochemical cycling. These hot moments may be controlled by processes occurring around the roots of plants (i.e., the rhizosphere) driven by interactions between plants and microorganisms, plant-driven water flow and solute transport, plant uptake of nutrients, soil chemical reactions such as rapid changes in redox potential 60 or sorption and cation exchange 61, or mixing of terrestrial and aquatic-derived substrates 38. Hot moments play a larger role in certain biogeochemical cycles than others. For example, although soil methane emissions generally decrease, and even become negative (i.e., uptake from the atmosphere) along coastal salinity gradients, rapid events such as ebullition induced by storm surge can result in momentarily high CH 4 fluxes 59. Similarly, periods of intense rainfall during low-tide conditions can result in elevated rates of erosion and transport of sediment and organic matter from intertidal platforms (e.g., vegetated marshes and unvegetated mudflats) to adjacent creeks and surrounding coastal ocean 62. A key challenge of measuring and modeling coastal interfaces is determining the spatiotemporal scale(s) needed to represent processes and systems such that the outcomes of interest are not biased by misrepresentation of available measurements in time and space, relative to the hot spots and hot moments that characterize the system. For example, inter-comparisons of methane models show large inconsistencies that are primarily due to uncertainties in temperature sensitivity, substrate limitation of CH 4 production, and wetland area dynamics 63.
While the last issue can be addressed by using consistent surface water inundation remote sensing products 64, the first two issues represent knowledge and modeling gaps that exist, in part, because of the highly dynamic nature of methane production and emission. High temporal resolution measurements of different processes are thus needed to couple ecosystem responses (e.g., greenhouse gas emissions) with the underlying controls to properly represent hot moments in regional models and ESMs. While new technologies are emerging that allow highly resolved organic carbon or gas flux measurements 46, 59, there is a lack of consensus on how to appropriately scale lateral land-water carbon fluxes or carbon emissions, whether from the bottom up or the top down 8, 65. The concept of hot spots and hot moments has been criticized for lacking a quantitative definition. For example, it has been suggested that unusually high spatiotemporal variability with ecosystem-scale importance should instead be described in terms of ecosystem control points, with four distinct categories: permanent control points, which experience sustained high rates of biogeochemical activity relative to surrounding areas, such as riparian and hyporheic zones; activated control points, which only support high rates when a limiting resource such as nutrients or oxygen is delivered; export control points, which accumulate reactants until some threshold is reached that allows export, such as OC accumulation in soils that is mobilized only during storms; and transport control points, which have a high capacity for transporting solutes/reactants, such as macropore flow paths in soils or stormwater drainage pipes 66. How to capture the spatial and temporal variability of ecological processes across coastal interfaces in this context remains unclear, and consequently represents a challenge for inclusion in ESMs. Disturbances and stress at the coastal interface Coastal ecosystems are broadly sensitive to disturbances and stress from surrounding watersheds and the ocean that result in anomalous (i.e., non-steady state) responses. Disturbance typically refers to events that temporarily alter ecosystem attributes (e.g., plant productivity, GHG fluxes, etc.) but occur infrequently enough to allow for recovery time during which attributes re-establish a normal dynamic equilibrium; in contrast, higher frequency or continuous stress events permanently shift the trajectory of an ecosystem attribute 67. Long-term stress to an ecosystem is also referred to as a press, as opposed to a pulse disturbance event, and these can interact, producing compounding effects 68. The dominant chronic stressors on coastal ecosystems are sea-level rise 15, 16, temperature increases 69, ocean acidification 70, land use conversion (e.g., urbanization), and long-term alterations to water flow (e.g., river impoundment and water extraction) and coastal-estuarine circulation 71 (Fig. 4). The dominant episodic disturbances are flooding (either from storm surges or upland sources); drought; and temporary vegetation removal via sedimentation, erosion, wildfire, harvest, and other human manipulations. Fig. 4: Coastal ecosystem disturbances, stressors, and vulnerability. a Increasing air and water temperatures, water acidification, rates of sea level rise, eutrophication, hypoxia, and frequency/magnitude of extreme storm surge events are among the primary threats to the ecology and hydro-biogeochemistry of coastal interfaces.
b Although the resilience of coastal ecosystems is relatively unknown, it is likely that compounding disturbances and chronic stress will eventually exceed their impact threshold, resulting in widespread collapse of ecological function. Additional drivers of change not shown include land use change, river impoundment, natural resource extraction, invasive species, droughts, floods, and fires (concept inspired by McDowell et al. 89). Changes in the global distribution of ecosystems along the coastal interface must be considered in light of centuries of direct human alterations. Climate change will likely increase the frequency of extreme weather events (droughts to tropical cyclones), dramatically altering the delivery of water, nutrients, and carbon to coastal zones. Chemical constituents associated with extreme weather result in extended periods of degraded water quality as well as switching modes of coastal ecosystems between autotrophy and heterotrophy 72. The timing and longevity of these perturbations add to the uncertainty of the role of these systems as greenhouse gas sources or sinks and as exporters of carbon to the oceans 7. Further, the ecological structure of coastal ecosystems is already experiencing the effects of sea-level rise, with coastal forest boundaries retreating inland 15 and salinization of tidal freshwater systems shifting their function and related rates of carbon burial and greenhouse gas (i.e., CO 2 and CH 4 ) emissions 48. Tidal marshes have been reclaimed for agricultural purposes throughout Western Europe and North America, and large-scale reclamation and land conversion continues in regions including coastal China, impacting hydrologic connectivity and ecosystem-scale fluxes with the construction of various engineered seawalls 73. Eutrophication of estuarine waters occurs as a result of both natural episodic nutrient inputs and long-term changes in land use practices (e.g., agriculture, septic systems, nitrogen-fixing vegetation, etc.), and in some cases results in hypoxic conditions that can harm fish and wildlife; hypoxia occurs due to both natural and anthropogenic causes 74. Deforestation can alter the function and resilience of coastal ecosystems, ultimately causing an irreversible loss of coastal wetlands. Mangrove forests in tropical regions are losing between 0.16 and 0.39% of land area annually to development, aquaculture, and agriculture 75. Such coastal land alteration has already released large quantities of soil organic carbon to the atmosphere as CO 2 , and an estimated 0.15–1.02 Pg C yr −1 continues to be emitted globally 76. Shifts in the interaction between freshwater hydrology and tidal influences due to sea-level rise, delta subsidence, or anthropogenic changes (e.g., impoundments) will impact coastal interface geomorphology, such as delta evolution, riverine and coastal sedimentation, and wetland ecological/physical structure 20. Changes in sediment supply may be considered a stress that alters the evolution of wetland structure and function, although episodic events such as landslides and volcanoes are disturbances under the return-interval-based terminology adopted here 67. Though meta-analyses have shown that salt marshes can keep pace vertically with sea-level rise 77, the lateral position of marshes is less stable, as they narrow or expand depending on the net sediment budget 78 and external stressors and disturbances such as waves, storm surge, and sea-level rise.
The contraction of marsh area likely produces an increased export of organic and inorganic material across the coastal interface 26, 79, although some portion of the material is re-deposited on the marsh plain during the landward transgression process 80. Although wave-induced erosion may be considered an episodic disturbance, moderate storms and diurnal winds are mainly responsible for the majority of salt marsh edge erosion 81. Internal deterioration of salt marshes, through salinity intrusion, herbivory, eutrophication, or other chronic factors, has also been linked with sediment export 82. Both lateral erosion and internal deterioration can be considered as net neutral processes from a budgetary perspective if landward migration corridors are available 26, 83. However, given the rapid nature of salt marsh loss and extensive coastal development, it is likely that salt marsh loss is a net contributor of material across the coastal interface. Another visibly prominent shift in coastal ecology is the poleward migration of mangroves due to declining freeze frequencies and landward migration due to seawater intrusion 84. Conversion from herbaceous-dominated to woody plant-dominated wetlands greatly increases aboveground carbon uptake on the landscape scale 52, 85 and can accelerate soil elevation gain 86 and long-term wood retention in channels and floodplain microtopography 85, influencing long-term persistence of these ecosystems. Conversely, seawater intrusion into freshwater wetlands at the upstream edge of the coastal interface can cause vegetation death and accelerated soil carbon loss, resulting in the collapse of the ground surface and a conversion of the plant-dominated wetland to open brackish water 87. Such landscape-level shifts are dependent on a complex interplay between land use (e.g., extent of coastal development), geomorphic conditions, and relative sea level rise. For example, direct salt marsh conversion to open water or tidal flats may have a greater importance than mangrove expansion into salt marsh habitats in low relief areas with high relative sea level rise, while tidal freshwater marshes may either increase or decrease in areal extent under mean or maximum sea-level rise scenarios 88. Among the largest uncertainties in projecting the future distribution, structure, and function of coastal interfaces is quantifying tipping points for the collapse of ecosystem structure and function 89 across the coastal domain in a way that incorporates the combined effects of a myriad of disturbances and stressors with compounding impacts (Fig. 4). Extreme events can push ecosystems already under stress beyond their tipping point, altering long-term ecosystem structure, and can also act as hot moments for biogeochemical activity in the shorter term 72. A fundamental goal of ESMs is the ability to accurately predict the influence of ecosystem distributions, structure, and function on global climate. Achieving this goal necessitates representation of feedbacks between the complex processes, stressors, and disturbances described above. Modeling the coastal interface Current generation ESMs such as the Energy Exascale Earth System Model 90 are coupled climate models that aim to simulate the Earth's climate system, which depends on terrestrial and ocean biogeochemistry, and the interactions between atmosphere, ocean, and land (as well as ice, in high latitude regions). Coastal interfaces fall in between these traditional ESM domains, and are not typically represented in such models 1.
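To make concrete why the direct land-to-ocean hand-off matters, the toy box model below contrasts the current pass-through treatment with an estuarine reactor that removes material at a first-order rate during transit. Both the removal rate and the residence time are hypothetical values chosen purely for illustration, not measured rates.

```python
# Toy sketch: riverine carbon delivered to the ocean with and without a
# coastal-interface "reactor". Parameter values are hypothetical.
import math

riverine_load  = 100.0   # carbon load leaving the watershed, arbitrary units per year
k_removal      = 0.8     # assumed first-order removal rate in the estuary, per year
residence_time = 0.5     # assumed estuarine water residence time, years

# Current ESM behaviour: the load is handed directly to the ocean grid cell.
delivered_passthrough = riverine_load

# With coastal processing: first-order decay over the transit time.
delivered_processed = riverine_load * math.exp(-k_removal * residence_time)
removed_in_interface = riverine_load - delivered_processed

print(f"delivered without coastal processing: {delivered_passthrough:.1f}")
print(f"delivered with estuarine processing:  {delivered_processed:.1f}")
print(f"removed within the coastal interface: {removed_in_interface:.1f}")
```

Even this one-parameter reactor diverts roughly a third of the load in the illustrative case, which is the essence of the argument for representing coastal processing rather than a direct hand-off.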
The dynamic nature and non-linear, unpredictable, and heterogeneous biogeochemistry of coastal ecosystems present huge challenges for model representation. Further, the omission of coastal interfaces raises a critical question: which coastal processes need to be considered, at what spatial and temporal scales should they be modeled, and what empirical data are needed to parameterize and assess model performance? The primary currencies exchanged across land–ocean–atmosphere–cryosphere boundaries include water, energy, carbon, nutrients (e.g., nitrogen, phosphorus, iron, etc.), and oxygen 91. From the watershed side, we argue that the model domain in most need of improvement is the low elevation shoreline zone, as modeled hydrological runoff and associated nutrient and carbon loads must pass reactively through marsh and deltaic regions before fluxes can be accurately transmitted to the receiving waterbody; reactive transport through the marsh system is closely linked to variations in water level 92. A study utilizing such reactive transport models in the southeastern US concluded that small increases in water level due to sea-level rise may increase nutrient export in marshes that have elevations near mean high water, but the opposite effect will occur in marshes with lower elevations 92. Incorporating these processes in the land components of ESMs will allow improved but one-way computation of reactive transport through the marshes to the receiving water models. This would be a significant improvement over their current functionality, in which estuarine and coastal processes, including fluxes of nutrients and particulate and dissolved OC 2, 27 and gradient-driven baroclinic exchange between the estuaries and the ocean 93, are incorrectly represented, without sufficient resolution to resolve these processes or the net sinks of carbon and nutrients in estuaries. When the challenge is evaluated from the ESM ocean components, improving coastal interface representation takes on a larger geographical and biogeochemical meaning. Continental coastlines in ESMs are typically represented by large grid cells; a single cell may encompass an entire estuary. As a result, sediment, carbon, and water delivered to the cells are fully mixed and diluted by the cell size and cannot accommodate complex biogeochemical interactions that occur in tidal rivers, estuaries, and the continental shelf. The central problem, in this as in a number of other ESM modeling domains, is how to model grid-averaged fluxes that may critically depend on subgrid-scale heterogeneities 94. Some global climate models have approached this issue by using estuarine box models, while regionally-refined or Voronoi meshes (shrinking the size and increasing the number of grid cells in the terrestrial–aquatic interface and other critical regions) are other options 95. These efforts successfully reconstructed observational data, and should be further used for hindcasting and forecasting under specific scenarios defined as pressing needs by the scientific community. Present state-of-the-art regional scale estuarine models (e.g., FVCOM-ICM, SCHISM, and ROMS) simulate estuarine hydrodynamics and biogeochemical processes in a robust manner 96. This is particularly true for the hydrodynamic components of these 3D baroclinic tools, which use turbulence closure schemes for parameterizing eddy viscosity and mixing processes.
As a result, the models accurately reproduce tidal circulation, stratification, and exchange flows in estuaries extending from the upstream river inflow boundary to the ocean boundary typically set at the continental shelf 54. When applied in high resolution over the nearshore intertidal regions, the models employ wetting and drying techniques to represent flooding 97 and are able to represent tidal processes over tidal distributaries, tidal flats, and marsh regions. In addition, researchers have developed modules for submerged aquatic vegetation, tidal marshes, and sediment diagenesis, allowing explicit implementation of known marshes within the estuaries 98. For example, one study utilizing FVCOM concluded that restored floodplain wetlands contribute large amounts of organic matter to estuaries, aiding in the restoration of historic trophic structure across the coastal interface 96. An application of the ROMS model to several Northeastern US estuaries demonstrated that the length scale ratio between tidal excursion and salinity intrusion is one characteristic that can be used to broadly distinguish estuarine hydrodynamic regimes 93. However, the implementation of the biogeochemistry of tidal marshes and submerged aquatic vegetation in fine-scale 3D coastal and estuarine models is an area of emerging technology and requires dedicated research efforts. One aspect missing from these estuarine models is groundwater–surface water interactions and the intrusion of seawater into aquifers; this predictive capability has been developed in a distinct class of groundwater models such as SEAWAT 99 and SUTRA 100, though field measurements are still needed to further understanding. Applications of SEAWAT, which does not simulate unsaturated flow (i.e., the water table only rises due to flow through the saturated zone), have shown that the model performs well when the ocean–aquifer interface is steep but performs poorly when the slope decreases 99. Representing disturbance and hot spots/moments in the above model framework adds additional complexity. While current ESMs are designed to capture and model forcings such as regional weather and sea-level rise 90, resulting disturbances, e.g., coastal flooding, are not represented, although finer-scale models are capable of accurately predicting storm surge and flooding over complex landscapes 101. Future ESM refinements of disturbance representation should thus focus on shifting/compound drivers of coastal ecosystem function (e.g., surface vegetation response to flooding) and hydrology (e.g., groundwater inundation versus riverine or tidal flooding), interactivity of biogeochemical cycling and elemental stores with all ESM components (e.g., redistribution of SOC due to coastal erosion), and inclusion of other types of disturbances (e.g., low-tide rainfall, permafrost thaw). Such disturbance regimes have been identified as important components of the local to regional scale response of coastal systems to change 15, 81. Recommendations Bridging the gap between model scales We recommend three potential approaches for improving coastal interface representation in ESMs with varying levels of process-level detail. The first approach is a simplified representation that involves finding generalizable features of coastal ecosystems that can be binned as different coastal interface functional types (Fig. 5a).
These functional types could include distinct tidal river classifications based on topographic regimes (i.e., passive and active margins) and stream order (Fig. 2b), estuarine regimes (e.g., salt wedge, fjord, well-mixed; Fig. 2c), intertidal ecosystems (e.g., tidal flats, deltas, saltmarshes; Fig. 2d), and shoreline ecosystems (e.g., rocky, sandy, coastal forest). On the ocean side of the interface, this could involve analytical solutions based on bulk properties such as mean estuarine water column depth, flow/depth-averaged salinity gradients, and mixing characteristics (eddy viscosity) to parameterize exchange flows, flushing, and loading. This approach provides a practical simplification that would allow an improvement over the present coastal interface representation in ESMs. Instead of only classifying a pixel as some fraction land and some fraction ocean, a portion of the pixel would also be classified as a certain type of tidal river, estuary, intertidal, and shoreline ecosystem, which is a significant improvement over the current state of the art. However, this would not allow dynamic two-way coupling of land, atmosphere, and ocean models. It is also unclear how the hot spot and hot moment dynamics described previously would be treated in such a framework, except perhaps as long-term averages. Fig. 5: Representing coastal interfaces in Earth system models. a Perhaps the simplest approach would be to classify coastal interfaces based on a series of functional types for their main features (i.e., different types of tidal rivers, estuaries, intertidal ecosystems, and shoreline ecosystems; Fig. 2). Process parameterizations derived from synthesized data would be applied to the fraction of a pixel occupied by each feature rather than the current state of the art, which assigns some fraction of coastal pixels as land and some fraction as ocean. b The most sophisticated approach would be to couple high-resolution regional coastal interface models with coarser resolution Earth system models using a variable pixel size (i.e., Voronoi mesh). c Perhaps the most feasible and robust approach would be a combination of the two, whereby existing or strategically developed high-resolution models are coupled, and classifications of functional types are applied to systems where data required for high-resolution models are not available. On the land side, column-based models that represent changes in vegetation and marsh biogeochemistry would build off existing ESM components. The models could be used to assess carbon stores and losses, and simulate complex biogeochemical cycles in response to simplified hydrological forcings related to sea-level rise and salinity changes. These model structures may have limitations in capturing lateral fluxes between columns, groundwater–surface water interactions, and geomorphic change. As a result, again, realistic representation of hot spots/moments would be limited or nonexistent. The second approach is a detailed, brute force 3D representation of coastal systems around the world (Fig. 5b). All major coastal seas, estuaries, and deltas worldwide would be explicitly simulated through nesting or similar two-way coupling procedures. The estuarine models with tidal marshes would provide a complete representation of coastal interface processes that allows feedback between each component, and would provide the most robust representation of hot spots/moments and disturbance effects.
This approach requires the development of estuarine circulation and biogeochemistry models of all major estuaries worldwide. In many developed areas, such models have already been developed 102, 103 and can serve as the starting point. In remote regions, model development may be performed using climatological information, but in these cases in situ data for model calibration/validation may be limited or unavailable. Major challenges to such a process-rich modeling approach include the coupling of model domains (atmosphere, land, ocean, surface, and subsurface) at appropriate scales, the computational resources to simulate these systems at resolutions needed to capture the process dynamics and feedbacks that distinguish individual regions from others, and the large (and perhaps impractical) efforts required for model development and, crucially, maintenance and accessibility to a range of users. On the plus side, however, the resolution demand includes both the temporal and spatial scales needed to accurately represent hot spots and hot moments. Fundamental research is still needed to understand these scales and whether the integrated (in both time and space) impact of hot spots and hot moments justifies the computational costs of explicitly representing them. The third approach is a combination of simplified and detailed representation of the world's coastline, whereby existing high-resolution estuarine, land and ecogeomorphic, and integrated hydrological models are used to leverage community efforts as virtual field sites for developing reduced-order modeling approaches for existing ESM components. For example, physical Earth system modeling parameterization in the land, river, and ocean components of the ESM could be employed at spatially-variable resolutions near the coasts to allow process-rich fidelity to span the scale between the largest ESM scales (100 km) and the smallest estuarine and marsh scales (1 m). We suggest that the brute force approach on its own is unrealistic and undesirable; it is also inconsistent with the central goals of ESMs, which center on abstracting and thus understanding the complete Earth system climate. Thus, melding both approaches is needed to leverage existing ESM capabilities in land, river, and ocean modeling, enabling them to predict under-resolved physics with enhanced fidelity by drawing on the information already available in existing site implementations of estuarine models. Under this framework, the land and atmosphere components of the ESM should provide worldwide watershed loading (flow and nutrients) and weather forcing (long and shortwave solar radiation and wind forcing). While processes such as worldwide watershed loading, weather forcing, and coastal flooding are actively being developed into ESM frameworks 104, their full incorporation into coupled ESMs is necessary before coastal interface representation can be addressed. Improving existing coarse resolution shoreline pixels with explicit 3D model representations using coupled high-resolution components requires, at a minimum, a synthesis of existing observational data at coastal interfaces that could leverage such incorporation. Such efforts could also be combined with spatially-distributed and/or grouping-based sensitivity analyses to further identify a reduced number of the most robust parameters to incorporate into ESMs 105.
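As a concrete illustration of the hybrid strategy, the sketch below shows one hypothetical way a coastal ESM grid cell could carry sub-grid functional-type fractions (Fig. 5a, c) alongside the conventional land and ocean fractions. The class, field names, and numbers are invented for illustration and do not correspond to any existing ESM code base.

```python
# Hypothetical data structure for a coastal grid cell that tracks
# sub-grid functional-type fractions instead of a bare land/ocean split.
from dataclasses import dataclass, field

@dataclass
class CoastalGridCell:
    land_fraction: float
    ocean_fraction: float
    # Fraction of the cell occupied by each coastal functional type.
    interface_fractions: dict[str, float] = field(default_factory=dict)

    def closes(self, tol: float = 1e-9) -> bool:
        # All fractions must sum to one so grid-averaged fluxes conserve mass.
        total = (self.land_fraction + self.ocean_fraction
                 + sum(self.interface_fractions.values()))
        return abs(total - 1.0) < tol

cell = CoastalGridCell(
    land_fraction=0.55,
    ocean_fraction=0.25,
    interface_fractions={
        "tidal river (passive margin)": 0.05,
        "salt-wedge estuary": 0.05,
        "salt marsh": 0.10,
    },
)
assert cell.closes()
```

Per-type parameterizations, derived either from synthesized observations or from the high-resolution regional models described above, would then be applied to each fraction when computing the cell's grid-averaged fluxes.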
Box 1 outlines the recommended criteria for process/element representation within the framework of system classifications to embark on such approaches for coastal interfaces. While representing the coast will consume additional computing resources, we suggest that this will have a low overall burden considering the small global extent of the coastline and the relatively low computational cost of existing land models (relative to the ocean and atmosphere). We posit that the outsized role of coastal systems in global biogeochemical cycles merits any additional computing resource needs. For example, the ocean and atmosphere modules of the Energy Exascale Earth System Model consume ~90% of the model's computing resources 90; only a fraction of the 10% used by the land module would be needed for coastal representation. Box 1. Key attributes and processes in the coastal interface that should be represented in ESMs either empirically (i.e., parameterized) or mechanistically (i.e., process). (Box 1 table columns: processes and attributes; global impact; relevant stress and disturbance.) | We live, work, and play at the coast. About 40 percent of the world's population currently lives near the coast. Much of the world's energy, defense, and industrial infrastructure is located on the coast, and shipping through coastal ports accounts for more than 90 percent of global trade. But coastal landscapes are also vulnerable to global change. By 2100, more than twice as many people could live in areas susceptible to flooding, given sea level rise, urban growth, immigration, and high carbon dioxide emission scenarios. "The main global climate [computer] models researchers use currently describe the coast as a pixel that is essentially half land and half ocean," said PNNL biogeochemist Nick Ward. "It just hands off the land to the sea." In a recent review article, published May 18 in Nature Communications, an interdisciplinary team of researchers led by Ward proposed a path to refining the representation of coastal interfaces in Earth system models used to predict the climate on Earth. They suggest describing the functions of coastal interfaces on an ecosystem scale, classifying coastal ecosystems into a few functional types, using detailed models that exist at local scales for those categories, and then applying lessons learned from local models to similar coastal ecosystems around the entire globe. Current Earth system models (ESMs) represent the land-sea interface simply. A new review article led by a PNNL researcher proposes a more refined representation that reflects the gradual transition from land to sea at coastal interfaces. Credit: Nathan Johnson | PNNL Defining coastal process gradients on an ecosystem scale Earth system models describe how ecosystems around the world interact through nutrient and energy transfer to influence climate. Biological and geochemical diversity are the foundation of many of those transfer cycles, and coastal ecosystems are home to some of the most biologically and geochemically diverse systems on the planet. However, most details about the function of coastal ecosystems are missing from current Earth system models. These models currently represent a coastal interface as a simple transition between land and sea. In reality, the interface is a gradual shift, shaped largely by the balance between tidal flow and freshwater discharge from land. Geography—steep or shallow coastlines—is a primary factor that influences how freshwater and seawater interact at a coastal interface.
The effect of tides travels further inland through tidal rivers for shallower coastlines than for coastlines where rocky cliffs meet the sea. Also, following the flow of water at a coastal interface reveals processes and gradients from the scale of molecules and microbes to trees and sediment. The mix of freshwater and saltwater at a coastal interface generates a salinity gradient that influences the types of plants that grow inland as well as the composition of microbial communities living in the soil and sediment. The plants and microbes in a coastal area affect the nutrient and carbon cycles of an ecosystem, and sediment washed from land into rivers and estuaries influences the nutrient availability. "These factors have been studied in labs and at different sites around the world," said Vanessa Bailey, a soil scientist at PNNL. "Now we want to bring together everyone studying the interface between land and the sea to better understand the connected processes happening in coastal interfaces on the ecosystem scale." The geography of a coastline—a steep active margin or a shallow passive margin—affects how freshwater and seawater mix at a coastal interface. Credit: Nathan Johnson | PNNL Describing coastal process disruption and resilience As scientists work to better understand how coastal ecosystems function, they also have a second task: to study how resilient these ecosystems are to global change. Because coastal ecosystems are an interface between land and sea, they experience effects from disruptions in both areas. Drought and land use changes inland can affect coastal processes, along with sea level rise in the ocean. With so many interconnected processes happening in coastal ecosystems, scientists face many questions when creating detailed predictive models of the function and response of coastal interfaces. Which coastal processes and which land-based processes need to be included? On what geographic and time scales? Do the models reflect how coastal ecosystems could respond to global change? Classifying coastal processes and coordinating future research In their article, the team of interdisciplinary researchers from academia, national laboratories, and federal agencies proposes a strategy to answer those questions. One way to represent coastal interfaces in Earth system models is with a model of every coastal ecosystem in the world. However, with 372,000 miles of coastline around the world, the researchers acknowledge that task is infeasible. Coastal ecosystems look very different around the world, but they could have common features that are useful to describe in detail for Earth system models. Credit: USGS, Cameron Venti, and Nathan Anderson on Unsplash So they offer another option: classify coastal ecosystems into a few functional types, make detailed models for those categories, and then apply those models to other coastal ecosystems. Functional classifications could describe geographic features, like rugged coastlines or gently sloping beaches. They could describe ecosystems in ways that gather different types of estuary systems, tidal flats, or shorelines together. "As the scientific community develops these models, it will also reveal gaps where more observational data are needed to validate the models," Ward said. To gather those data, he and his colleagues recommend leveraging existing long-term ecological monitoring networks and directing the research toward common questions. The task ahead is something no one can do alone, the team concluded.
Ward agreed: "Developing these detailed models will require coordination across institutions, funding agencies, and existing long-term research networks to develop a global-scale understanding of coastal interfaces." | 10.1038/s41467-020-16236-2 |
Physics | New device could contribute to a major increase in the rate of future optical communications | www.nature.com/ncomms/journal/ … /abs/ncomms2293.html Journal information: Nature Communications | http://www.nature.com/ncomms/journal/v3/n12/abs/ncomms2293.html | https://phys.org/news/2013-01-device-contribute-major-future-optical.html | Abstract Metallic components such as plasmonic gratings and plasmonic lenses are routinely used to convert free-space beams into propagating surface plasmon polaritons and vice versa. This generation of couplers handles relatively simple light beams, such as plane waves or Gaussian beams. Here we present a powerful generalization of this strategy to more complex wavefronts, such as vortex beams that carry orbital angular momentum, also known as topological charge. This approach is based on the principle of holography: the coupler is designed as the interference pattern of the incident vortex beam and focused surface plasmon polaritons. We have integrated these holographic plasmonic interfaces into commercial silicon photodiodes, and demonstrated that such devices can selectively detect the orbital angular momentum of light. This holographic approach is very general and can be used to selectively couple free-space beams into any type of surface wave, such as focused surface plasmon polaritons and plasmonic Airy beams. Introduction Photodetectors are widely used optical components that record the intensity of incident light by converting it into an electrical signal. In the detection process, any information about the phase profile of the incident wavefront is lost. Here we present a new approach to designing detectors, based on the principle of holography, which enables the detection of the number of twists of the wavefront within a wavelength of propagation—known as the topological charge of corkscrew-shaped wavefronts, characteristic of beams carrying orbital angular momentum (OAM)—while simultaneously taking advantage of widely available photodiode technology. The principle of holography, developed first in 1947 by Gabor 1 , was applied to free-space optical beams with the advent of the laser and later extended to surface waves by Cowan 2 in 1972. Holography is originally an imaging technique that consists of scattering an incident laser beam from an optically recorded interference pattern (the hologram) such that the scattered light reconstructs the three-dimensional image of an object. In the work of Cowan 2 , holograms were generated using surface waves as reference beams. In subsequent papers, authors made use of the collective excitation of free-electrons on metal surfaces 3 , known as surface plasmon polaritons (SPPs) to create and record holograms 4 , 5 , 6 . The interference between the light scattered by the object and the reference SPP beam creates high-intensity lines that imprint a phase grating onto the photographic film placed in contact with the metal surface. The information encoded in the film is reconstructed by sending a readout SPP beam propagating at the interface between the metal and the photographic film. In this paper, we demonstrate the use of the holography principle to design couplers for complex wavefronts. In particular, our approach simplifies the problem of recording the hologram by patterning it directly onto the device, thus removing the photographic layer commonly used in holography 7 . 
The hologram consists of a distribution of scatterers disposed directly onto the metal surface in the locations where constructive interference between the two waves occurs, that is, where the phase of both the incident wave and the SPP are equal. As an example of this powerful method, we have created holographic surfaces by interfering a converging SPP wave with incoming free-space beams carrying OAM, also known as optical vortex beams. Following the principle of reciprocity, these surfaces can scatter a diverging SPP wave into a free-space optical vortex beam as well. Vortex beams have a doughnut-like transverse intensity profile and carry an OAM of ħL i per photon, where L i is the topological charge and ħ is the reduced Planck's constant. OAM states are orthogonal and beams with different OAM can propagate collinearly while carrying a quantum number of any integer value. These intriguing properties make light with OAM appealing for applications in microscopy, optical trapping and optical communication 8 , 9 , 10 , 11 . To fully exploit the potential of OAM, several techniques have been developed to selectively detect it. Inspired by the pioneering work of Beth 12 on the detection of spin angular momentum, He et al . 13 measured the torque induced by OAM transferred from a vortex beam to absorptive particles. A more direct solution consists of reversing the wavefront using appropriate holographic plates 14 . The efficiency of such diffracting devices has been improved by an alternative mode-sorting approach, which combines interferometry and conjugate helical phase elements by directing photons with OAM states onto a series of output ports 10 . More evolved holographic plates can also simultaneously detect a large number of OAM states using a CCD camera 15 . Refractive elements have been recently developed to transform the OAM information into transverse momentum information for efficient detection of OAM with high quantum numbers with sensitivity down to the single photon limit 16 . All of these free-space methods require bulky systems comprising multiple components, which are usually not cost effective and are difficult to implement. Integrated photonic waveguide plates have been recently proposed to sort the OAM on-chip by sampling the wavefront of vortex beams using vertical gratings that couple light into phased-array waveguides 17 . In this paper, we propose a straightforward but powerful alternative approach to selectively detecting the OAM of light with a specific topological charge L i , using state-of-the-art commercial detectors integrated with couplers. This novel integration scheme is based on holographic interfaces which couple incident light into focusing SPPs. Results Holographic designs of SPP couplers for beams with OAM The schematic in Fig. 1a explains the procedure followed for the design of the holograms. We compute the in-plane interference pattern which occurs when a propagating and focused SPP coherently interacts with a single charged free-space vortex beam impinging on the interface at normal incidence ( Fig. 1b ). The vortex beam is assumed to be Figure 1: Interferometric design of the plasmonic couplers. ( a ) Schematic explaining the approach used to design our holographic interfaces. Holographic couplers are designed by considering the interference between an incident beam with some complex wavefront, such as a vortex beam with a topological charge L i =−1, impinging on the metal interface at normal incidence and a converging SPP beam. 
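The equi-phase construction described above is simple enough to prototype numerically. The sketch below is our illustration, not the authors' code: the grid, window size, and sign conventions are assumptions, with only the roughly 600 nm SPP period taken from the text. It marks groove locations where the phase of a normally incident vortex beam matches that of an SPP converging on the focal point, reproducing the fork-like binary pattern.

```python
import numpy as np

lam_spp = 0.60          # SPP wavelength in micrometres (~600 nm period, per text)
k_spp = 2 * np.pi / lam_spp
L_i = -1                # topological charge of the incident vortex beam

# Grid over the coupler region, centred on the intended SPP focus (micrometres).
x = np.linspace(-5.0, 5.0, 1024)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

phase_vortex = L_i * phi    # in-plane phase of the vortex beam at normal incidence
phase_spp = -k_spp * r      # phase of an SPP converging towards the origin

# Grooves go where the two phases coincide (mod 2*pi): constructive interference.
delta = np.mod(phase_vortex - phase_spp, 2 * np.pi)
hologram = delta < np.pi    # binarized interferogram
```

For L_i = 0 the pattern reduces to concentric grooves (a conventional plasmonic lens), while L_i = ∓1 introduces the single fork dislocation; a detector for charge L_i correspondingly needs a coupler with L_g = −L_i.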
( b ) Computer-generated interferogram. The incident vortex beam is assumed to be a Gaussian-vortex beam. ( c ) Binary version of the interferogram in panel b . The bright lines represent the locations of equal phase of the two beams, where maximum constructive interference occurs. ( d ) Scanning electron micrograph of a fabricated holographic interface where the grooves are placed at the equi-phase locations. Scale bar, 1 μm. Full size image $$E(r,\varphi ,z) = A(r,z)\,{\mathrm{e}}^{iL_{\mathrm{i}}\varphi }$$ (1) known as a Gaussian-vortex beam, characterized by a spiral wavefront of topological charge L i , where A ( r , z ) is the transverse Gaussian beam profile, r is the radial distance from the beam centre and z is the axial distance from the beam waist. In Fig. 1c , we compute the binary version of the hologram and in Fig. 1d , we show a micrograph of a holographic interface fabricated by focused ion beam (FIB) milling. Our structures were designed to work for visible light at a wavelength of 633 nm. Because of the transverse magnetic character of SPP modes, only the in-plane component of the SPP polarization can interfere with the incident light, which implies that about 10% of the SPP energy participates in the process. In Fig. 2 , we show two device designs: a coupler to convert an incident vortex beam of a given topological charge into a focused SPP wave and a conventional plasmonic lens for an incident Gaussian beam 18 , 19 , 20 . The performance of both structures is explored via finite-difference time-domain (FDTD) simulations using commercial software (Lumerical FDTD). Light coming at normal incidence is focused from the substrate side onto the apertured holographic interface. Then, it is coupled to a particular SPP mode and eventually propagates at the gold–air interface. In accordance with the design, the classical plasmonic lens efficiently focuses only the incident Gaussian beam, whereas the holographic fork-like coupler L g =1, designed for L i =−1, focuses only the incident light with this particular topological charge. Figure 2: Numerical simulations of the SPP intensity distribution. FDTD simulations of the intensity distribution of SPPs (in the plane of the metallic pattern), generated by illuminating the plasmonic holograms at normal incidence with a Gaussian beam and vortex beams of topological charges L i =1 and L i =−1 with a Gaussian transverse intensity profile. The simulated field intensity distributions are displayed for the region immediately to the left of the gratings, identified with the dashed squares in the scanning electron microscopy image. In the first row, the hologram is created by interference of a converging SPP beam and a normally incident Gaussian beam, which can therefore launch a focused SPP beam when the Gaussian beam is normally incident on the hologram. When the incident wavefront is instead helical, characterized by a non-zero OAM, the hologram launches surface waves that do not interfere constructively along the symmetry axis of the device and therefore are not properly focused. Conversely in the second row, when the hologram is designed for an incident vortex beam of well-defined and non-zero OAM, the SPP beam is correctly focused by the structure when illuminated with the correct topological charge ( L i =−1, second row). L g is the angular momentum provided by the coupler to the incident beam as discussed in the text. In all of the simulations, light is incident from the substrate side (SiO 2 ) at λ = 633 nm at normal incidence.
The period of the grooves is ~600-nm in accordance with the wavelength of SPPs but spatially varies according to the interference pattern; see Fig. 1b . Scale bar, 1-μm. Full size image Conservation of OAM and physical interpretation Because in the detection process incident light carrying OAM is converted into SPPs which carry no OAM, it is useful to consider how angular momentum is conserved in the course of the interaction with the structured metallic surface. This is conceptually a phase-matching process analogous to what occurs in a standard plasmonic coupler, made of periodic corrugations, which couples an incident plane wave to a SPP by providing a wavevector parallel to the surface, of magnitude inversely proportional to the grating period so that the tangential wavevector is conserved in the interaction. Recent experiments have shown that plasmonic metasurfaces consisting of suitably arranged metallic nanostructures can be designed to impart to the scattered wavefront an OAM of any topological charge 21 , 22 . In the present work, instead, we excite focused SPPs on the metallic nanostructured surfaces using optical vortex beams. One can understand this process in terms of conservation of angular momentum. Assuming a vortex beam (equation (1)), angular momentum conservation requires L i + L g =0, where L g is provided by the holographic coupler to cancel L i . Physically, the plasmonic lens with a fork-like dislocation fringe appearing at its centre is designed to impart an opposite OAM to the incident vortex beam so that it can excite a surface wave with no OAM, that is, a focused SPP. By design, the incident vortex beam ( L i =−1) converts into a SPP, which focuses along the axis of the device where an array of sub-wavelength holes funnels the light through the gold film towards the surface of the photodiode (schematic in Fig. 3a ). The sub-wavelength array of holes is placed in the focal region of the plasmonic coupler to act as a spatial filter. This coupler is positioned at the correct distance from the hologram to allow for preferential transmission through the holes of the focused SPP generated by a vortex beam of specific OAM. To avoid direct coupling of light into the detector, we choose the grating period to be much smaller than the SPP wavelength. The size of the holes is also sub-wavelength, to funnel light into the diode via extraordinary optical transmission 23 , 24 . An incident beam with the correct OAM will be focused at the centre of the device, whereas beams with any other topological charge will be defocused away from the array of holes and hence will not be detected. Figure 3b is an electron micrograph of the device with L g =−1, whereas Fig. 3c presents the one with L g =0. Figure 3: Experiment to detect the OAM with patterned photodiodes. ( a ) Cross section of the patterned photodiodes. The holographic interface couples incident radiation into SPPs, which are then funneled as light into the detector by an array of sub-wavelength holes. The figure also shows the electron micrographs (scale bar, 1 μm) of two holograms patterned on top of the detectors: ( b ) ( L g =1) is designed to focus a vortex beam with OAM L i =−1 onto the array of holes and ( c ) ( L g =0) is a conventional plasmonic lens that focuses a Gaussian beam with L i =0. ( d ) The function of the spatial lightwave modulator (SLM) is to impart to the incident laser beam a spiral-shaped wavefront of well-defined OAM. A halfwave plate is used to control the incident polarization. 
The generated photocurrent is measured with a Keithley model 2400 ammeter. Full size image Experimental results Figure 3d illustrates the experimental set-up used to characterize our integrated OAM detectors. The results of our experiment are summarized in Fig. 4 . Figure 4a shows the response of a photodiode patterned with a hologram created by the interference between a converging SPP and a Gaussian beam. Figure 4b presents the same measurements as in Fig. 4a but obtained using a photodiode patterned with the hologram designed by interference of converging SPP and an incident vortex beam of charge L i =−1. In Fig. 4a , the maximum photocurrent is observed for an incident Gaussian beam, that is, when the incident beam matches the design of the hologram, and the signal decreases considerably (below 10%) for beams with OAM. Conversely, Fig. 4b shows that the maximum photocurrent is measured for an incident vortex beam with L i =−1. Figure 4: Detected photocurrent as a function of the incident polarization and OAMs. Photocurrent is measured for two different holographic photodiodes, one patterned with a classical plasmonic lens ( Fig. 3d , L g =0) that focuses a Gaussian beam ( a ) and the other with a plasmonic lens ( Fig. 3c , L g =1) that focuses a vortex beam with L i =−1 ( b ). Each colour denotes a vortex beam incident with a different topological charge. The inset shows the orientation of the polarization of the incident electric field with respect to the grooves. Full size image We varied the incident polarization and we confirmed that, for both photodiodes, the maximum signal is obtained for an incident polarization oriented normal to the grooves of the holographic interface, that is, when the coupling to SPPs is maximized. Discussion The OAM conservation condition previously discussed implies that the centre of the vortex beam must be correctly aligned with the edge of the dislocation in the holographic coupler. We studied the effect of lateral displacements with respect to the centre of the hologram. The results of these simulations show that the OAM selectivity of the detector remains reasonable, even for lateral displacements ( Supplementary Fig. S1 ). In this work, we used standard silicon-based photodiodes which have a very large active area (about 13 mm 2 ) and exhibit a responsivity ~200 mA W −1 at λ =633 nm. Once patterned, the device performance is considerably degraded, with measured responsivity in the tens of microamps per watt. There are several reasons for this: first, we only detect light that is funneled through the sub-wavelength apertures. From FDTD simulations, we estimate that around 5% of the incident power coupled to the SPP is transmitted through the interface, eventually exciting SPPs on the other side of the interface and free-space photons. Only the free-space photons on the backside of the substrate are detected by the photodiode. We estimate from simulations that around 2% of the incident light reaches the detector ( Supplementary Fig. S2 ). It is important to point out that the wavelength used for the experiment is close to the intra-band absorption in gold where losses are considerable, thus shortening the SPPs propagation distance. This has several implications: it limits not only the size of the gratings but also the length of the array of holes to the SPP propagation length ( Supplementary Fig. S3 ); thus, only a small portion of the photodiode is actually used for the detection. 
Detection of OAM can be achieved with substantially higher efficiencies at longer wavelengths, especially for the frequencies in the region of interest for optical communications 25 . Note that by replacing the detector with an avalanche photodiode, one could increase the responsivity by a factor ~10 3 up to ~1 mA W −1 . Detection of higher OAM can be achieved with good selectivity by modifying the holographic coupler; see the example in Supplementary Fig. S4 with a doubly charged OAM detector. Further increase of the selectivity between adjacent OAMs would require precise tailoring of the transmitting array of holes as discussed in Supplementary Fig. S4 . In conclusion, we demonstrated a new technique for the design of plasmonic couplers for beams with OAM. The concept, inspired by the principle of holography, relies on coherent scattering of light from free space into SPPs by patterning suitable grooves at locations where the two beams have the same phase. We demonstrated that holographic surfaces can convert free-space laser beams with different topological charges into focused SPP waves. In this way, we extended the functionality of a standard photodiode, enabling the sorting of the various incident OAM states of light. The holographic method of generating plasmonic couplers has potential for applications in various areas of integrated optics and can be used, in a straightforward way, to design other interfaces, for example, for the generation of plasmonic non-diffracting beams 26 , 27 . Methods Detector fabrication We patterned our vortex hologram on the front window of commercially available silicon detectors, which were first coated with a 200-nm-thick gold film using electron beam evaporation. The shallow grooves forming the hologram were defined by FIB milling (Zeiss NVision 40). The groove width is ~150-nm, 75-nm deep (halfway through the gold film) and the spacing is given by the simulated interference pattern, as discussed in the introduction. A second FIB milling is performed at higher exposure to create an array of sub-wavelength holes in the focal region of the plasmonic coupler. As the holes pierce all the way through the gold film, the holes spacing is chosen as ~200-nm, which is smaller than the free-space wavelength to avoid direct coupling of the light into the detector. OAM measurements A TEM 00 laser mode of a linearly polarized He-Ne laser, emitting at 633-nm, is incident on a programmable spatial lightwave modulator (Hamamatsu LCOS-SLM X 10468) to modify its OAM and is later directed to the silicon photodetector (Hamamatsu S2386-18K). A halfwave plate is inserted along the beam path to control the polarization of the incident light onto the detector. The current is monitored at the photodiode with a Keithley model 2400. Additional information How to cite this article: Genevet, P. et al . Holographic detection of the orbital angular momentum of light with plasmonic photodiodes. Nat. Commun. 3:1278 doi: 10.1038/ncomms2293 (2012). | (Phys.org)—At a time when communication networks are scrambling for ways to transmit more data over limited bandwidth, a type of twisted light wave is gaining new attention. Called an optical vortex or vortex beam, this complex beam resembles a corkscrew, with waves that rotate as they travel. Now, applied physicists at the Harvard School of Engineering and Applied Sciences (SEAS) have created a new device that enables a conventional optical detector (which would normally only measure the light's intensity) to pick up on that rotation. 
The device, described in the journal Nature Communications, has the potential to add capacity to future optical communications networks. "Sophisticated optical detectors for vortex beams have been developed before, but they have always been complex, expensive, and bulky," says principal investigator Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering. In contrast, the new device simply adds a metallic pattern to the window of a commercially available, low-cost photodetector. Each pattern is designed to couple with a particular type of incoming vortex beam by matching its orbital angular momentum—the number of twists per wavelength in an optical vortex. Sensitive to the beam's "twistiness," this new detector can effectively distinguish between different types of vortex beams. Existing communications systems maximize bandwidth by sending many messages simultaneously, each a fraction of a wavelength apart; this is known as wavelength division multiplexing. Vortex beams can add an additional level of multiplexing and therefore should expand the capacity of these systems. "In recent years, researchers have come to realize that there is a limit to the information transfer rate of about 100 terabits per second per fiber for communication systems that use wavelength division multiplexing to increase the capacity of single-mode optical fibers," explains Capasso. "In the future, this capacity could be greatly increased by using vortex beams transmitted on special multicore or multimode fibers. For a transmission system based on this 'spatial division multiplexing' to provide the extra capacity, special detectors capable of sorting out the type of vortex transmitted will be essential." The new detector is able to tell one type of vortex beam from another due to its precise nanoscale patterning. When a vortex beam with the correct number of coils per wavelength strikes the gold plating on the detector's surface, it encounters a holographic interference pattern that has been etched into the gold. This nanoscale patterning allows the light to excite the metal's electrons in exactly the right way to produce a focused electromagnetic wave, known as a surface plasmon. The light component of this wave then shines through a series of perforations in the gold, and lands on the photodetector below. If the incoming light doesn't match the interference pattern, the plasmon beam fails to focus or converge and is blocked from reaching the detector. Capasso's research team has demonstrated this process using vortex beams with orbital angular momentum of −1, 0, and 1. "In principle, an array of many different couplers and detectors could be set up to read data transmitted on a very large number of channels," says lead author Patrice Genevet, a research associate in applied physics at SEAS. "With this approach, we transform detectors that were originally only sensitive to the intensity of light, so that they monitor the twist of the wavefronts. More than just detecting a specific twisted beam, our detectors gather additional information on the phase of the light beam." The device's ability to detect and distinguish vortex beams is important for optical communications, but its capabilities may extend beyond what has been demonstrated. "Using the same holographic approach, the same device patterned in different ways should be able to couple any type of free-space light beam into any type of surface wave," says Genevet. 
| www.nature.com/ncomms/journal/ … /abs/ncomms2293.html |
Physics | New quantum dot microscope shows electric potentials of individual atoms | Christian Wagner et al, Quantitative imaging of electric surface potentials with single-atom sensitivity, Nature Materials (2019). DOI: 10.1038/s41563-019-0382-8 Journal information: Nature Materials | http://dx.doi.org/10.1038/s41563-019-0382-8 | https://phys.org/news/2019-06-quantum-dot-microscope-electric-potentials.html | Abstract Because materials consist of positive nuclei and negative electrons, electric potentials are omnipresent at the atomic scale. However, due to the long range of the Coulomb interaction, large-scale structures completely outshine small ones. This makes the isolation and quantification of the electric potentials that originate from nanoscale objects such as atoms or molecules very challenging. Here we report a non-contact scanning probe technique that addresses this challenge. It exploits a quantum dot sensor and the joint electrostatic screening by tip and surface, thus enabling quantitative surface potential imaging across all relevant length scales down to single atoms. We apply the technique to the characterization of a nanostructured surface, thereby extracting workfunction changes and dipole moments for important reference systems. This authenticates the method as a versatile tool to study the building blocks of materials and devices down to the atomic scale. Main Electrostatic interactions are a key element in the functionality of many nanoscale materials and systems. For example, the performance of organic and inorganic semiconductor devices is affected by electric dipoles at the relevant interfaces 1 , 2 , 3 , 4 . Because both established and novel device concepts aim for the few-nanometre scale 5 , 6 , 7 , 8 , the relevance of microscopic electric potentials in functional materials and devices increases continually. On a more fundamental level, the measurement of electric potentials can also give valuable insights into primary mechanisms at surfaces and interfaces, such as reconstruction or relaxation, mechanical distortion, charge transfer and chemical interaction 9 , which all create electric potentials at the atomic scale. However, the importance of electrostatic interactions is not limited to semiconductor and solid-state materials. Also, the structure and aggregation of biomolecules, for example, are steered by the interactions between polarized functional groups 10 , 11 , and electrostatic interactions play an important role in catalysis, too 12 . In many contexts, surfaces are the natural environment in which to study nanoscale electric potentials, either because of their intrinsic importance (semiconductor devices, catalysis) or because they offer a convenient substrate for immobilization and accessibility (biomolecules). The state of the art in electric potential imaging at surfaces is Kelvin probe force microscopy (KPFM). KPFM is suitable for use with structure sizes of several tens of nanometres 4 , 13 , 14 , 15 , 16 . For smaller structures, in the realm of single atoms or molecules 17 and cutting-edge semiconductor devices 5 , KPFM is problematic as high resolution can only be reached at small tip–surface distances where chemical forces start acting. Their influence hampers a quantitative interpretation of the data 18 . 
KPFM images with sub-molecular resolution obtained with carbon-monoxide-decorated tips 19 , 20 suffer from the same limitation, notwithstanding continuous efforts to improve these methods 20 , 21 , 22 and are moreover slow and limited to small surface areas (typically one single molecule). Thus, a versatile, fast and quantifiable scanning probe method for imaging electric potentials at the atomic scale is lacking. Recently we reported that a single molecule, when attached to the tip of a non-contact atomic force/scanning tunnelling microscope (NC-AFM/STM) by controlled manipulation 23 , 24 , may act as a quantum dot (QD) and can be used as a sensor to detect and image electric potentials, resulting in scanning quantum dot microscopy (SQDM) 25 , 26 . Here, we present a rigorous analysis of the corresponding imaging mechanism and show that SQDM can be used to map out surface potential distributions and dielectric surface topographies quantitatively. Most notably, we find that the screening action of the combined tip/surface system induces an exponential decay of electric potentials with lateral distance from the probing tip. This effect leads to the exceptionally high lateral resolution of SQDM. A detailed investigation of this exponential screening leads us to an image deconvolution algorithm that, in conjunction with far-reaching instrumental developments 27 , transforms SQDM into a powerful imaging technique for electric surface potential imaging in ever smaller nanostructures and novel materials. Principle and formalism of SQDM imaging A schematic drawing of the molecular QD at the tip apex of an NC-AFM/STM is shown in Fig. 1a 25 , 26 . This set-up can be considered as a single-electron box consisting of two capacitances in series 28 , but it can also be understood as an electrostatic boundary value problem where the potential Φ QD at the QD at r is determined by the shape of the confining boundary \({\cal{T}}\) (the conductive surfaces of tip and sample connected at infinity, Fig. 1c ) and by the potential distribution Φ s ( r ′) on it. Fig. 1: Principle and quantitative nature of SQDM. a , Principle of electric potential sensing with a molecular QD. When placed above a bare surface, the QD changes its charge by ± e if its potential Φ QD = α 0 V b reaches the threshold values Φ ± at \(V_{\mathrm{b}} = V^ \pm _0\) . The local topography (green) and the surface potential Φ s (violet) of a nanostructure (here, adatom) change the QD potential Φ QD , as α 0 changes to α and V * is added to V b . In the equivalent circuit diagram superimposed on the left, the QD is marked by a dashed box. b , Δ f ( V b ) = −d F z /d z × f 0 /(2 k 0 ) curves (qPlus type NC-AFM with k 0 = 1,800 N m −1 , f 0 = 31,200 Hz, amplitude A = 0.2 Å) in the V b ranges where the QD changes its charge state (upper panel V − ; lower panel V + ), measured above the empty surface (black) and above a nanostructure (violet). The change from α 0 to α and the additional contribution V * (see a ) lead to a shift from \(V_0^ \pm\) to V ± . c , Illustration of SQDM as a boundary value problem with the QD at r . Tip and surface are connected at infinity. d , The PSF γ * describes the contribution of the potential \(\varPhi _{\mathrm{s}}({\bf{r}}_{||}^\prime )\) (shades of violet illustrate magnitude) to Φ QD at all possible positions r || (visualized by green lines). 
For flat surfaces, reciprocally, Φ QD at a certain lateral position r || is the sum over all local potentials on the sample surface at positions \({\bf{r}}_{||}^\prime\) weighted with γ *. e , Series of V * profiles across a PTCDA/Ag(111) island edge measured with different tips and tip heights as indicated. The expected profiles for corresponding measurements with KPFM 13 and with an ideal point probe ( z = 25 Å) are shown for comparison. The measured workfunction change for a PTCDA layer corresponds to Δ W = − eV * at positive r || . Values for Δ W reported in the literature are indicated as black squares. Full size image It is the principle of SQDM to compensate the local variations of Φ s and of the sample topography beneath the QD encountered during scanning by adjusting (and recording) the sample bias V b . The condition Φ QD = const. indicates a correct compensation. This does not put special requirements on the choice of the QD and on its theoretical description. Hence, we may use the approximation of a point-like QD. Because compensation is verified at r , this is the point where the influence of the surface potential is measured. We can relate the information in this imaging plane back to the properties of the surface itself by defining a specific \({\cal{T}}\) that approximates the experimental situation and solving the corresponding boundary value problem. For Dirichlet boundary conditions we obtain 29 $$\varPhi _{{\mathrm{QD}}}({\bf{r}}) = - \frac{{\varepsilon _0}}{e}\left[ {\mathop {\iint}\limits_{{\mathrm{sample}}} {\left( {\varPhi _{\mathrm{s}}({\bf{r}}^\prime ) + V_{\mathrm{b}}} \right)} \frac{{\partial G({\bf{r}},{\bf{r}}^\prime )}}{{\partial {\bf{n}}^\prime }}{\mathrm{d}}^2{\bf{r}}^\prime + \mathop {\iint}\limits_{{\mathrm{tip}}} {\varPhi _{\mathrm{s}}} ({\bf{r}} + {\bf{R}})\frac{{\partial G({\bf{r}},{\bf{r}} + {\bf{R}})}}{{\partial {\bf{n}}^\prime }}{\mathrm{d}}^2{\bf{R}}} \right]$$ (1) (we discuss the case of non-conductive surfaces elsewhere 30 ). Here, n ′ is the surface normal at r ′ or r + R , respectively, and G is the Green’s function, which encodes the boundary shape via \(G({\bf{r}},{\bf{r}}^\prime ) = 0\,\forall {\bf{r}}^\prime \in {\cal{T}}\) (Fig. 1c ). All nanostructure-related charges that are just inside the volume enclosed by \({\cal{T}}\) create, together with their image charges, the locally varying surface potential Φ s in equation ( 1 ). Because the tip and QD have a fixed spatial relation and are sufficiently far from the sample, we can assume that ∂ G ( r , r + R )/∂ n ′ barely varies with r (during scanning) and consequently the second integral in equation ( 1 ) is a constant Φ T . Thus, from now on, Φ s refers to the sample surface potential only and we describe the shape of \({\cal{T}}\) on the sample side as a topography of height t d superimposed onto a plane such that r = ( r || , z ) and \({\bf{r}}^\prime = ({\bf{r}}_{||}^\prime ,t_{\mathrm{d}})\) . Equation ( 1 ) is central as it relates data on Φ QD in the imaging plane at z to the desired surface properties. For a chosen shape of \({\cal{T}}\) , \(\varPhi _{\mathrm{s}}({\bf{r}}_{||}^\prime )\) as calculated by inversion of equation ( 1 ) corresponds to the potential distribution, which, if applied to \({\cal{T}}\) , would reproduce the Φ QD data in the imaging plane. Thus, a more accurate representation of \({\cal{T}}\) yields a better recovery of \(\varPhi _{\mathrm{s}}({\bf{r}}_{||}^\prime )\) . 
As it turns out, the approximation of a planar surface with t d = 0, which we will adopt in the following, provides excellent results for the systems investigated here, where generally t d ≪ z . A discussion of alternative approximations for t d and their consequences is given in ref. 30 . Moreover, we adopt the reasonable assumption of an axially symmetric tip. For these conditions, ∂ G /∂ n ′ depends only on the relative distance \(|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |\) and corresponds to the point spread function (PSF) \(\gamma (|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |,z) \equiv - \varepsilon _0/e \times \partial G/\partial {\bf{n}}^\prime\) of SQDM (Fig. 1d ). Then, equation ( 1 ) becomes (from now on we drop the explicit reference to z ) $$\varPhi _{{\mathrm{QD}}}({\bf{r}}_{||}) = \left( {V^ \ast ({\bf{r}}_{||}) + V_{\mathrm{b}}} \right)\alpha ({\bf{r}}_{||}) + \varPhi _{\mathrm{T}}$$ (2) with the following definitions for α and V *: $$\alpha ({\bf{r}}_{||}) = \mathop {\iint}\limits_{{\mathrm{sample}}} \gamma (|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |){\mathrm{d}}^2{\bf{r}}_{||}^\prime = \frac{{\partial \varPhi _{{\mathrm{QD}}}}}{{\partial V_{\mathrm{b}}}}$$ (3) $$V^ \ast ({\bf{r}}_{||}) = \mathop {\iint}\limits_{{\mathrm{sample}}} {\varPhi _{\mathrm{s}}} ({\bf{r}}_{||}^\prime )\gamma ^ \ast (|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |){\mathrm{d}}^2{\bf{r}}_{||}^\prime$$ (4) Analogous equations can be derived for a more general \({\cal{T}}\) with t d ≠ 0 (ref. 30 ). Here, α is the gating efficiency, γ * = γ / α is the PSF normalized to one, and V * quantifies the effect of the surface potential distribution Φ s on Φ QD , expressed in the form of an equivalent (additional) bias potential. For extended, homogeneous objects with constant Φ s , V * is in fact the surface potential Φ s = V *. Inhomogeneous potential distributions \(\varPhi _{\mathrm{s}}({\bf{r}}_{||}^\prime )\) can be obtained from V * via a deconvolution with the PSF γ * (equation ( 4 )), as we will discuss later. First we demonstrate how α and V * can be extracted from an actual SQDM measurement. At two specific Φ QD values, Φ + and Φ − , the QD charge state changes as an electron tunnels across the contact between tip and QD 25 . These charging events cause sharp dips in the frequency shift Δ f ( V b ) of the NC-AFM (Fig. 1b ), corresponding to steps in the tip–surface force 31 , 32 . These dips at V b = V ± ( r || ) are indicators that the charging condition Φ QD = Φ ± has been reached and that the compensation of V * was hence successful. Thus, V ± are the primary measurands of SQDM. To image V ± , we have developed a feedback controller that maintains the charging condition either for V + or for V − as the surface is scanned twice at constant height 27 (see Methods ). After some algebra we can derive the following two relations from equation ( 2 ) and the charging conditions: $$\alpha _{{\mathrm{rel}}}({\bf{r}}_{||}) = \frac{{\alpha ({\bf{r}}_{||})}}{{\alpha _0}} = \frac{{V_0^ + - V_0^ - }}{{V^ + ({\bf{r}}_{||}) - V^ - ({\bf{r}}_{||})}}$$ (5) and $$V^ \ast ({\bf{r}}_{||}) = \frac{{V_0^ - }}{{\alpha _{{\mathrm{rel}}}({\bf{r}}_{||})}} - V^ - ({\bf{r}}_{||})$$ (6) Because SQDM measures variations in Φ s and α , we have selected a reference point on the surface where we define z , set Φ s ≡ 0 and α ≡ α 0 and denote the measured V ± values as \(V_0^ \pm\) . With this, we have established V *( r || ) and α rel ( r || ) as the secondary SQDM measurands. 
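Equations (5) and (6) translate directly into a few lines of array code. The sketch below is ours; the array names and interface are illustrative. It converts two co-registered maps of the primary measurands V± into the secondary measurands, given the reference values V0± recorded over the bare surface.

```python
def sqdm_secondary_measurands(v_plus, v_minus, v0_plus, v0_minus):
    """Apply equations (5) and (6) pixel-wise.

    v_plus, v_minus: 2D arrays of the primary measurands V+(r) and V-(r);
    v0_plus, v0_minus: scalar reference values taken at the point where
    Phi_s = 0 and alpha = alpha_0 by definition.
    Returns (alpha_rel, v_star).
    """
    alpha_rel = (v0_plus - v0_minus) / (v_plus - v_minus)   # eq. (5)
    v_star = v0_minus / alpha_rel - v_minus                 # eq. (6)
    return alpha_rel, v_star
```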
Note that the absolute value of α can be determined from V ± ( z ) and corresponding force change data sets 33 , which is, however, not required for SQDM. Measuring workfunction changes We demonstrate the quantitative measurement of the surface potential Φ s for a homogeneous sample by looking at workfunction changes. The corresponding experiment was carried out on PTCDA (3,4,9,10-perylene-tetracarboxylic dianhydride) adsorbed on the Ag(111) surface with a PTCDA molecule as the SQDM sensor 25 , 26 using a qPlus NC-AFM. Although PTCDA/Ag(111) is an extremely well-studied benchmark system 34 , no consensus has yet been reached regarding the workfunction change Δ W upon adsorption of PTCDA on Ag(111). In fact, values between −0.1 eV and +0.27 eV have been reported from photoemission experiments 35 , 36 , 37 (black squares, Fig. 1e ). In Fig. 1e we show line profiles of V *, measured from the bare Ag(111) surface across PTCDA island edges deep into a compact PTCDA island for different tip preparations, tip heights and even different NC-AFM tuning forks. Remarkably, the line scans practically collapse onto a single curve, proving that the workfunction changes as determined by SQDM are robust and reproducible, and yield a value of Δ W = − eΦ s = − eV * = (145 ± 10) meV from Ag(111) to PTCDA/Ag(111). We compare the workfunction determined by SQDM to results obtained from density functional theory (DFT) calculations. We employ a fully self-consistent implementation, \({\mathrm{vdW}}_{{\mathrm{sc}}}^{{\mathrm{surf}}}\) (ref. 38 ), of the Tkatchenko–Scheffler vdW surf functional 39 , in combination with PBE 40 (see Methods ). The calculated workfunction shift between Ag(111) and PTCDA/Ag(111) is Δ W = 90 meV. Compared to PBE, which predicts a value of 240 meV, \({\mathrm{vdW}}_{{\mathrm{sc}}}^{{\mathrm{surf}}}\) yields an improved though not perfect agreement with the experimental value. This result stresses the fundamental importance of van der Waals interactions for electronic processes at molecule–metal interfaces and, on a more general note, shows that SQDM is able to set benchmarks for the development of ab initio theory. Turning back to the experiment, the curves in Fig. 1e also reveal the sharpness with which we measure the potential distribution at the island edge. To put our experimental V *( r || ) profile into perspective, we compare it with two extreme cases: (1) a simulated estimate for a corresponding classical KPFM experiment 13 (grey solid curve) in which the entire tip acts as a sensor for the electric potential; (2) a simulation of the measurement with an idealized hypothetical point-like sensor for electrostatic potentials (red dotted curve in Fig. 1e ; also compare Fig. 2a ). Although it is expected that the V * resolution of SQDM, because of its nanoscopic sensor, is superior to KPFM, the profile we observe in experiments is even sharper than that of the ideal point probe. This surprising finding asks for a closer analysis of the physics behind the PSF γ *, which, according to equation ( 4 ), determines how a step in the surface potential Φ s is smeared into V *. Knowing γ * will ultimately allow the reconstruction of arbitrary potential distributions \(\varPhi _{\mathrm{s}}({\bf{r}}_{||}^\prime )\) via equation ( 4 ). Fig. 2: Electrostatic screening and image deconvolution in SQDM. a , Simulated potential Φ QD above a charge/image–charge dipole on a conductive surface. A point-like potential probe at a lateral distance r ||, x is indicated by a cross. 
b , Cross-section through the potential Φ QD at the height of the point probe in a . c , Same as a , but with a second conductive plane (‘tip’) above the point probe. The potential distribution can now be simulated via an infinite series of image dipoles. Two images are indicated as dashed lines. The potential Φ QD at the point probe position is reduced compared to a . d , Cross-section through the potential Φ QD at the height of the probe in c . The log-scale plot reveals the exponential decay of Φ QD with lateral distance from the dipole. e , Illustration of the deconvolution process in which the Φ s image is recovered from the measured V * image using Φ QD ( r ||, x ) from d as the PSF γ *. f , Cross-section through the Φ s image of a CO molecule on Ag(111) as obtained from deconvolution. The full-width at half-maximum is only 1.6 nm. g , Simulation of an SQDM V * image for the hypothetical set-up without a tip ( a ). Simulation based on the image in i . h , Measured SQDM V * image (slope tracking controller, STC) of a complex surface with extended PTCDA islands and smaller nanostructures (compare Fig. 3 ). i , SQDM Φ s image as obtained from the V * image in h via deconvolution (see Methods ). All nanostructures are now equally well visible. g – i , Scale bars, 10 nm. Full size image Functional form of γ * and image deconvolution To analyse the shape of γ *, we must solve the boundary value problem for the chosen shape of \({\cal{T}}\) . To reach a generic solution, we also approximate the tip as planar, which is justified as the PtIr tips used in our experiments are rather blunt on the mesoscopic scale, which is also confirmed by our experimental results. Other tip shapes would not invalidate the general conclusions drawn here 30 . For Dirichlet conditions, the gradient \(\partial G({\bf{r}}_{||},{\bf{r}}_{||}^\prime )/\partial {\bf{n}}^\prime\) (and thus γ *) is proportional to the potential at r || of a test charge placed at \({\bf{r}}_{||}^\prime\) and shifted slightly into the volume enclosed by the grounded surface \({\cal{T}}\) , which creates a minimal perturbation of \(\varPhi _{\mathrm{s}}({\bf{r}}_{||}^\prime )\) . The respective \(\gamma _{{\mathrm{PP}}}^ \ast\) for grounded parallel planes separated by z t = z + d can then be calculated via an infinite series of image charges (Fig. 2c ), which screen the test charge placed at \(({\bf{r}}_{||}^\prime = 0,z_{\mathrm{c}} \ll z)\) . This series has no closed solution 41 , but there exists an asymptotic expression for large r || that clearly reveals an (even faster than) exponential decay of \(\gamma _{{\mathrm{PP}}}^ \ast\) with | r || | (ref. 42 ): $$\gamma _{{\mathrm{pp}}}^ \ast (|{\bf{r}}_{||}|,z,z_{\mathrm{t}}) \propto \sqrt {\frac{8}{{|{\bf{r}}_{||}|z_{\mathrm{t}}}}} {\mathrm{sin}}\left( {\frac{z}{{z_{\mathrm{t}}}}\uppi } \right){\mathrm{sin}}\left( {\frac{{z_{\mathrm{c}}}}{{z_{\mathrm{t}}}}\uppi } \right){\rm{e}}^{ - \frac{\uppi }{{z_{\mathrm{t}}}}|{\bf{r}}_{||}|}$$ (7) To put this result into perspective, we compare it with the PSF of a hypothetical ideal point probe (instead of tip and QD) (Fig. 2a ), which behaves as \({\mathbf{p}} \cdot {\hat{\mathbf r}}/r^2\) (Fig. 2b ), as expected for the test-charge/image-charge dipole. The exponential decay of the PSF γ * (which is not exclusive to the parallel-plane approximation) puts the sensing of electrostatic potentials via SQDM in line with STM, for which the tunnelling probability also decays exponentially with distance. 
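To make the reconstruction step concrete, the following sketch builds a discrete PSF from the asymptotic expression (7), capping its r = 0 divergence at one pixel, and inverts equation (4) by Tikhonov-regularized Fourier division. This is our minimal stand-in for the deconvolution referred to in the Methods, not the authors' algorithm; the grid, the z_c value, and the regularization weight are assumptions.

```python
import numpy as np

def psf_parallel_planes(n, px, z, z_t, z_c):
    """Normalized PSF gamma* on an n x n grid of pixel size px, from the
    asymptotic parallel-plane expression, eq. (7). All lengths share one
    unit (e.g. angstroms); r is floored at one pixel because the
    asymptote diverges at r = 0."""
    ax = (np.arange(n) - n // 2) * px
    xx, yy = np.meshgrid(ax, ax)
    r = np.maximum(np.hypot(xx, yy), px)
    g = (np.sqrt(8.0 / (r * z_t)) * np.sin(np.pi * z / z_t)
         * np.sin(np.pi * z_c / z_t) * np.exp(-np.pi * r / z_t))
    return g / g.sum()

def deconvolve_v_star(v_star, psf, reg=1e-2):
    """Invert eq. (4), V* = Phi_s convolved with gamma*, by regularized
    division in Fourier space; psf must have the same shape as v_star."""
    h = np.fft.fft2(np.fft.ifftshift(psf))
    v = np.fft.fft2(v_star)
    phi_s = np.fft.ifft2(v * np.conj(h) / (np.abs(h) ** 2 + reg))
    return phi_s.real

# Example kernel: tip height z = 20 A, tip plane at z_t = 25 A (illustrative).
psf = psf_parallel_planes(n=256, px=2.0, z=20.0, z_t=25.0, z_c=0.5)
```

Like the STM tunnelling current, the kernel's weight is concentrated within a few z_t/π of the probe, which is what keeps this inversion comparatively well behaved even though deconvolution is ill-posed.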
In both cases the result is a superior lateral resolution, because the influence of objects that are not immediately beneath the probe is strongly suppressed (compare Fig. 2g,h ). Remarkably, in SQDM this is achieved in spite of the long range of electrostatic fields, while the tunnelling in STM is intrinsically short-ranged. Perpendicular to the surface there is, however, no exponential decay in SQDM. This preserves its superior sensitivity at large tip heights. Beyond the tip height dependence (equation ( 7 )), γ * depends only weakly on details of the tip. If anything, blunter tips yield a higher resolution, as their screening is stronger. This is in marked contrast to KPFM 43 . Knowing \(\gamma _{{\mathrm{PP}}}^ \ast\) , we can now obtain Φ s by a deconvolution of V * (equations ( 4 ) and ( 6 ) and Fig. 2e,f,i ; see Methods ). The second SQDM measurement quantity α rel (equation ( 5 )) is not related to Φ s but to the shape of \({\cal{T}}\) via equation ( 3 ). Considerations summarized in the Methods and detailed in ref. 30 show that the dielectric topography \(t_{\mathrm{d}}({\bf{r}}_{||}^\prime )\) can be obtained from α rel ( r || ) via deconvolution, in the same way as Φ s is obtained from V *. An example image and its interpretation We demonstrate the quantitative imaging of electric surface potentials and dielectric topography for a Ag(111) surface on which Ag adatoms, CO molecules and PTCDA molecules have been deposited. As the STM image Fig. 3a reveals, the surface (in a manner of speaking a ‘nanotechnology construction site’) contains various types of nanostructures: CO molecules, AgCO complexes, subsurface defects, a compact PTCDA monolayer, Ag adatoms on the PTCDA layer and PTCDA–Ag 2 complexes that were made by controlled manipulation 44 . Fig. 3: SQDM images of nanostructures on Ag(111). a , STM image ( V b = 20 mV) showing various adsorbates and defects on a Ag(111) surface. b , SQDM Φ s image (STC) of the area in a recorded at z = 20 Å. All features from a are visible; however, the mobile CO molecules have moved to different locations. The thin black line traces the PTCDA island edge from a . The rows of inequivalent PTCDA molecules on the island edges and within the island are resolved. See Supplementary Information for corresponding V ± images. c , SQDM dielectric topography image of the area in a . The absolute scale is obtained from calibration measurements of α rel ( z ). The subsurface defects do not exhibit any topographic feature. Image size in a – c , 60 × 60 nm 2 . d , Cutouts from several SQDM images of individual nanostructures including a PTCDA–Ag 2 complex 44 . Scale bar, 1 nm. The colour scale refers to b and d . Full size image The surface potential Φ s obtained by deconvolution (Fig. 3b ) reveals a rich electrostatic landscape in which all features of the STM image appear as sources of complex patterns. Particularly remarkable is the deeply structured edge of the PTCDA island, and even inside the PTCDA layer a regular pattern is discernible. The fringes and granular structures are deconvolution artefacts that arise because deconvolution is an ill-posed problem. Note that the colour scale in Fig. 3b,d is calibrated as the surface potential Φ s , as the workfunction change Δ W and as the surface dipole density Π ⊥ , all measured relative to the bare Ag surface. The three quantities are related straightforwardly by Δ W = − eΦ s = − eΠ ⊥ / ε 0 , where the second equality is the Helmholtz equation 45 . The surface potential of individual objects in Fig. 
3d reveals, in addition to the vertical dipole density Π ⊥ that is colour-coded, lateral multipoles such as the quadrupole moment of PTCDA and the dipole moment of PTCDA–Ag 2 . Evidently, this information, which is not available from STM, helps with the identification of nanoscale objects. Their identification is further supported by the dielectric topography in Fig. 3c . We note that the dielectric topography of all nanostructures has the same sign as in the STM image (protrusion or depression compared to bare Ag(111)). This confirms our interpretation of t d as a (dielectric) topography. In fact, the absence of a topographic signal for certain defects allows us to classify them as subsurface defects, which, by their electric potential (clearly imaged in Fig. 3b ), scatter surface state electrons (as is obvious from Fig. 3a ). We also note the appearance of CO molecules as depressions in the dielectric topography image. This is a direct consequence of the pushback effect by which the molecule depletes the spill-out electron density of the metal substrate. The dielectric topography and the surface potential also show a number of small adsorbates on the edge of the PTCDA island that are barely visible in the STM image. On the basis of high-resolution images such as the ones in Fig. 3d we calculate surface dipole moments P ⊥ of individual nanostructures by two-dimensional (2D) integration. The results are shown in Fig. 4a . We find that adsorption of CO on Ag(111) increases the dipole of CO to 0.75 D beyond its gas-phase value of 0.12 D, while CO adsorption on an Ag adatom on Ag(111) (which itself has a dipole of 0.66 D) results in a complex with a dipole moment of 1.65 D. The large positive dipole for the AgCO complex is of particular relevance, because this structure comes very close to the situation when a CO molecule is attached to the apex of an SPM tip, a common method to enhance image resolution 46 , 47 . It has been conjectured before that the dipole of this CO-functionalized tip plays a crucial role in the correct interpretation of the corresponding images 21 , 48 , 49 , 50 . Here we report a measurement of the dipole moment in a situation that closely resembles the adsorption of CO on the exposed tip apex. Another interesting effect in Fig. 4a is the depolarization of the PTCDA surface dipole once a molecular island is formed. The interaction between the parallel dipoles leads to a mutual reduction from −0.65 D for an isolated PTCDA molecule to −0.45 D as the intermolecular distance is decreased to ~1 nm. Fig. 4: Surface dipoles of selected nanostructures and dipole density within a layer. a , Surface dipole moments of adatoms, molecules or complexes with indicated error bars and DFT simulation results. Error bars approximate the uncertainty due to fringing artefacts in the deconvolved images. b , Cutout STM (bottom) and SQDM Φ s image (top) from Fig. 3 . Black lines indicate adjacent rows of identically adsorbed PTCDA molecules. Scale bar, 5 nm. c , Enlarged view of the island border region marked in b . The orientation and size of PTCDA molecules at the border is indicated. Scale bar, 1 nm. d , The line profile through the Φ s image along the blue arrow in b reveals a modulation caused by rows of inequivalent PTCDA molecules. Black vertical lines correspond to the rows marked in b . 
Full size image Dipoles that are created by an adsorbate on a metal surface result from a delicate interplay between the original charge distribution in the adsorbate, (de)polarization effects, charge transfers and the deformation of the metal charge density. Their measurement thus provides a sensitive benchmark to validate our ability to describe these processes quantitatively. Therefore, we compare our experimental results with DFT simulations. The dipoles computed with the \({\mathrm{PBE}} + {\mathrm{vdW}}_{{\mathrm{sc}}}^{{\mathrm{surf}}}\) functional for CO, the Ag adatom and the AgCO complex, adsorbed on Ag(111), are −0.30 D, +0.70 D and +1.66 D, respectively, in good overall agreement with experiment, with the well-known exception of CO (refs. 51 , 52 ). The close-up image in Fig. 4b reveals a modulation of the surface potential Φ s above the PTCDA island. Its periodicity of 1.7 nm matches the arrangement of inequivalent molecules in the PTCDA/Ag(111) unit cell. In the STM image in Fig. 3a the inequivalent molecules appear with different brightnesses. From scanning tunnelling spectroscopy it is known that charge transfer upon adsorption into the LUMO (lowest unoccupied molecular orbital) is smaller for the bright molecule than for the dark one 53 . However, because it is also known that the brighter molecule adsorbs at a larger height above the surface 54 , it is not clear which of the two has the larger dipole in the end (a larger charge transfer has the tendency to increase the charge transfer dipole, but so does a larger adsorption height, which moreover will lead to a smaller pushback dipole of opposite direction). The line profile and the lateral grid in Fig. 4b,d reveal that the bright molecule has a lower inward pointing dipole density than the dark one. This shows that the charge transfer dipole at the PTCDA/Ag(111) interface is stronger than the pushback dipole, and that the larger charge transfer into the dark molecule overcompensates the effect of its smaller adsorption height. A further striking feature of the PTCDA island is the strong modulation of the surface potential at its boundary. Comparing to a real-space model of the molecular layer, we can assign this structure to the uncompensated quadrupoles of PTCDA at the border where molecules alternately expose positively (hydrogen) or negatively (oxygen) charged groups. This strong electrostatic corrugation is an important reason for the preferential adsorption of small contaminants at the island edges, as found in Fig. 3 . Outlook SQDM offers a fresh way to look at the nanoscale world. The formalism of SQDM, based on a Dirichlet boundary value problem, is now fully clarified. We illustrate the power of the method by presenting workfunction, surface dipole and electric potential measurements. A dedicated SQDM controller simplifies the recording of SQDM images to a point where SQDM requires no more effort than other atomic-resolution scanning probe techniques. SQDM with a PTCDA quantum dot can easily be applied to other materials like NaCl, which can be prepared with submonolayer coverage on Ag 55 (see Supplementary Information ). An even wider range of applications can be reached through the use of quantum dots that are lithographically fabricated at the end of the tip. The large tip–surface separations at which SQDM can operate also make it particularly promising for the study of rough surfaces or, for example, biomolecules with a distinct 3D structure. 
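The dipole moments quoted above follow from the deconvolved Φ s images by a single weighted sum. A minimal sketch of that 2D integration, assuming only a known pixel size, applies the Helmholtz relation Π ⊥ = ε 0 Φ s and sums over the nanostructure footprint.

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
DEBYE = 3.33564e-30       # C*m per debye

def surface_dipole_debye(phi_s, px_m):
    """Perpendicular dipole moment P_perp of a nanostructure, in debye.

    phi_s: 2D array of the deconvolved surface potential (volts),
    cropped around the object with the bare-surface background at 0 V;
    px_m: pixel edge length in metres.
    Uses Pi_perp = eps0 * Phi_s, then P_perp = sum(Pi_perp) * pixel area.
    """
    return EPS0 * phi_s.sum() * px_m ** 2 / DEBYE
```

With the sign convention of the text, a positive integral corresponds to an outward-pointing, workfunction-lowering dipole such as that of the Ag adatom, and a negative one to the inward dipole of PTCDA.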
Methods SQDM controller SQDM imaging requires two images ( V + and V − ) of the same sample area to be recorded one after another (compare Fig. 1b ). We developed and used two different types of controller for SQDM, both of which provide a voltage Δ V that is added to V b to track and compensate the changes in V + or V − as the tip scans the surface. Both were implemented on commercial rapid-control-prototyping hardware from dSPACE. They are two-degree-of-freedom (2DOF) controllers with a feedback and a feedforward (FF) part, and they differ mainly in which feature of the Δ f dip they track. For the first 2DOF controller, the feedback part is an extremum seeking controller (ESC) 56 tracking the minimum in Δ f ( V b + Δ V ), while the second controller is an integral controller that tracks a specific Δ f value at the slope of each peak (STC). The ESC computes the derivative dΔ f /d V b via a small modulation of V b with a frequency f mod and adjusts Δ V such that the derivative remains zero. The advantage of this controller is its robustness. The disadvantage is its slower speed, as the small Δ f detection bandwidth requires f mod to be equally small. The STC tracks a Δ f reference value on the slope of the dip and uses the Δ f deviation from this reference as the error signal to an integral controller. It is significantly faster than the ESC but could potentially introduce small systematic errors when the dips change their position on a non-constant background (Kelvin parabola). In the experiments presented here this error was, however, estimated to be below 1 mV. To maximize the scanning speed, the FF part of both controllers uses the previous scan line as a reference for the current line. In this way, the feedback controller has to regulate only the difference between the previous and the current line to zero. Because at least one scan line (the reference) has to be acquired without the FF and thus more slowly, the scan speed can be adjusted during image acquisition. Using the STC, the total acquisition time of the V ± image information for Fig. 3 was 2 h. More details on the 2DOF controllers are given elsewhere 27 . DFT simulations All the calculations presented in this work are obtained with a fully self-consistent implementation of the Tkatchenko–Scheffler \({\mathrm{vdW}}_{{\mathrm{sc}}}^{{\mathrm{surf}}}\) functional 38 , in combination with PBE in the full-potential all-electron code FHI-aims. The metal surfaces presented in this work are built with six Ag layers. A vacuum of 60 Å is used to prevent spurious interactions between periodic images. We used a (6 × 6) unit cell with a total of 216 silver atoms. The molecule/atom is placed in the centre. Several unit cells have been tested; the one we employ prevents any spurious lateral interaction. During the relaxation, the topmost metal layer and the molecule are allowed to relax. We set the convergence criterion to 0.01 eV Å^−1 for the maximum force and employ a Monkhorst–Pack grid of (2 × 2 × 1). The energy calculations are performed with a grid of (4 × 4 × 1) k -points. For all calculations, we set a convergence criterion of 10^−5 electrons for the electron density, 10^−6 eV for the total energy and 10^−3 eV for the sum of eigenvalues. Our calculations include scalar relativistic effects via the scaled zeroth-order regular approximation (ZORA). The value of the dipole is linked to the change in electron density due to molecular adsorption. First, we compute the induced electron density as a function of z .
This is the (2D integrated) difference between the electron density of the whole system and the densities of the isolated components, that is, the metallic surface and the molecule/atom. Second, the delocalized charge is defined as the integral of the induced density along the z axis. Finally, the change in the potential energy is computed with a second integration along the same direction. In other words, the induced dipole is obtained by solving the 1D Poisson equation along the z direction. Dielectric topography To analyse variations in α rel , we drop the assumption of a planar surface. Recalling the boundary value problem, topographic features in the region around r ′ affect the screening of the test charge (at r ′) and thus its potential at r , which is proportional to γ . For example, more material around r ′ (if r ′ itself is in a depression) increases the screening and thus decreases γ (Fig. 1c ). As the screening depends on the given material’s polarizability, dielectric nanostructures screen more weakly than metallic ones. Because modifications of the substrate’s metallic charge density by an adsorbate are effectively modifications in the metallic topography, they must be added to the screening effect of the (typically dielectric) adsorbate itself. Adsorbates that push back the metal charge density could thus effectively appear as depressions as is reported for CO molecules in Fig. 3c . Leaking of metallic charge into the adsorbate via hybridization has the opposite effect. We lump all screening effects, originating from variations in real topography and dielectric properties, into a single dielectric topography t d that equals the effective metallic surface topography that would cause the observed variations in α rel . To obtain t d from α rel , we need to analyse the properties of γ (equation ( 3 )). The non-local screening of the test charge makes \(\gamma ({\bf{r}}_{||},{\bf{r}}_{||}^\prime )\) a functional of \(t_{\mathrm{d}}({\bf{r}}_{||}^{\prime\prime} )\) which depends on t d in an entire region of the surface. Here, we disregard this aspect, while a comprehensive analysis is given elsewhere 30 . Without non-locality, γ can be expressed as a function \(\gamma ({\bf{r}}_{||},{\bf{r}}_{||}^\prime ) = f({\bf{r}}_{||},{\bf{r}}_{||}^\prime ,t_{\mathrm{d}}({\bf{r}}_{||}^\prime ))\) . Inserting the first-order Taylor expansion of f with respect to t d into equation ( 3 ) and dividing by α 0 yields the desired relation in which α rel is a convolution of t d with a PSF \(\gamma _{{\mathrm{topo}}} = \partial f/\partial t_{\mathrm{d}}|_{t_{\mathrm{d}} = 0}\) : $$\alpha _{{\mathrm{rel}}}({\bf{r}}_{||}) = \frac{1}{{\alpha _0}}\mathop {\iint}\limits_{{\mathrm{sample}}} f ({\bf{r}}_{||},{\bf{r}}_{||}^\prime ,0){\mathrm{d}}^2{\bf{r}}_{||}^\prime + \frac{1}{{\alpha _0}}\mathop {\iint}\limits_{{\mathrm{sample}}} {\gamma _{{\mathrm{topo}}}} ({\bf{r}}_{||},{\bf{r}}_{||}^\prime )t_{\mathrm{d}}({\bf{r}}_{||}^\prime ){\mathrm{d}}^2{\bf{r}}_{||}^\prime$$ As \(f({\bf{r}}_{||},{\bf{r}}_{||}^\prime ,0)\) corresponds to γ in the absence of any topography t d , the first integral equals α 0 and the first term is therefore simply 1. We can obtain the shape of γ topo from the consideration that a local topographic feature (that is, a small polarizable object) in the homogeneous field above the otherwise flat sample (Taylor expansion of f ) represents nothing else but a local dipole with a moment proportional to V b . 
Hence, the contribution of such a feature to α is similar to that of an actual dipole to Φ QD . Therefore, γ topo has the same shape as \(\gamma _{{\mathrm{PP}}}^ \ast\) \(\left( {\gamma _{{\mathrm{topo}}} \propto \gamma _{{\mathrm{pp}}}^ \ast } \right)\) , which is fully borne out by a finite element simulation 30 . The norm of γ topo is found by using a calibration function g ≈ ( α rel ( z + Δ z ) − α rel ( z ))/Δ z , which is experimentally obtained by varying z by Δ z . We use a Δ z value of 1 Å. Typical experimental values are in the range of g ≈ 0.03 Å −1 . Deconvolution For the deconvolution of V * we calculate the PSF \(\gamma ^ \ast (|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |,z)\) . To be suitable for deconvolution on an m × n image, the PSF is stored as a k × k kernel matrix K with k = 2 l + 1, where k and l are integers. To enable a correct treatment of pixels close to the image borders, we expand the (deconvolved) image to a size of ( m + 2 l ) × ( n + 2 l ) by adding l rows or columns on each side of the original image. Rows of K correspond to the x component of \({\bf{r}}_{||} - {\bf{r}}_{||}^\prime\) , and columns correspond to the y component. Note that the value of matrix elements in K only depends on the distance \(|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |\) . The kernel matrix always has the same mesh size as the SQDM V * and α rel images to be deconvolved (here typically 1–3 Å per pixel). The sum over all k 2 kernel matrix elements is normalized to 1 as required for γ *. The size k of the kernel matrix is chosen such that the values at its edges are sufficiently close to zero, \({K}_{0l} \lesssim 5 \times 10^{ - 5}{K}_{ll}\) . Due to the exponential decay of the PSF with lateral distance, this is already the case for kernels that measure only 150 × 150 Å 2 . The elements of \(K(|{\bf{r}}_{||} - {\bf{r}}_{||}^\prime |,z)\) are computed via the image charge method as discussed in the text (Fig. 2c ). We use a point charge placed 1 Å above the surface and 100,000 image charges, which results in a full convergence of the potential between the tip and surface planes in the relevant lateral distance range. The matrix elements are then calculated as the lateral potential distribution d = 7 Å beneath the tip plane 25 . Because SQDM scans are constant height scans, the effective QD–sample separation z − t d ( r || ) varies with t d , which, in turn, leads to variations of the actual PSF (cf. equation ( 7 )). As one consequence, identical nanostructures on substrate terraces of different height will not show the same V * image contrast. We compensate this effect during deconvolution by dynamically adjusting the PSF to the local t d ( r || ) value. To allow this, we calculate an entire set of K matrices for a range of tip–surface distances. We assume a locally planar surface of height t d , which can then be obtained by inverting the relation α rel = 1 + gt d with a value for g obtained via the calibration procedure described in the 'Dielectric topography' section. This t d ( r || ) = ( α rel ( r || ) − 1)/ g is then used with the given z to select the correct K for each pixel in the V * image. Thus, we retain the model of a planar sample surface for which γ * can be easily computed and has axial symmetry, but we appropriately adjust the separation between both planes when determining the PSF for each pixel. A rigorous derivation of the underlying formalism is given elsewhere 30 . 
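A minimal numerical sketch of this kernel construction is given below, assuming Ångström units throughout. The function name and default grid parameters are illustrative, and the image-charge series is truncated at a smaller count than the 100,000 charges used in the study (the alternating series converges slowly, so the truncation should be checked):

import numpy as np

def psf_kernel(z_tip, k=75, pixel=2.0, z_charge=1.0, depth=7.0, n_img=5000):
    """Kernel matrix K approximating the SQDM point-spread function.

    A unit test charge sits z_charge above the grounded sample plane
    (z = 0); the grounded tip plane lies at z = z_tip. The boundary
    conditions are satisfied by mirror charges: positive ones at
    2*n*z_tip + z_charge and negative ones at 2*n*z_tip - z_charge.
    The kernel is the lateral potential distribution evaluated a
    distance `depth` beneath the tip plane, normalized so that all
    k*k elements sum to 1, as required for gamma*.
    """
    half = k // 2
    axis = (np.arange(k) - half) * pixel
    xx, yy = np.meshgrid(axis, axis)
    rho2 = xx**2 + yy**2
    z_eval = z_tip - depth
    v = np.zeros_like(rho2)
    for n in range(-n_img, n_img + 1):
        v += 1.0 / np.sqrt(rho2 + (z_eval - (2 * n * z_tip + z_charge))**2)
        v -= 1.0 / np.sqrt(rho2 + (z_eval - (2 * n * z_tip - z_charge))**2)
    return v / v.sum()

For the distance-dependent deconvolution described above, one such kernel would be precomputed for each tip–surface separation of interest and then selected per pixel according to the local t d value.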
We deconvolve V * and α rel images using an iterative nonlinear deconvolution algorithm with a Tikhonov–Phillips regularization term to suppress over-amplification of noise and other deconvolution artefacts. We found in empirical tests that for our SQDM images, regularization based on the L2 norm (like Tikhonov–Phillips) leads to better results than regularization using the L1 norm (like the popular total variation minimization approach). To apply the Tikhonov–Phillips regularization we modify the expression for χ 2 and add a term that measures the squared variation between neighbouring pixels in the deconvolved image as $$\chi ^2 = \mathop {\sum}\limits_{i = 1}^m {\mathop {\sum}\limits_{j = 1}^n {\left[ {\left( {P_{ij}^{{\mathrm{exp}}} - P_{ij}^{{\mathrm{conv}}}} \right)^2 + \lambda \left( {\left( {P_{ij} - P_{(i + 1)j}} \right)^2 + \left( {P_{ij} - P_{i(j + 1)}} \right)^2} \right)} \right]} }$$ We have found that good deconvolution results are obtained with a value of λ = 0.0036 s^−2 , where s measures the image resolution in Å per pixel. This scaling is applied to enable deconvolution results that are independent of image resolution, because images with higher resolution are automatically smoother on a pixel-by-pixel level. Code availability The custom code that was used for the deconvolution in this study is available from the corresponding author upon reasonable request. | A team of researchers from Jülich in cooperation with the University of Magdeburg has developed a new method to measure the electric potentials of a sample at atomic accuracy. Using conventional methods, it was virtually impossible until now to quantitatively record the electric potentials that occur in the immediate vicinity of individual molecules or atoms. The new scanning quantum dot microscopy method, which was recently presented in the journal Nature Materials by scientists from Forschungszentrum Jülich together with partners from two other institutions, could open up new opportunities for chip manufacture or the characterization of biomolecules such as DNA. The positive atomic nuclei and negative electrons of which all matter consists produce electric potential fields that superpose and compensate each other, even over very short distances. Conventional methods do not permit quantitative measurements of these small-area fields, which are responsible for many material properties and functions on the nanoscale. Almost all established methods capable of imaging such potentials are based on the measurement of forces that are caused by electric charges. Yet these forces are difficult to distinguish from other forces that occur on the nanoscale, which prevents quantitative measurements. Four years ago, however, scientists from Forschungszentrum Jülich discovered a method based on a completely different principle. Scanning quantum dot microscopy involves attaching a single organic molecule—the quantum dot—to the tip of an atomic force microscope. This molecule then serves as a probe. "The molecule is so small that we can attach individual electrons from the tip of the atomic force microscope to the molecule in a controlled manner," explains Dr. Christian Wagner, head of the Controlled Mechanical Manipulation of Molecules group at Jülich's Peter Grünberg Institute (PGI-3). The researchers immediately recognized how promising the method was and filed a patent application. However, practical application was still a long way off. "Initially, it was simply a surprising effect that was limited in its applicability. That has all changed now.
Not only can we visualize the electric fields of individual atoms and molecules, we can also quantify them precisely," explains Wagner. "This was confirmed by a comparison with theoretical calculations conducted by our collaborators from Luxembourg. In addition, we can image large areas of a sample and thus show a variety of nanostructures at once. And we only need one hour for a detailed image." The Jülich researchers spent years investigating the method and finally developed a coherent theory. The reason for the very sharp images is an effect that permits the microscope tip to remain at a relatively large distance from the sample, roughly two to three nanometres—unimaginable for a normal atomic force microscope. Dr. Christian Wagner with a model of the PTCDA molecule, which serves as a quantum dot. Credit: Forschungszentrum Jülich / Sascha Kreklau In this context, it is important to know that all elements of a sample generate electric fields that influence the quantum dot and can therefore be measured. The microscope tip acts as a protective shield that dampens the disruptive fields from areas of the sample that are further away. "The influence of the shielded electric fields thus decreases exponentially, and the quantum dot only detects the immediate surrounding area," explains Wagner. "Our resolution is thus much sharper than could be expected from even an ideal point probe." The Jülich researchers owe the speed at which the complete sample surface can be measured to their partners from Otto von Guericke University Magdeburg. Engineers there developed a controller that helped to automate the complex, repeated sequence of scanning the sample. "An atomic force microscope works a bit like a record player," says Wagner. "The tip moves across the sample and pieces together a complete image of the surface. In previous scanning quantum dot microscopy work, however, we had to move to an individual site on the sample, measure a spectrum, move to the next site, measure another spectrum, and so on, in order to combine these measurements into a single image. With the Magdeburg engineers' controller, we can now simply scan the whole surface, just like using a normal atomic force microscope. While it used to take us 5-6 hours for a single molecule, we can now image sample areas with hundreds of molecules in just one hour." There are some disadvantages as well, however. Preparing the measurements takes a lot of time and effort. The molecule serving as the quantum dot for the measurement has to be attached to the tip beforehand—and this is only possible in a vacuum at low temperatures. In contrast, normal atomic force microscopes also work at room temperature, with no need for a vacuum or complicated preparations. And yet, Prof. Stefan Tautz, director at PGI-3, is optimistic: "This does not have to limit our options. Our method is still new, and we are excited for the first projects so we can show what it can really do." There are many fields of application for quantum dot microscopy. Semiconductor electronics is pushing scale boundaries in areas where a single atom can make a difference for functionality. Electrostatic interaction also plays an important role in other functional materials, such as catalysts. The characterization of biomolecules is another avenue. Thanks to the comparatively large distance between the tip and the sample, the method is also suitable for rough surfaces—such as the surface of DNA molecules, with their characteristic 3-D structure. | 10.1038/s41563-019-0382-8
Biology | Scientists can now design new proteins from scratch with specific functions | Po-Ssu Huang et al, The coming of age of de novo protein design, Nature (2016). DOI: 10.1038/nature19946 Journal information: Nature | http://dx.doi.org/10.1038/nature19946 | https://phys.org/news/2020-02-scientists-proteins-specific-functions.html | Abstract There are 20^200 (roughly 10^260) possible amino-acid sequences for a 200-residue protein, of which the natural evolutionary process has sampled only an infinitesimal subset. De novo protein design explores the full sequence space, guided by the physical principles that underlie protein folding. Computational methodology has advanced to the point that a wide range of structures can be designed from scratch with atomic-level accuracy. Almost all protein engineering so far has involved the modification of naturally occurring proteins; it should now be possible to design new functional proteins from the ground up to tackle current challenges in biomedicine and nanotechnology. Main Proteins mediate the fundamental processes of life, and the beautiful and varied ways in which they do this have been the focus of much biomedical research for the past 50 years. Protein-based materials have the potential to solve a vast array of technical challenges. Functions that naturally occurring proteins mediate include: the use of solar energy to manufacture complex molecules; the ultrasensitive detection of small molecules (olfactory receptors 1 ) and of light (rhodopsin 2 ); the conversion of pH gradients into chemical bonds (ATP synthase 3 ); and the transformation of chemical energy into work (actin and myosin 4 ). Not only are these functions remarkable but they are encoded in sequences of amino acids with extreme economy. Such sequences specify the three-dimensional structure of the proteins, and the spontaneous folding of extended polypeptide chains into these structures is the simplest case of biological self-organization. Despite the advances in technology of the past 100 years, human-made machines cannot compete with the precision of function of proteins at the nanoscale and they cannot be produced by self-assembly. The properties of naturally occurring proteins are even more remarkable when considering that they are essentially accidents of evolution. Instead of a well-thought-out plan to develop a machine to use proton flow to convert ADP to ATP, selective pressure operated on randomly arising variants of primordial proteins, and there were also hundreds of millions of years in which to get it right. In this Review, we propose that if the fundamentals of protein folding and protein biochemistry and biophysics can be understood, it should become possible to design from the ground up a vast world of customized proteins that could both inform basic knowledge of how proteins work and address many of the important challenges that society faces. We focus specifically on the problem of de novo protein design: the generation of new proteins on the basis of physical principles with sequences unrelated to those in nature. We describe the methodological advances that underlie progress in de novo protein design as well as provide an overview of the diversity of designed structures for which the high-resolution X-ray crystallography structure or nuclear magnetic resonance (NMR) structure is in atomic agreement with the design model.
Almost all protein engineering so far has involved the modification of naturally occurring proteins to tune or alter their function using techniques such as directed evolution 5 , 6 , 7 , which involves cycles of generating and selecting variation in the laboratory. Because these efforts have been extensively reviewed 8 , 9 and are essentially extensions of evolutionary processes, they will not be discussed here. It is useful to begin by considering the fraction of protein sequence space that is occupied by naturally occurring proteins ( Fig. 1a ). The number of distinct sequences that are possible for a protein of typical length is 20^200 sequences (because each of the protein's 200 residues can be one of 20 amino acids), and the number of distinct proteins that are produced by extant organisms is on the order of 10^12. Evidently, evolution has explored only a tiny region of the sequence space that is accessible to proteins. And because evolution proceeds by incremental mutation and selection, naturally occurring proteins are not spread uniformly across the full sequence space; instead, they are clustered tightly into families. The huge space that is unlikely to be sampled during evolution is the arena for de novo protein design. Consequently, evolutionary processes are not a good guide for its exploration — as discussed already, they proceed incrementally and at random. Functional folded proteins have been retrieved from random-sequence libraries 10 , 11 , 12 but this is a laborious (and non-systematic) process. Instead, it should be possible to generate new proteins from scratch on the basis of our understanding of the principles of protein biophysics. Figure 1: Methods for de novo protein design. a , A schematic of the protein sequence space. Evolution has sampled only a tiny fraction of the total possible sequence space (blue), and the incremental nature of evolution results in tightly clustered families of native proteins (beige), which are analogous to archipelagoes in a vast sea of unexplored territory. Directed evolution is restricted to the region of sequence space that surrounds native proteins, whereas de novo protein design can explore the whole space. b , Structure prediction, fixed-backbone design and de novo protein design are global optimization problems with the same energy function but different degrees of freedom. In structure prediction, the sequence is fixed and the backbone structure is unknown; in fixed backbone protein design, the sequence is unknown but the structure is fixed; and in de novo protein design, neither is known. c , Example of an energy landscape generated from fixed-sequence protein-structure prediction calculations. The red dots represent lowest-energy structures from independent Monte Carlo trajectories, which are plotted according to their similarity to the target structure (black dot) along the x axis; structural similarity is measured by root-mean-square deviation (r.m.s.d.). In de novo design efforts, designed sequences for which the calculations converge on the target designed structure are selected for experimental characterization. d , Blind, de novo structure prediction (left) for the critical assessment of protein structure prediction round 11 (CASP11) target T0806, which has no sequence similarity to any protein of known structure, using coevolution-derived contact constraints 27 . The crystal structure (Protein Data Bank accession code 5CJA ) is shown for comparison (right).
The ability to predict the structure of proteins with new folds with this level of accuracy enables large-scale structural genomics by means of computer calculation rather than experiment. Full size image Our approach is built on the hypothesis that proteins fold into the lowest energy states that are accessible to their amino-acid sequences, as originally proposed by Christian Anfinsen 13 . Given a suitably accurate method for computing the energy of a protein chain, as well as methods for sampling the space of possible protein structures and sequences, it should be possible to design sequences that fold into new structures. There are two challenges in implementing this approach: first, the energy of a system cannot be computed with perfect accuracy; and second, the space of possible structures and sequences is very large and therefore difficult to search comprehensively. In this Review, we describe the physical basis for the energy function used in the design calculations and the approaches that are used to overcome the sampling problem. The discussion is based on our experience of developing the Rosetta structure prediction and design methodology 14 ; other de novo protein design software is described elsewhere 15 , 16 , 17 . Considerable recent progress in protein design is attributable not only to the advances in understanding and computational methods that are the focus of this Review, but also to advances in two other areas. The first is computing: de novo protein design is computationally expensive, and the steady increase in the availability of computing power has greatly enabled the work that we describe, much of which was completed using volunteer computing through the Rosetta@home project. The second advance is the synthetic manufacture of DNA. Because the proteins that are being designed do not exist in nature, genes that encode their amino-acid sequences also do not exist. To produce designed proteins in an organism such as Escherichia coli , synthetic genes that encode the designed amino-acid sequences must first be manufactured. Methods for DNA synthesis have improved dramatically in the past 10 years, greatly reducing the cost of synthesizing genes for de novo designed proteins and increasing the number of computational designs that can be tested experimentally. Physical principles that underlie protein design The driving force for protein folding is the burial of hydrophobic residues in the protein's core, away from the solvent. To minimize the size of the cavity that the protein occupies in water, and to maximize van der Waals forces, the side chains in the core must be packed closely but without energetically unfavourable atomic overlaps. Polar groups that interact with the solvent in the unfolded state and that become buried upon protein folding must form intra-protein hydrogen bonds to compensate; otherwise, the large energy cost of stripping water will disfavour folding 18 . The hallmark features of globular protein structures follow from these considerations: α-helical and β-sheet secondary structures, in which the polar carbonyl and amide groups of the polypeptide backbone can form hydrogen bonds, assemble in such a way that non-polar side chains fit together like the pieces of a jigsaw puzzle to form densely packed cores.
Interactions of amino-acid side chains with neighbouring backbone atoms also contribute to the free energy of folding: these include hydrogen bonds at the termini of α-helices and steric and torsional effects that favour certain backbone geometries and disfavour others. For example, the amino acid proline has a rigid internal ring and is compatible with only a narrow range of backbones, whereas glycine, which lacks a side chain, enables tight bending of the backbone in loops between secondary structures. This picture of protein folding is implemented in an energy function that captures the interactions of the atoms in proteins with each other and with the solvent. The main contributors to this energy function are van der Waals forces that favour close atomic packing, steric repulsion, electrostatic interactions and hydrogen bonds, solvation and the torsion energies of backbone and side-chain bonds. Predicting and designing protein structures using such an energy function requires methods for sampling alternative backbone and side-chain conformations to identify structures and sequences with very low energy. Different methods are used for backbone and side-chain sampling ( Fig. 1b ). In side-chain sampling, discrete combinatorial optimization is used to identify amino acids and side-chain conformations (known as rotamers) that lead to low-energy, closely packed protein cores 19 , 20 , 21 , 22 . If the amino-acid sequence is known in advance, such as in the protein structure prediction problem (predicting the structure of a protein from its amino-acid sequence), the amino-acid identities have already been fixed and the search covers the discrete rotameric states of each side chain. But if the sequence is unknown, such as in the protein design problem (finding a sequence that folds into a specified structure), both the amino-acid identities and the rotameric states are sampled. Backbone sampling often frames the initial stages of the search as a discrete optimization problem by taking advantage of biases in the local sequence towards a subset of possible local structures. In the later stages of refinement, continuous optimization methods such as quasi-Newton minimization are used to fine-tune the packing and the electrostatic interactions and hydrogen bonding of the structure. Protein-structure prediction It is useful to first consider the ab initio structure prediction problem: finding the lowest energy structure for fixed amino-acid sequence in the absence of information about the structures of evolutionarily related proteins. Because the amino-acid sequence is fixed, side-chain combinatorial optimization covers only the various rotameric states and the backbone can be built from short fragments with similar local sequences 23 . An advantage of this approach is that sampling is very focused in regions where the local sequence strongly favours a particular local structure yet broad in regions where the local sequence is compatible with many conformations. It is still difficult to predict protein structures without homologues of known structure for all but the smallest proteins. The main challenge is the size of the backbone conformational space that must be sampled: the correct structure usually has a lower computed energy than all alternative structures, but it is very hard to find. However, if the sampling is guided by extra sources of information, such as co-evolution-based distance constraints 24 , 25 , structure-prediction calculations can find the native-state energy minimum ( Fig. 
1c ). In such cases, accurate, blind predictions of complex protein structures can be made 26 , 27 ( Fig. 1d ). De novo protein design Unlike in the structure-prediction and fixed-backbone design problems, in the general ( de novo ) protein design problem, both the sequence and the exact structure of the backbone are unknown ( Fig. 1b ). Given this, how do we effectively sample backbones from scratch? Because only a small proportion of backbone conformations can accommodate sequences with almost-perfect core packing and hydrogen bonding between the buried hydrogen-bond donors and acceptors, design calculations generally begin with a large set of (more than 10,000) alternative conformations. These initial backbones can be made either by assembling short peptide fragments 28 , 29 or by using algebraic equations to specify the geometry parametrically 30 , 31 , 32 , 33 , 34 , 35 . For each designed backbone conformation, combinatorial sequence-optimization calculations are used to identify the lowest-energy sequence for the structure. Ab initio structure-prediction calculations are then carried out to determine whether the designed structure is the lowest-energy state of the designed sequence — this is an important in silico consistency check. De novo designs are usually experimentally characterized only if structure-prediction calculations that start from the designed sequence strongly converge on the designed structure ( Fig. 1c ). Only a finite number of backbones can be sampled computationally. To tackle the important challenge of sequence-independent backbone construction, it is necessary to reduce the enormous space of possible backbone structures to those that are capable of being designed — that is, to those for which there is a reasonable probability that a sequence exists whose lowest-energy state is the structure. Progress towards this goal has required the investigation of sequence-independent constraints on backbone geometry. One such constraint comes from the connectivity of the polypeptide chain and the requirement that the polar atoms of the backbone either make hydrogen bonds within the chain in α-helices or β-sheets or come into contact with the solvent in exposed loops. This constraint immediately restricts the length of the secondary structures that are permitted for a given topology 36 . Another constraint comes from the limited flexibility of the polypeptide chain, which restricts the lengths of the loops that connect α-helices and β-sheets in various packing orientations 37 . Simulations and analyses of protein structures have revealed sequence-independent design principles that relate the lengths of helices, strands and loops when packed together and that greatly facilitate the construction of topologies that consist of α-helices and β-sheets 36 , 37 . Even with these constraints, the space of possible backbones is still large. To meet the twin goals of bringing the principles that underlie protein folding and structure into sharp focus and generating robust and stable scaffolds for future functional design efforts, much de novo protein-design work has placed an emphasis on designing ideal protein structures with unkinked α-helices and β-strands and minimal loops. By contrast, most naturally occurring proteins contain irregular, non-canonical features that arise either from selection for function or from neutral drift. Such features complicate the structural analysis of proteins and reduce the free energy of folding.
(During evolution, there was probably little pressure to optimize the free energy of folding beyond 8 kcal per mol, which corresponds to a folded-state population of more than 99.999%.) Ideal αβ folds A wide range of ideal αβ protein structures have been designed using the sequence-independent design principles 36 , 37 ( Fig. 2 ). The design approach consists of several steps. First, an overall topology 'blueprint' 29 that is consistent with the backbone design principles is created to specify the lengths, packing arrangement and order of the constituent α-helices and β-strands, as well as the lengths of the connecting loops. Second, protein backbones that are compatible with the blueprint are assembled from protein structure fragments using a Monte Carlo approach ( Fig. 2a ). Third, combinatorial rotamer optimization is used to identify a low-energy amino-acid sequence for each backbone. Fourth, alternating cycles of backbone relaxation and sequence optimization are performed to achieve a sequence–structure pair with very low energy. Last, sequences that converge on the corresponding designed structure in structure prediction calculations are tested experimentally. This design approach was applied to the idealized backbones shown in Fig. 2 . Synthetic genes encoding the new designed proteins were generated, and the proteins were produced in E. coli . The purified proteins were found to be extremely stable and had structures that were almost identical to those of the design models 28 , 36 , 37 , 38 ( Fig. 2 ). Figure 2: Designing αβ proteins. a , Sampling alternative backbones for a β-strand-turn-α-helix blueprint through fragment assembly. b–g , De novo designed ideal αβ proteins with high-resolution NMR or X-ray structures that are in very close agreement with design models 28 , 36 , 37 . b , Top7. c , Ferredoxin folds of varying shapes and sizes. d , Rossmann 2×2 folds. e , IF3-like fold. f , P-loop 2×2 fold. g , Rossmann 3×1 fold. h , Larger, more complex structures that were generated from domains in b and c 38 . Full size image Repeat proteins The effort to construct de novo proteins with ideal backbone arrangements has led to the design of proteins with internal symmetry in which a single idealized unit is repeated numerous times 39 , 40 , 41 ( Fig. 3 ). Internal symmetry reduces the size of the sequence space that must be searched and enables a relatively small unit with a known sequence–structure combination to be reused repeatedly to build larger proteins ( Fig. 3a ). The constraint of internal symmetry is particularly strong for closed structures in which the final repeat unit is juxtaposed with the first, such as in α-helical toroids 41 ( Fig. 3b ) and the TIM barrel 42 ( Fig. 3c ). In the TIM barrel, the backbone design principles, together with the geometry of closed β-sheets, make four-fold symmetry the highest that can be attained and force the two α-helices in each α–β–α–β unit to differ in length 42 . Both closed-repeat and open-repeat protein designs have been produced by introducing synthetic genes into E. coli , followed by experimental characterization of the purified proteins. High-resolution X-ray crystallography structures for the designs were found to be almost identical to the design models. The α-helical repeat structures have sequences and structures ( Fig. 3d ) that differ greatly from those found so far in nature, which suggests that naturally occurring proteins sample only a tiny fraction of the stable protein structures that can be realized 43 .
These new repeat proteins are exceptionally stable; several of the open structures are denatured only by guanidine hydrochloride at concentrations of more than 6 M (D. Barrick, personal communication). By contrast, an approach to 'stitch' protein structures together from large helix-containing fragments of naturally occurring proteins generates structures with irregularities that are similar to those found in native structures 44 and that present opportunities for the subsequent design of function. Contact information from native structures has also been used to guide the design of new backbone arrangements 45 , including a scaffold that presents an epitope from respiratory syncytial virus to elicit a neutralizing immune response 46 . Figure 3: Designing proteins with internal symmetry. a , The propagation of a single repeat unit generates a larger structure. b–d , De novo designed repeat proteins with high-resolution X-ray structures that are in very close agreement with design models. b , De novo α-helical toroids 41 . c , An ideal TIM barrel with four-fold symmetry. Packing features (white) and polar-fold determinants (pink spheres) are shown 42 . d , Tandem repeat proteins with a variety of twists and curvatures that go beyond the topologies that are observed in nature 43 . Full size image Parametric helical bundles The use of parametric equations is a complementary approach to generating ideal backbone arrangements that provides considerable control over the global structure. Equations developed by Francis Crick enable the generation of idealized bundles of α-helices in parallel or antiparallel orientations in which the helices have arbitrary lengths, phasing, relative orientations and twists 47 ( Fig. 4a ). The helical bundles can be used directly in sequence-design calculations, yielding multiple-subunit oligomeric structures, or the helices can first be connected with loops to yield a single chain. Many helical bundles have been designed in this way 30 , 31 , 33 , 34 , 48 , 49 , 50 , 51 , 52 ( Fig. 4 ), including a peptide that binds to carbon nanotubes 53 , parallel self-assembling helical channels 31 , an ion transporter 34 , cages 54 and an α-helical barrel with installed hydrolytic activity 55 . The combination of parametric backbone generation with combinatorial side-chain optimization has enabled the design of larger, more diverse helical bundles 33 ; like many de novo designed proteins, these parametrically designed proteins are extremely stable, remaining folded in 7 M guanidine hydrochloride at 95 °C. Figure 4: De novo design using parametric backbone generation. a , Parameters that describe helical bundle geometry. b , The first de novo designed helical bundles to be structurally validated: α 3 D (ref. 48 ) (left) and RH4 (ref. 30 ) (right), a right-handed coiled coil. c , Functional de novo helical bundles: a carbon nanotube-binding helix 53 (left), and a Zn2+ antiporter membrane protein (known as Rocker) 34 . d , Single-chain hyperstable helical bundles 33 : a right-handed four-helix bundle (left) and untwisted three-helix bundles (right). e , Homo-oligomeric single-ring helical bundles 31 , 33 , 51 , 52 . f , Homo-oligomeric de novo helical hairpins that form double-layered channels with hydrogen-bond network-mediated specificity 63 ; the polar networks are shown as expanded cross-sections. C n indicates an n -fold cyclic symmetry operation: for example, C2 structures are homodimers and C3 structures are homotrimers.
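To make the parametric route concrete, the sketch below generates idealized Cα traces of a parallel helical bundle from the standard Crick coiled-coil parameterization. It is a schematic illustration rather than the protocol used for the designs in Fig. 4: the parameter values are generic coiled-coil numbers, and the arc-length relation used to derive the superhelical frequency ω0 is one common convention.

import numpy as np

def crick_backbone(n_res=28, n_helix=2, r0=4.9, r1=2.26,
                   alpha=np.deg2rad(-12.0), omega1=4*np.pi/7,
                   phi1=0.0, d=1.51):
    """Idealized C-alpha traces of a parallel coiled coil (Angstrom).

    r0    : superhelical radius          r1     : helix radius (~2.26 A)
    alpha : superhelical pitch angle     omega1 : helical twist per residue
    d     : rise per residue along the helix axis (~1.51 A)
    The superhelical frequency follows from the arc-length constraint
    omega0 = d * sin(alpha) / r0; helices are placed C_n-symmetrically.
    """
    omega0 = d * np.sin(alpha) / r0
    t = np.arange(n_res)
    chains = []
    for k in range(n_helix):
        a0 = omega0 * t + 2 * np.pi * k / n_helix   # superhelical phase
        a1 = omega1 * t + phi1                      # local helical phase
        x = (r0 * np.cos(a0) + r1 * np.cos(a0) * np.cos(a1)
             - r1 * np.cos(alpha) * np.sin(a0) * np.sin(a1))
        y = (r0 * np.sin(a0) + r1 * np.sin(a0) * np.cos(a1)
             + r1 * np.cos(alpha) * np.cos(a0) * np.sin(a1))
        z = omega0 * t * r0 / np.tan(alpha) - r1 * np.sin(alpha) * np.sin(a1)
        chains.append(np.stack([x, y, z], axis=1))
    return chains  # list of (n_res, 3) coordinate arrays

In an actual design pipeline, traces of this kind would only seed backbone construction; loops, the remaining backbone atoms and the side chains are then built and optimized as described above.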
Hydrogen-bond networks The principles we have outlined for the de novo design of monomeric folds are necessary but not sufficient for controlling the specificity of protein interactions, which despite progress 56 , 57 , 58 , 59 , 60 remains a challenge 61 . Binding is driven by the balance between the burial of hydrophobic packing residues and peripheral polar interactions that help to solvate the monomeric state and provide structural specificity. In contrast to the double helix of DNA, in which regular arrays of central hydrogen bonds lead to the formation of a high-specificity heterodimer, the hydrogen bonds that form at the interfaces of naturally occurring proteins are placed irregularly and are very difficult to design 62 . A challenge when designing polar interactions is to ensure that all buried hydrogen-bond donors and acceptors form intraprotein hydrogen bonds. In the past year, it has become possible to design with atomic-level accuracy extensive networks of hydrogen bonds in which almost all of the donors and acceptors are satisfied 63 . This approach has enabled helical-bundle oligomers to be generated with a specificity that is determined by regular arrays of central hydrogen-bond networks, analogous to Watson–Crick base-pairing in DNA 64 . Identification of the rare backbones that can harbour more than one network of hydrogen bonds required the parametric generation of thousands of backbones. In the field of DNA nanotechnology 61 , the limited set of Watson–Crick hydrogen bonds has been harnessed to build a wide range of shapes 65 , 66 ; it should become possible to use similar 'digital' design principles to build structures from proteins using modular hydrogen-bond networks to encode specificity. The design of new functions The advances described in this Review, most of which were made in the past 3 years, demonstrate that a fundamental understanding of the principles of protein structure and protein folding has been achieved. This knowledge has enabled a wide variety of exceptionally stable protein structures and assemblies to be designed with atomic-level accuracy. (The high-resolution structures for all of the protein designs described in this Review, as determined by NMR, X-ray crystallography or electron microscopy, are in close agreement with the design models.) The potential for designing new functions on the basis of these scaffolds and the more general use of de novo backbone design methods is underscored by the achievements of computational protein-design efforts, in which scaffolds from naturally occurring proteins have been repurposed to carry out different functions. Such efforts have yielded enzymes that have attained high catalytic efficiencies through directed evolution 67 , 68 , 69 , 70 , 71 , 72 , 73 , inhibitors of protein–protein interactions that can protect animals from viral infection 74 and small-molecule binding proteins that can be incorporated into in vivo biosensors 75 , 76 , 77 . The design of precise interfaces between protein subunits has enabled the creation of self-assembling, cyclic homo-oligomers (ref. 78 and J. Fallas and G. Ueda, personal communication), tetrahedra 79 , 80 , octahedra 79 and open two-dimensional assemblies 81 ( Fig. 5 ). Protein interface design methods have been used to create one- or two-component assemblies with icosahedral symmetry and 60 subunits 82 or 120 subunits 83 , respectively.
The high symmetry of these assemblies enables the multivalent presentation of antigens for vaccine applications, and the large volumes of their interior are well suited to packaging cargo for delivery to targets. Figure 5: Designing self-assembling nanomaterials. a , C2, C3, C4 and C5 symmetric homo-oligomers (ref. 78 and J. Fallas and G. Ueda, personal communication). b , Two-dimensional hexagonal lattice 81 . c–f , Self-assembling cages. c , A one-component tetrahedron (left) and a one-component octahedron 79 (right). d , Two-component tetrahedral nanoparticles 80 ; the two asymmetric components are coloured in blue and yellow. e , A one-component hyperstable icosahedron with a de novo helical bundle (red helices) fused in the centre of the face 82 . f , Two-component megadalton-scale icosahedra 83 ; the two components of each are coloured in blue and yellow. Full size image The design of constrained peptides Because of the level of control that de novo protein design offers, the capabilities of the next generation of designed functional proteins could greatly exceed those of first-generation designed proteins based on native scaffolds. There is also the tremendous potential for de novo protein design to go beyond nature to discover new folds by incorporating new chemistries and unnatural amino acids. An example of this is the design of hyperstable peptides, which are constrained by disulfide crosslinks and cyclic peptide linkages that connect the N and C termini 84 . In this case, extensions to the design methodology enabled the use of L -amino acids and D -amino acids within the same protein design ( Fig. 6 ). The structures of these peptides, determined experimentally through NMR and X-ray crystallography, are in close agreement to the design models, and despite the peptides being only 15–50 residues in length, most are extremely resistant to thermal and chemical denaturation. Figure 6: Designing hyperstable de novo constrained peptides. a , b , Disulfide crosslinked miniproteins with two ( a ) or three ( b ) disulfide linkages (yellow spheres). c , Cyclic peptides with covalently linked N termini and C termini. An asterisk denotes a heterochiral design that contains a mixture of L -amino acids and D -amino acids. Full size image Improving the robustness of de novo design A limitation of de novo protein design is that only a fraction of protein designs adopt stable folded structures when produced in E. coli . The most frequent reasons for failure are insolubility and the formation of unintended oligomeric states (polydispersity) — experimentally determined high-resolution structures of soluble and monodisperse designs are almost always very similar to those of the design models. Insolubility and polydispersity probably arise from unanticipated intermolecular hydrophobic interactions. Increasing the robustness of designs will require improvements in the accuracy of the energy function that underlies the design process (for example, explicit modelling of the interactions of protein atoms with specific bound water molecules), more explicit negative design to disfavour alternative states and other advances in computational methodology. As the decreasing cost of synthesizing DNA enables the experimental characterization of larger numbers of protein designs, it should become increasingly possible to identify the features that differ between soluble and insoluble designs. Insight can be obtained by considering the success rate for each class of design that is described in this Review. 
The highest success rate from the work of our group was obtained for the cyclic and disulfide stapled peptides 84 , for which seven of eight designs were soluble and monodisperse and had structures that were almost identical to the design models; the chemical staples limit alternative conformations of the designs in this class. These designs were also synthesized chemically — the lower success rate for proteins that are expressed recombinantly might be due in part to the toxicity of such proteins in E. coli or to other complexities of the bacterium's biology. The α-helical bundles that are mediated by networks of hydrogen bonds had a solubility of about 90%, and more than 60% of the bundles were monodisperse and in the designed oligomerization state 63 . Because a large energetic penalty is incurred if buried polar groups do not form hydrogen bonds, altered core-packing arrangements in which hydrogen bonds are not formed are disfavoured. Of the α-helical repeat designs 43 , 90% were soluble and 64% were monodisperse. Almost all of the monodisperse designs had small-angle X-ray scattering data that were consistent with the design models 84 . Here, the sequence repetition probably favours structures with internal repeats over alternative structures. Outlook and challenges A fundamental problem encountered when redesigning naturally occurring proteins to deliver new functions such as catalytic sites is that the alteration of a large number of amino-acid residues to introduce the function will inevitably change aspects of the structure; this is demonstrated by crystal structures of designed enzymes that have unanticipated loop reconfigurations 85 . Native proteins are often marginally stable, and sequence changes can lead to unfolding or aggregation. The very high stability of de novo designed proteins should make them more robust starting points for creating new functions. The next steps in protein design are not without challenges. The ideality of almost all of the de novo structures designed so far probably contributes to their stability, and the introduction of functional sites and binding interfaces will inevitably compromise this ideality. Proteins that bind to other proteins usually have hydrophobic residues on their surface and are therefore more prone to aggregation than the idealized polar surfaces of most of the proteins that have been described in this Review, and the active sites of enzymes have some mobility to enable substrates to enter and products to leave. Recessed cavities, which are not incorporated into most de novo designed proteins at present, will be required for ligand and substrate binding. Naturally occurring proteins provide numerous examples of the rich functionality, including allostery and signalling, that can emerge in protein systems with multiple low-energy states and moving parts that can be toggled by external stimuli. To achieve such capabilities, which could have widespread applications in the design of molecular machines to tackle problems ranging from tumour recognition to computing, will require proteins to be designed with multiple, distinct energy minima. (By contrast, the de novo designs in Figs 2 , 3 , 4 , 5 , 6 each have a single, deep energy minimum ( Fig. 1c ).) The creation of a zinc-transporting transmembrane protein that has two alternative states demonstrates that protein design can now start to achieve such complexity 34 . Overcoming these challenges in the years ahead is an exciting prospect. 
Success would signal a technological advance that is analogous to the transition from the Stone Age to the Iron Age. Instead of building new proteins from those that already exist in nature, protein designers can now strive to precisely craft new molecules to solve specific problems — just as modern technology does outside of the realm of biology. Accession codes Protein Data Bank 5CJA | Proteins are the molecular machines that make all living things hum—they stop deadly infections, heal cells and capture energy from the sun. Yet because our basic understanding of how proteins work has until now remained a mystery, humans have only been able to harness the power of proteins by modifying ones we happen to find in nature. This is beginning to change. Enabled by decades of basic research, the rise of inexpensive computing, and the genomics revolution in reading and writing DNA, scientists can now design new proteins from scratch with specific functions. David Baker, Professor of Biochemistry and Director of the Institute for Protein Design at the University of Washington, will speak about how algorithmic processes such as de novo design predict protein structures, protein folding mechanisms, and new protein functions. Computational protein design is now being used to create proteins with novel structures using iterative structure prediction and experimental structure characterization. These results suggest that new proteins—encoded by synthetic genes—can be designed on computers with atomic-level accuracy. In April 2019, the Institute for Protein Design (IPD) was selected as part of The Audacious Project, a successor to the TED Prize. As a result, the IPD is expanding its research on vaccine design, targeted drug delivery, 'smart' therapeutics, next-generation nanomaterials and more. Institute for Protein Design, University of Washington Credit: Ian C Haydon/UW Institute for Protein Design David Baker, Director of the Institute for Protein Design at the University of Washington Credit: Ian C Haydon/UW Institute for Protein Design | 10.1038/nature19946
Nano | Scientists discover nonlocal effects of biexciton emission in large semiconductor nanocrystals | Peng Huang et al, Nonlocal interaction enhanced biexciton emission in large CsPbBr3 nanocrystals, eLight (2023). DOI: 10.1186/s43593-023-00045-3 | https://dx.doi.org/10.1186/s43593-023-00045-3 | https://phys.org/news/2023-05-scientists-nonlocal-effects-biexciton-emission.html | Abstract Biexciton emission in quantum dots is an efficient way to generate entangled photon pairs, which are key resources in quantum informatics. Compared with epitaxially grown quantum dots, chemically synthesized colloidal quantum dots show the advantages of tunable wavelength and easy integration for realizing quantum light sources. However, the biexciton efficiency of colloidal quantum dots has been limited by Auger recombination. In this paper, we report nonlocal interaction enhanced biexciton emission with efficiency up to 80% in large perovskite nanocrystals (> 20 nm). The nonlocal interaction between carriers and excitons leads to an anomalous exponential decrease of Auger recombination with volume in large nanocrystals, which contrasts with the linear scaling in small counterparts. Such an exponential decrease of Auger recombination results in long biexciton lifetimes, which are responsible for the high biexciton efficiency. The discovery of nonlocal effects in large semiconductor nanocrystals provides new strategies to achieve high-efficiency multiple-exciton emission for quantum optics and energy conversion applications. 1 Introduction Due to size-dependent quantum confinement effects, quantum dots (QDs) exhibit excitonic behaviour with tunable absorption and emission properties for modern photonics [ 1 , 2 , 3 , 4 , 5 ]. Efficient single-photon [ 6 , 7 , 8 ] and entangled-photon [ 9 , 10 , 11 ] emission, which are key resources in quantum informatics [ 12 , 13 , 14 , 15 ], have been successfully demonstrated by combining QDs and microcavities [ 6 , 7 , 8 ]. Besides epitaxially grown QDs (InGaAs, GaAs) [ 16 ], chemically synthesized colloidal QDs are also considered ideal quantum emitters [ 17 , 18 , 19 , 20 , 21 , 22 ] with the advantages of high photoluminescence (PL) efficiency, tunable color, and easy integration [ 23 , 24 ]. Because of the strong electron–electron Coulomb repulsion in small, strongly quantum-confined CdSe and perovskite QDs, excellent single-photon emitters were successfully demonstrated [ 19 , 25 , 26 , 27 , 28 ]. For colloidal QDs, biexciton emission is usually generated under high excitation intensity and suffers from serious Auger recombination [ 17 , 29 ]. It has been found that the Auger recombination rate decreases linearly with increasing volume in quantum-confined QDs [ 30 , 31 ]. In comparison, Auger recombination in bulk materials only slightly affects biexciton recombination due to the lower carrier density and momentum conservation [ 32 ]. It can be described by adapting the charge-carrier recombination ABC model with a constant Auger coefficient [ 33 ]. To gain high biexciton efficiency, thick-shelled CdSe/CdS nanocrystals were developed to suppress Auger recombination by reducing the wave function overlap between the electrons and holes [ 34 , 35 , 36 , 37 ]. Accordingly, large colloidal QDs may be suitable candidates for generating efficient biexciton emission; however, this possibility has rarely been investigated. Herein we report that the Auger recombination rate in large perovskite nanocrystals can be exponentially decreased due to nonlocal effects.
Nonlocal effects refer to the influence of the spatial dispersion of the wave on light–matter interactions [38]. The spatial wavefunction usually contains a term $e^{ik\cdot r}$, where $k$ is the wave vector and $r$ is the spatial coordinate. For a nanostructure of radius $R$, nonlocal effects are negligible when the phase $kR \ll 1$, but become significant when $kR$ approaches 1 [39, 40]. In plasmonics, nonlocal effects have been successfully used to explain the optical response of metallic nanostructures [41]. For exciton recombination in QDs, which emits a photon, nonlocal interactions need to be considered when the size of the QDs ($R$) approaches $\frac{\lambda_p}{2n\pi}$, where $\lambda_p$ is the wavelength of the photon and $n$ is the refractive index of the surroundings [39]. For CsPbBr3 QDs, $\lambda_p \sim 520$ nm and $n \sim 2$, giving $\frac{\lambda_p}{2n\pi} \sim 40$ nm. Auger recombination can be described as energy transfer from an exciton to another electron or hole, that is, a process in which an electron or hole absorbs an exciton and is promoted to a higher energy level. Accordingly, the nonlocal effects on Auger recombination are mainly determined by the wavefunction of the exciton. At room temperature, the estimated spatial wavelength of an exciton in CsPbBr3 is ~ 14 nm (refer to the theory part in the supplementary material), making it possible to observe nonlocal-interaction-enhanced biexciton emission in large nanocrystals with sizes > 14 nm. Benefiting from the unique defect tolerance of perovskite nanocrystals [42], we observed high biexciton efficiency in large CsPbBr3 nanocrystals. As schematically shown in Fig. 1, there is a linear relation between the biexciton Auger recombination lifetime and volume for small nanocrystals. The maximum biexciton lifetime is ~ 100 ps due to the strong Auger recombination. For bulk materials, Auger recombination is mainly related to the carrier density and band structure, with a constant coefficient [32]. For example, a bulk crystal with a carrier density of $10^{18}$ cm$^{-3}$ is predicted to have a biexciton lifetime of ~ 10 ns [43, 44]. In the meso-scale region, the nonlocal effects are expected to change the scaling of the biexciton lifetime with volume from linear to exponential, which is observed here in large CsPbBr3 nanocrystals for the first time. Fig. 1 The illustration of volume-dependent biexciton Auger lifetime and nonlocal effects in perovskites. The Auger recombination lifetime increases linearly with volume in the strong confinement regime (left side), while it increases exponentially in the weak confinement regime (centre) due to nonlocal effects. The exciton wavefunction is spatially confined in small nanocrystals and becomes oscillatory in large nanocrystals.
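The length scales quoted above follow from simple arithmetic, as the short check below (using only the values given in the text) confirms.

```python
# Back-of-the-envelope check of the nonlocal length scales quoted above,
# using only the values given in the text.
import math

lambda_p = 520.0   # nm, CsPbBr3 emission wavelength
n = 2.0            # refractive index of the surroundings

# Photon-driven nonlocal scale R ~ lambda_p / (2 n pi):
photon_scale = lambda_p / (2.0 * n * math.pi)
print(f"photon nonlocal scale: ~{photon_scale:.0f} nm")      # ~41 nm

# Exciton spatial wavelength at room temperature (from the text):
exciton_wavelength = 14.0  # nm
print(f"nonlocal biexciton regime expected above ~{exciton_wavelength:.0f} nm")
```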
2 Results 2.1 Morphologies and optical properties of large CsPbBr3 nanocrystals The large CsPbBr3 nanocrystals with spheroidal shape were fabricated using the hot-injection method [45], and the samples with rod shape were synthesized using a modified ligand-assisted reprecipitation (LARP) method [46] (see details in Methods). Figure 2a–c shows the high-resolution transmission electron microscopy (HRTEM) images with fast Fourier transform (FFT) patterns of three typical CsPbBr3 nanocrystal samples (Spheroids 1–3), which exhibit a spheroidal, dodecahedral shape. The average diameters of these spheroids are 12.9 ± 1.1 nm (Spheroid 1), 17.5 ± 1.4 nm (Spheroid 2), and 25.8 ± 1.8 nm (Spheroid 3), and their volumes are ~ 1100 nm³, ~ 2800 nm³, and ~ 9000 nm³, respectively. Additional file 1: Fig. S1 shows typical HRTEM and FFT images of the nanorod samples (Rod 1 and Rod 2). Rod 1 has an average width of 12.0 ± 1.8 nm and an average length of 22.1 ± 3.4 nm. Rod 2 has an average width of 13.0 ± 1.9 nm and an average length of 27.4 ± 4.8 nm. The volumes of Rod 1 and Rod 2 are ~ 3000 nm³ and ~ 4600 nm³, respectively. As indicated by the FFT patterns and powder X-ray diffraction (XRD) (see Additional file 1: Fig. S2), the large CsPbBr3 nanocrystals have an orthorhombic structure. Figure 2d–f presents the PL and UV–vis absorption spectra of Spheroids 1–3. These spheroid samples show similar PL peaks centred around 518 nm (2.39 eV) with a full width at half maximum (FWHM) of ~ 80 meV. The PL peak centres are close to the bulk bandgap of CsPbBr3 (~ 2.36 eV at 300 K [47]), suggesting weak quantum confinement. The PL and UV–vis absorption spectra of Rods 1–2 are shown in Additional file 1: Fig. S3. Furthermore, these samples have high PL quantum yield (QY) (Spheroids 1–3 > 90%, Rods 1 and 2 > 70%) and good photostability for spectroscopic measurements. Fig. 2 Morphology, structure, and optical properties of large CsPbBr3 nanocrystals. a–c TEM and HRTEM images with FFT patterns of the 12.9 ± 1.1 nm, 17.5 ± 1.4 nm, and 25.8 ± 1.8 nm Spheroids 1–3. d–f The PL (red) and UV–vis absorption (black) spectra of Spheroids 1–3. 2.2 Biexciton emission of large CsPbBr3 nanocrystals The properties of biexcitons in nanocrystals can be measured by recording the time-resolved emission intensity of ensembles or single particles. Figure 3a shows an example of fluence-dependent decay traces from the Spheroid 3 ensemble. As the pump power increases from $\langle N\rangle = 0.15$ to $\langle N\rangle = 2.4$ ($\langle N\rangle$ is the average number of exciton pairs per particle), a faster PL component appears. As shown in the inset of Fig. 3b, the observed fast PL component can be assigned to biexciton recombination, given the quadratic relationship between its intensity and $\langle N\rangle$. Figure 3b shows a typical analysis of the biexciton lifetime of the Spheroid 3 ensemble. At the power of $\langle N\rangle = 0.15$, the PL decay curve is nearly single-exponential, corresponding to single-exciton recombination. Additional file 1: Fig. S4 shows the single-exciton recombination dynamics of these samples. The single-exciton lifetime for Spheroid 3 is ~ 35.0 ns.
By subtracting the PL decay curve at low excitation ($\langle N\rangle = 0.15$) from those at higher excitation, such as $\langle N\rangle = 0.9$, the biexciton decay curve can be derived and fitted with a single-exponential decay function. The biexciton lifetime of Spheroid 3 is ~ 6.3 ns. Figure 3c shows the excitation-dependent emission intensities of the single-exciton PL and the total PL from the Spheroid 3 ensemble. According to the fitting curves (solid lines) using the standard saturation model [34], the calculated biexciton quantum yield (QY xx) is ~ 78%. Fig. 3 Biexciton emission properties of Spheroid 3 (first row: ensemble; second row: single particle). a Excitation-fluence-dependent PL decay curves of a typical ensemble sample measured using a micro-liquid-film method. b The PL recombination dynamics of biexcitons (solid lines), extracted by subtracting the single-exciton component and fitted using mono-exponential decay functions (dotted lines), yielding the biexciton lifetime $\tau_{\mathrm{xx}} \approx 6.3$ ns. The inset shows the excitation-fluence-dependent single-exciton (black dots) and fast-component (purple dots) PL intensities extracted from (a). The excitation-fluence dependence of the fast-component PL intensity follows the quadratic scaling (purple dotted line) expected for biexcitons. c Excitation-fluence-dependent single-exciton (black dots) and total PL intensities (red dots) of the ensemble sample. Solid lines are fits to the emission saturation model for the single-exciton PL (black) and total PL (red), respectively, yielding a QY xx of ~ 78%. d Representative blinking traces and the corresponding distribution of intensities for a single-particle sample. The red line was chosen as the threshold between “on” and “off” states. e Typical second-order photon correlation curve from a single-particle sample. The calculated QY xx is ~ 80%. The biexciton properties were further investigated by single-particle spectroscopy. Figure 3d shows the PL intensity traces of a typical single CsPbBr3 nanocrystal from the Spheroid 3 ensemble, measured using single-photon avalanche photodiodes (APDs) and the time-tagged, time-resolved (TTTR) mode of the time-correlated single-photon counting (TCSPC) module. The results show that Spheroid 3 is nearly non-blinking, with a large fraction of “on” time (for the PL intensity traces of the other samples, see Additional file 1: Fig. S5). Figure 3e shows the second-order photon correlation curve ($g^{(2)}$) of a typical single CsPbBr3 nanocrystal from Spheroid 3 under low excitation intensity. QY xx can also be derived from the area ratio between the centre peak and the side peaks in the $g^{(2)}$ curve of a single particle [48]. The results indicate that Spheroid 3 has a QY xx of ~ 80%. The PL dynamics of single particles were further investigated by recording transient PL spectra under different excitation intensities. Additional file 1: Fig. S6 shows the PL intensity traces and PL decay traces of a single Spheroid 3 particle at two different powers, $\langle N\rangle = 0.15$ and $\langle N\rangle = 1.2$. A biexciton lifetime of 7.5 ns was measured using a similar subtraction of the single-exciton contribution. Using similar measurements, we further studied the samples with different sizes (Spheroid 1 with 1100 nm³ and Spheroid 2 with 2800 nm³) and shapes (Rod 1 with 3000 nm³ and Rod 2 with 4600 nm³).
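The subtraction-and-fit procedure just described can be sketched as follows. The decay traces below are synthetic stand-ins for measured TCSPC data, the helper functions are ours rather than the authors' analysis code, and the $g^{(2)}$ peak areas at the end are hypothetical numbers used only to show the ratio.

```python
# Sketch of the biexciton-lifetime extraction described above: the
# low-power decay trace (single excitons only) is subtracted from a
# higher-power trace, and the residual is fitted mono-exponentially.
# Synthetic traces stand in for measured TCSPC data.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 50.0, 500)          # time axis, ns
tau_x, tau_xx = 35.0, 6.3                # lifetimes from the text, ns

# Synthetic normalized traces: the high-power trace adds a fast
# biexciton component on top of the single-exciton decay.
low = np.exp(-t / tau_x)                               # <N> = 0.15
high = np.exp(-t / tau_x) + 0.6 * np.exp(-t / tau_xx)  # <N> = 0.9

# In practice the low-power trace is first scaled to match the
# long-time (single-exciton) tail of the high-power trace; here the
# synthetic tails already match, so the scale factor is 1.
residual = high - low                    # biexciton-only decay

def mono_exp(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

(amp, tau_fit), _ = curve_fit(mono_exp, t, residual, p0=(0.5, 5.0))
print(f"fitted biexciton lifetime: {tau_fit:.1f} ns")   # ~6.3 ns

# QY_xx can also be estimated from a single-particle g2 curve as the
# centre-to-side peak area ratio (hypothetical areas shown here).
area_centre, area_side = 0.80, 1.0
print(f"QY_xx from g2 area ratio: {area_centre / area_side:.0%}")  # 80%
```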
Since the exciton quantum yield of our samples is above 90%, and the biexciton recombination follows a different scaling rule from the trapping of single excitons, the non-radiative recombination of single excitons can be neglected in the subtraction procedure [49]. The biexciton properties of these samples are shown in Additional file 1: Fig. S7–S10. The key features are summarized in Table 1, which indicates that both the biexciton efficiency and the lifetime of the ensembles increase with increasing volume. Their maximum values approach ~ 80% and ~ 6.3 ns, respectively. It is also noted that Auger recombination can be suppressed for samples with volumes over 3000 nm³. Table 1 The biexciton features of CsPbBr3 nanocrystals, including quantum yields, lifetimes, and Auger lifetimes. 3 Discussion To elucidate the size dependence of biexciton emission in CsPbBr3 nanocrystals, we extracted the reported exciton and biexciton lifetimes of perovskite nanocrystals from previous works [50, 51, 52]. Combining these with our results on large perovskite nanocrystals, Fig. 4a–c plots the biexciton radiative lifetime (defined as one quarter of the exciton radiative lifetime [53]), the biexciton Auger recombination lifetime, and the biexciton lifetime of CsPbBr3 nanocrystals as functions of volume. As shown in Fig. 4a, the biexciton radiative lifetime of CsPbBr3 nanocrystals varies only slightly with volume below ~ 1000 nm³. When the volume exceeds ~ 1000 nm³, the radiative lifetime increases following a power law [50]. In comparison, the biexciton lifetime scales linearly with volume in the small-volume range but becomes nonlinear when the volume exceeds ~ 1000 nm³ (Fig. 4c). To understand the nonlinear increase of the biexciton lifetime in large nanocrystals, we further analysed the variation of the Auger lifetime with volume in CsPbBr3 nanocrystals. As shown in Fig. 4b, the Auger recombination lifetime increases linearly in the small-volume range (< 1000 nm³) but turns to an exponential increase in the large-volume range. These experimental results are consistent with the nonlocal model described in Fig. 1, confirming the significance of nonlocal effects in large nanocrystals. Fig. 4 Analysis of biexciton lifetimes of CsPbBr3 nanocrystals. a Size dependence of the biexciton radiative lifetime, defined as one quarter of the single-exciton lifetime [53], and its fit (blue line) by $\tau_{X} = \tau_{X0} + aV^{\alpha}$. The fitting parameters are $\tau_{X0} = 4000$ ps, $a = 55$, and $\alpha = 0.68$. b Size dependence of the biexciton Auger lifetime and its fit (red line) by $\tau_{\text{Auger}} = bV^{\beta}e^{(V/V_{0})^{\gamma}}$. The fitting parameters are $b = 0.05$ and $V_{0} = 450$ nm³ under the theoretical values $\beta = 1$ (from reference [14]) and $\gamma = \frac{2}{3}$.
c Size dependence of the biexciton lifetime and its fit (black line) by $\tau_{XX} = \frac{\tau_{r,XX}\,\tau_{\text{Auger}}}{\tau_{r,XX} + \tau_{\text{Auger}}}$. The biexciton radiative lifetime can be modelled by $\tau_{X} = \tau_{X0} + aV^{\alpha}$ ($\tau_{X0}$, $a$, and $\alpha$ are fitting parameters [50]), where $\tau_{X}$ and the particle volume $V$ take the units of ps and nm³. As shown in Fig. 4a, the fitted curve (solid line) for the biexciton radiative lifetime ($\tau_{r,XX}$) is in good agreement with the experimental data (dots). On the other hand, the volume dependence of the Auger recombination lifetime can be simulated using $\tau_{\text{Auger}} = bV^{\beta}e^{(V/V_{0})^{\gamma}}$ ($b$, $\beta$, $V_{0}$, and $\gamma$ are fitting parameters), where $V$ and $\tau_{\text{Auger}}$ take the units of nm³ and ps. Considering nonlocal effects in large nanocrystals, the model leads to $V_{0} = \frac{4\pi}{3}\left(\frac{\lambda_{X}}{\pi}\right)^{3} \approx 370$ nm³ and $\gamma = \frac{2}{3}$ (see the theory part in Additional file 1: Material S1). When the volume reaches about 1000 nm³ ($\sim 2V_{0}$), the relationship between $\tau_{\text{Auger}}$ and $V$ changes from linear to exponential, corresponding to a transition from strong to weak confinement. Combining $\tau_{r,XX}$ and $\tau_{\text{Auger}}$, we can also fit the biexciton lifetime using $\tau_{XX} = \frac{\tau_{r,XX}\,\tau_{\text{Auger}}}{\tau_{r,XX} + \tau_{\text{Auger}}}$ (see Fig. 4c). Auger recombination dominates the biexciton dynamics in small nanocrystals, while radiative recombination plays the critical role in large nanocrystals, where nonlocal effects make Auger recombination negligible.
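The scaling relations above can be combined numerically using the fitted parameters quoted in the Fig. 4 caption. One interpretive assumption is ours: we read the $\tau_{X}$ fit as describing the single-exciton lifetime and divide by four to obtain the biexciton radiative lifetime, per the definition given earlier; this choice reproduces the few-nanosecond biexciton lifetime reported for Spheroid 3 (~ 9000 nm³).

```python
# Sketch combining the fitted scaling relations from Fig. 4 (units: ps, nm^3).
# Interpretive assumption (ours): tau_X below is the single-exciton lifetime,
# and the biexciton radiative lifetime is tau_X / 4, per the definition in
# the text; this reproduces the ~6-8 ns biexciton lifetime of Spheroid 3.
import numpy as np

tau_X0, a, alpha = 4000.0, 55.0, 0.68     # radiative-lifetime fit parameters
b, beta = 0.05, 1.0                       # Auger prefactor and power law
V0, gamma = 450.0, 2.0 / 3.0              # nonlocal volume scale and exponent

def tau_radiative_xx(V):
    """Biexciton radiative lifetime: (tau_X0 + a V^alpha) / 4."""
    return (tau_X0 + a * V**alpha) / 4.0

def tau_auger(V):
    """Nonlocal-modified Auger lifetime: b V^beta exp((V/V0)^gamma)."""
    return b * V**beta * np.exp((V / V0)**gamma)

def tau_xx(V):
    """Harmonic combination of the radiative and Auger channels."""
    tr, ta = tau_radiative_xx(V), tau_auger(V)
    return tr * ta / (tr + ta)

for V in (1100.0, 2800.0, 9000.0):        # volumes of Spheroids 1-3
    print(f"V = {V:6.0f} nm^3   tau_Auger = {tau_auger(V)/1000:8.2f} ns   "
          f"tau_XX = {tau_xx(V)/1000:6.2f} ns")
```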
In conclusion, we discovered nonlocal effects in the biexciton emission of CsPbBr3 nanocrystals by comparing our spectroscopic results on large nanocrystals with previously reported small nanocrystals. This nonlocal effect can be illustrated by considering the influence of nonlocal interactions on Auger recombination. With increasing volume, the Auger recombination rate of large CsPbBr3 nanocrystals can be greatly reduced, allowing a high biexciton efficiency of up to 80%. The discovered nonlocal effects in large nanocrystals not only provide a guideline for fabricating advanced quantum emitters with efficient biexciton (multiple-exciton) emission, but also create new opportunities to explore semiconductor nanocrystals beyond strong quantum confinement. 4 Methods 4.1 Materials Lead(II) bromide (PbBr2, > 99.99%, Alfa Aesar), cesium bromide (CsBr, > 99.99%, Alfa Aesar), lead(II) oxide (PbO, > 99.999%, Aladdin), cesium carbonate (Cs2CO3, > 99.99%, MREDA), phenacyl bromide (C6H5COCH2Br, > 99%, Amethyst Chemicals), 1-octadecene (ODE, > 90%, Alfa Aesar), oleic acid (> 90%, Aladdin), octylamine (> 90%, Aladdin), hexylamine (> 90%, Aladdin), n-tetradecylamine (> 99%, Aladdin), dimethyl sulfoxide (DMSO, > 99.99%, Aladdin), isopropanol (> 99.99%, Beijing TongGuang Fine Chemicals), hexane (> 99.99%, Beijing TongGuang Fine Chemicals) and toluene (> 99.99%, Beijing TongGuang Fine Chemicals) were used. 4.2 Synthesis of large CsPbBr3 nano-spheroids In a typical synthesis, the Cs-oleate precursor solution was prepared by dissolving 1.2 mmol (390.8 mg) of Cs2CO3 in a mixture of 18 mL of 1-octadecene (ODE) and 2 mL of oleic acid in a three-neck round-bottom flask. The reaction mixture was degassed with nitrogen for 1 h at 120 °C and then kept at 80 °C for further use. The Pb precursor solution was prepared by dissolving 0.2 mmol of PbO (44.6 mg) and 0.6 mmol of phenacyl bromide (119.4 mg) in a mixture of 1 mL of oleic acid and 5 mL of ODE in a three-neck round-bottom flask. The precursor solution was degassed at 120 °C for 1 h, the temperature was then increased to 220 °C, and a fixed amount of amine was injected (n-tetradecylamine for Spheroid 1; oleylamine for Spheroids 2 and 3). The solution turned yellow after about 15 min. After that, 0.5 mL of the Cs-oleate precursor solution was swiftly injected into the yellow solution at different temperatures and annealed for 30 min to obtain nanocrystals of different sizes (the reaction temperatures for Spheroid 1, Spheroid 2 and Spheroid 3 were 200 °C, 220 °C and 240 °C, respectively). The crude solution was collected with ice quenching. Methyl acetate (6 mL) was added to 1 mL of the crude solution, which was then centrifuged at 8000 rpm for 5 min. Finally, the nanocrystals were precipitated and redispersed in hexane. 4.3 Synthesis of CsPbBr3 nanorods A mixture of 0.2 mmol of CsBr and 0.2 mmol of PbBr2 was dissolved in 1 mL of DMSO with 120 μL of oleylamine or 60 μL of octylamine and 0.5 mL of oleic acid to form a precursor solution. Rod 1 was fabricated using octylamine as the ligand, while Rod 2 was fabricated using oleylamine. 1 mL of the precursor solution was dropped into 8 mL of isopropanol with vigorous stirring. The obtained solution was precipitated by centrifugation at 6000 rpm for 5 min. The precipitates can be redispersed in 3 mL of toluene or hexane to form colloidal nanocrystal solutions. 4.4 Sample preparations for spectroscopic measurements Colloidal solutions with ~ 0.1 optical density at the exciton absorption peak were used for UV–vis optical absorption and PL spectra. The samples for transient PL spectra were fabricated by transferring a droplet (~ 10 µL) of the colloidal nanocrystal solutions mentioned above onto a clean cover slip (micro-liquid films). The samples for single-particle spectra were prepared by diluting the nanocrystal solution in a polymethyl methacrylate solution (2.5% by weight in toluene) and spin-casting (5000 rpm, 60 s) the solution onto a clean cover slip. 4.5 Morphology and structure characterizations TEM and HRTEM images were collected using a Tecnai G2 F30 S-TWIN transmission electron microscope. XRD patterns were recorded using a Rigaku SmartLab SE X-ray diffractometer equipped with a Cu Kα radiation source (wavelength of 1.5405 Å).
4.6 Characterizations of the optical properties of solutions For conventional spectroscopy experiments, UV–vis optical absorption spectra were collected on a UV-6100 UV–vis spectrophotometer (Shanghai Mapada Instruments, China). Steady-state PL spectra were measured using an F-380 fluorescence spectrometer. The absolute PL QYs of the solutions were determined using a fluorescence spectrometer (calibrated multichannel spectrometer, PMA12) with an integrating sphere (C9920-02, Hamamatsu Photonics, Japan). 4.7 Time-resolved PL spectra measurements An inverted fluorescence microscope (Olympus IX 83) equipped with a 60× oil-immersion objective (N.A. = 1.49) and suitable spectral filters was used to collect the optical spectra of micro-liquid films and single particles. The laser spot is about 400 μm². To measure the transient PL spectra of micro-liquid films and single particles, a 405 nm pulsed laser (PicoQuant) with a 1 MHz repetition rate and ~ 50 ps pulse width was used as the excitation light source, and the photoluminescence signal was recorded by a single-photon counting system (Micro Photon Devices MPD-050-CTD-FC photon detectors with 165 ps resolution and a PicoQuant PicoHarp 300 TCSPC module). Photoluminescence intensity traces of single particles were collected with the same single-photon counting system in time-tagged time-resolved (TTTR) mode. Second-order photon intensity correlation measurements were performed with the same system and module using a Hanbury Brown and Twiss (HBT) set-up comprising a 50/50 beam splitter and two single-photon detectors. 4.8 Model of emission saturation for biexciton quantum yield calculation The calculation of the biexciton quantum yield from the PL saturation curve is adapted from the methods in previous work [34]. The number of photons absorbed per dot per excitation pulse ($\langle N\rangle$) is assumed to follow the Poisson distribution, so the probability of finding a QD in the m-exciton state is $P\left(m,\langle N\rangle\right) = \langle N\rangle^{m}e^{-\langle N\rangle}/m!$. For a given pulse, the total PL intensity can be modelled as: $I = \sum_{m=1}^{\infty}\left[\sum_{j=m}^{\infty}P\left(j,\langle N\rangle\right)\right]Q_{m\mathrm{x}}$ (1) For multi-exciton emission, the m-exciton quantum yield is $Q_{m\mathrm{x}} = k_{r,m\mathrm{x}}/(k_{r,m\mathrm{x}} + k_{nr,m\mathrm{x}})$, and the multi-exciton radiative and non-radiative rates scale as $k_{r,m\mathrm{x}} = m^{2}k_{r,\mathrm{x}}$ and $k_{nr,m\mathrm{x}} = m^{2}(m-1)k_{nr,\mathrm{x}}$, where $k_{r,\mathrm{x}}$ and $k_{nr,\mathrm{x}}$ are the single-exciton radiative and non-radiative rates. Therefore, the total emission intensity curve ($I$) is essentially a function of $Q_{2\mathrm{x}}$ and $\langle N\rangle$. By fitting the total emission intensity curve, we can obtain the biexciton quantum yield $Q_{2\mathrm{x}}$ = QY xx.
For the single-exciton-only PL intensity, the single-exciton emission saturates following the equation: $I_{\mathrm{x}} = \sum_{m=1}^{\infty}P\left(m,\langle N\rangle\right)Q_{\mathrm{x}} = Q_{\mathrm{x}}\left(1 - P\left(0,\langle N\rangle\right)\right) = Q_{\mathrm{x}}\left(1 - e^{-\langle N\rangle}\right) = Q_{\mathrm{x}}\left(1 - e^{-C\mu}\right)$ (2) where $P\left(m,\langle N\rangle\right)$ is the Poisson distribution function, $Q_{\mathrm{x}}$ is the single-exciton quantum yield, $C$ is a scaling factor depending on the absorption cross-section, and $\mu$ is the excitation intensity. Therefore, $C$ and $\langle N\rangle$ can be determined by fitting the single-exciton-only PL curve with the function above.
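A minimal sketch of this saturation model is given below; in practice the measured fluence-dependent intensities would be fitted against the model (for example with scipy.optimize.curve_fit) to extract QY xx. The function names are ours, not from the paper.

```python
# Sketch of the Poisson saturation model (Eqs. 1 and 2) used above to
# extract the biexciton quantum yield QY_xx from fluence-dependent PL.
from math import exp, factorial

def poisson(m, N):
    """P(m, <N>) = <N>^m exp(-<N>) / m!"""
    return N**m * exp(-N) / factorial(m)

def qy_mx(m, qy_2x):
    """m-exciton QY from k_r,mx = m^2 k_r,x and k_nr,mx = m^2 (m-1) k_nr,x:
    Q_mx = 1 / (1 + (m-1) k_nr,x / k_r,x); the rate ratio follows from
    the biexciton QY (m = 2). Note this scaling implies Q_1x = 1."""
    k_ratio = (1.0 - qy_2x) / qy_2x
    return 1.0 / (1.0 + (m - 1) * k_ratio)

def total_pl(N, qy_2x, m_max=20):
    """Eq. 1: I = sum_m [ sum_{j >= m} P(j, <N>) ] * Q_mx."""
    return sum(
        sum(poisson(j, N) for j in range(m, m_max + 1)) * qy_mx(m, qy_2x)
        for m in range(1, m_max + 1)
    )

def single_exciton_pl(N, qy_x=1.0):
    """Eq. 2: I_x = Q_x (1 - exp(-<N>))."""
    return qy_x * (1.0 - exp(-N))

# Evaluate at the fluences used in the experiment, with QY_xx ~ 78%:
for N in (0.15, 0.9, 2.4):
    print(f"<N> = {N:4.2f}   I_total = {total_pl(N, 0.78):.3f}   "
          f"I_x = {single_exciton_pl(N):.3f}")
```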
Data availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. | In a new paper published in eLight, a team of scientists led by Professors Haizheng Zhong and Yongyou Zhang from the Beijing Institute of Technology and Professor Haiyan Qin from Zhejiang University have discovered nonlocal effects in large semiconductor nanocrystals. They provide new strategies to achieve high-efficiency multiple-exciton emission for quantum optics and energy conversion applications. Auger recombination in bulk materials only slightly affects biexciton recombination due to the lower carrier density and momentum conservation. Thick-shelled CdSe/CdS nanocrystals were previously developed to suppress Auger recombination and gain high biexciton efficiency; this suppression was achieved by reducing the wave function overlap between the electrons and holes. Large colloidal QDs may be suitable candidates to generate efficient biexciton emission, but they have rarely been investigated. The research team reported that the Auger recombination rate in large perovskite nanocrystals could be exponentially decreased due to nonlocal effects. Nonlocal effects refer to the influence of the spatial dispersion of the wave on light-matter interactions. Nonlocal effects have been successfully used in plasmonics to explain the optical response of metallic nanostructures. Auger recombination can be described as energy transfer from an exciton to another electron or hole, or as a process in which an electron or hole absorbs an exciton and is promoted to a higher energy level. Accordingly, the nonlocal effects on Auger recombination are mainly determined by the wavefunction of the exciton. At room temperature, the estimated spatial wavelength of an exciton in CsPbBr3 is ~14 nm, making it possible to observe nonlocal-interaction-enhanced biexciton emission in large nanocrystals with sizes > 14 nm. Benefiting from the unique defect tolerance of perovskite nanocrystals, the research team observed high biexciton efficiency in large CsPbBr3 nanocrystals. There is a linear relation between the biexciton Auger recombination lifetime and volume for small nanocrystals. The maximum biexciton lifetime is ~100 ps due to the strong Auger recombination. For bulk materials, Auger recombination is mainly related to the carrier density and band structure, with a constant coefficient. For example, a bulk crystal with a carrier density of $10^{18}$ cm$^{-3}$ is predicted to have a biexciton lifetime of ~10 ns. In the mesoscale region, the nonlocal effects are expected to change the scaling of the biexciton lifetime with volume from linear to exponential, which is observed in large CsPbBr3 nanocrystals for the first time. In conclusion, the research team discovered nonlocal effects in the biexciton emission of CsPbBr3 nanocrystals by comparing their spectroscopic results on large nanocrystals with previously reported small nanocrystals. This nonlocal effect can be illustrated by considering the influence of the nonlocal interactions between carriers and excitons on Auger recombination. With increasing volume, the Auger recombination rate of large CsPbBr3 nanocrystals can be exponentially reduced to achieve a high biexciton efficiency of up to 80%. The discovered nonlocal effects in large nanocrystals provide a guideline for fabricating advanced quantum emitters with efficient biexciton (multiple-exciton) emission and create new opportunities to explore semiconductor nanocrystals beyond strong quantum confinement. | 10.1186/s43593-023-00045-3
Earth | EPA fails to follow landmark law to protect children from pesticides in food | Olga V. Naidenko, Application of the Food Quality Protection Act children's health safety factor in the U.S. EPA pesticide risk assessments, Environmental Health (2020). DOI: 10.1186/s12940-020-0571-6 Journal information: Environmental Health | http://dx.doi.org/10.1186/s12940-020-0571-6 | https://phys.org/news/2020-02-epa-landmark-law-children-pesticides.html | Abstract Background The Food Quality Protection Act of 1996, or FQPA, required the Environmental Protection Agency to set allowable levels for pesticides in a way that would “ensure that there is a reasonable certainty that no harm will result to infants and children from aggregate exposure to the pesticide chemical residue.” The act stipulated that an additional tenfold margin of safety for pesticide risk assessments shall be applied to account for pre- and postnatal toxicity and for any data gaps regarding pesticide exposure and toxicity, unless there are reliable data to demonstrate that a different margin would be safe for infants and children. Discussion To examine the implementation of the FQPA-mandated additional margin of safety, this analysis reviews 59 pesticide risk assessments published by the EPA between 2011 and 2019. The list includes 12 pesticides used in the largest amount in the U.S.; a group of 35 pesticides detected on fruits and vegetables; and 12 organophosphate pesticides. For the non-organophosphate pesticides reviewed here, the EPA applied an additional children’s health safety factor in 13% of acute dietary exposure scenarios and 12% of chronic dietary exposure scenarios. For incidental oral, dermal and inhalation exposures, additional FQPA factors were applied for 15, 31, and 41%, respectively, of the non-organophosphate pesticides, primarily due to data uncertainties. For the organophosphate pesticides as a group, a tenfold children’s health safety factor was proposed in 2015. Notably, in 2017 that decision was reversed for chlorpyrifos. Conclusions For the majority of pesticides reviewed in this study, the EPA did not apply an additional FQPA safety factor, missing an opportunity to fully use the FQPA authority for protecting children’s health. Background A recent study published in Environmental Health concluded that the U.S. has lagged behind other agricultural nations in banning harmful pesticides and suggested that pesticide bans can catalyze the transition to safer alternatives [ 1 ]. A similarly important question is whether, in the current practice of pesticide registrations in the U.S., all measures are taken to protect the public, especially infants and children, from the potentially harmful effects of pesticides. The Food Quality Protection Act (FQPA) of 1996 (Public Law 104–170) is considered landmark pesticide legislation because of its requirement to ensure a “reasonable certainty that no harm will result to infants and children from aggregate exposure to the pesticide chemical residue.” The FQPA passage led to restrictions on certain neurotoxic insecticides, including limitations on residential and agricultural spraying of organophosphate insecticides; lower levels of organophosphate residues on produce; and a 70% decline in the use of organophosphates between 2000 and 2012 in the United States [ 2 ].
The FQPA stipulated that “an additional tenfold margin of safety for the pesticide chemical residue and other sources of exposure shall be applied for infants and children to take into account potential pre- and postnatal toxicity and completeness of the data with respect to exposure and toxicity to infants and children.” The act also stated, “notwithstanding such requirement for an additional margin of safety, the Administrator may use a different margin of safety for the pesticide chemical residue only if, on the basis of reliable data, such margin will be safe for infants and children.” Initially, the tenfold children’s health safety factor was viewed as a key component of pesticide risk assessment. In a 2002 policy guidance on the determination of the appropriate FQPA safety factors, the EPA referred to “a presumption in favor of applying an additional 10X safety factor”, and a factor greater than 10X was also considered as an option [ 3 ]. Figure 1 demonstrates the relationship between the various uncertainty factors and the FQPA children’s health safety factor within pesticide risk assessments [ 3 , 4 , 5 , 6 ]. As described in numerous research publications and in two reports published by the National Research Council in 2006 [ 5 ] and 2009 [ 6 ], such assessments typically use a tenfold uncertainty factor to account for inter-species variation, or extrapolation from animal studies to humans, and another tenfold uncertainty factor to account for differences within the human population (Fig. 1 , upper portion of the chart). This practice reflects the concept of a 100-fold margin of safety, which dates back to 1954 [ 7 ]. Research over the past two decades has pointed out that the tenfold factors for interspecies and intraspecies differences may not fully represent the range of sensitivities within the human population, which can be greater than tenfold [ 6 ]. Yet these two default factors remain in practice and are used in the EPA pesticide risk assessments. Together with the 100-fold margin of safety, risk assessments can also include data uncertainty factors that address data gaps and limitations in the existing studies [ 3 , 4 ]. The FQPA introduced an additional 10-fold margin of safety, and the FQPA-mandated safety assessment applies both to the children’s health safety factor and to other sources of uncertainty (Fig. 1 , lower portion of the chart) [ 3 , 4 ]. Fig. 1 Relationship between the FQPA safety factors and other safety and uncertainty factors used in pesticide risk assessment. Graphic based on the reports by the EPA [ 3 , 4 ] and the National Research Council [ 5 , 6 ]
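The composition of these factors can be made concrete with a short worked example. The NOAEL below is a hypothetical round number, not taken from any assessment; labelling the FQPA-adjusted value a population-adjusted dose (PAD) follows standard EPA terminology.

```python
# Illustrative composition of the safety factors diagrammed in Fig. 1.
# The NOAEL is a hypothetical round number, not from any assessment.
noael = 10.0           # mg/kg/day, hypothetical point of departure
uf_interspecies = 10   # animal-to-human extrapolation
uf_intraspecies = 10   # variability within the human population
fqpa_factor = 10       # additional children's health safety factor

rfd = noael / (uf_interspecies * uf_intraspecies)   # conventional 100-fold
pad = rfd / fqpa_factor                             # with the 10X FQPA factor
print(f"reference dose (100-fold margin): {rfd:.2f} mg/kg/day")   # 0.10
print(f"with 10X FQPA factor (PAD):       {pad:.3f} mg/kg/day")   # 0.010
```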
The application of an additional children’s health factor in pesticide risk assessments was infrequent from the start of FQPA implementation, as the U.S. General Accounting Office (now the Government Accountability Office) reported in 2000 [ 8 ]. In 2001, the Consumers Union reported that of 82 pesticide safety determinations for organophosphate insecticides made in the first 5 years following the FQPA passage, a tenfold children’s health safety factor was applied in only 13 decisions (16%) [ 9 ]. The EPA Office of Inspector General stated in 2006 that the EPA “has primarily measured its success and the impact of FQPA by adherence to its reregistration schedule rather than by reductions in risk to children’s health” [ 10 ]. In the same year, the National Research Council reported that, of 59 pesticides reviewed, the EPA “found it unnecessary to apply an FQPA factor—that is, it uses a factor of 1—for all but 11 chemicals” [ 5 ]. The report added that for the five pesticides where the EPA applied a tenfold FQPA factor, “severe developmental toxicity end points, such as multiple malformations and fetal death, were observed in laboratory animals” [ 5 ]. The latest analysis of the FQPA implementation was published in 2013, when the Government Accountability Office examined the EPA’s safety factor decisions made between 1996 and 2012 and found that in 22% of those decisions, the EPA applied the default tenfold factor [ 11 ]. Thus, a new review of the EPA’s implementation of the FQPA safety factor is both timely and needed. Selection of pesticides for the analysis Pesticides selected for this analysis comprise three groups. The first group, consisting of 12 pesticides used in the largest volume in U.S. agriculture (Table 1 ), was identified from U.S. Geological Survey data for 2016, the latest year available [ 12 ]. The second group consists of 35 pesticides detected on fruits and vegetables in testing conducted by the U.S. Department of Agriculture between 2016 and 2018 (Table 2 ) [ 25 ]. The third group consists of 12 organophosphate insecticides that have been reviewed by the EPA since 2015 (Table 3 ). Table 1 Pesticides used in the largest volume in U.S. agriculture Table 2 Pesticides detected on fruits and vegetables Table 3 Organophosphate pesticides To confirm which pesticides are used in the largest volume in U.S. agriculture, the 20 most-used pesticides were first identified from USGS data [ 12 ]. From that group, sulfur, sulfuric acid, and petroleum oil were excluded as chemical substances that have extensive industrial and other uses outside of the agricultural market. The fumigants metam and metam potassium were excluded because the EPA considers them non-food chemicals that are not subject to FQPA review [ 70 ]. Chloropicrin was excluded because there are no established tolerances for chloropicrin on food, and therefore, according to the EPA, the FQPA does not apply to this pesticide [ 71 ]. The herbicides metolachlor and S-metolachlor were combined for the purposes of this analysis. The fumigant dichloropropene has several isomers, with 1,3-dichloropropene being the most common, sold under the trade name Telone. The USGS pesticide estimates are reported for “dichloropropene” without specification of the isomer; for subsequent review, this study focused on 1,3-dichloropropene. The USGS database uses two models for estimating pesticide use, EPest-low and EPest-high [ 12 ], and both estimates are included in Table 1 , rounded to a whole number. For ethephon, both the high and low use estimates correspond to around 9 million pounds. Next, this study reviewed pesticides detected on fruits and vegetables in testing conducted by the USDA between 2016 and 2018 [ 25 ]. The USDA tests different types of produce in different years, and a three-year time frame gives a more complete overview of pesticide occurrence on produce. Pesticides detected on 30% or more of the samples were selected for the analysis in this study. Three pesticides detected on more than 29.5% of the tested samples were also included: bifenthrin, flonicamid and metalaxyl.
The USDA testing is conducted on fruits and vegetables that are washed and, if needed for a specific type of produce, peeled prior to laboratory analyses [ 25 ]. Thus, these test results represent pesticide residues that would be directly ingested with the diet. The detection of pesticides on produce does not mean that the sample(s) violate the EPA’s legal limits for pesticides. Rather, the USDA test data are a reflection of current use patterns for pesticides in fruit and vegetable production in the U.S. The third group included organophosphates with active EPA registrations for which (1) the latest USGS data indicated that over 0.01 million pounds were applied annually [ 12 ]; and (2) an EPA review published in 2015 or later could be identified on the Regulations.gov website. The FQPA regulations require the EPA to review each registered pesticide every 15 years [ 72 ]. The latest EPA documents with FQPA determinations were identified from the EPA website and the materials posted on the U.S. government website Regulations.gov. Not all documents reviewed here represent finalized human health risk assessments. The reviewed documents also include draft assessments posted for public comment; scoping documents in support of pesticide registration reviews; Federal Register publications designating pesticide tolerances (allowable levels of a pesticide on specific foods); and, in the case of chlorpyrifos, an EPA document with a denial of a public petition. While some of the reviewed documents may not constitute the final EPA decision on the FQPA factor application for a particular pesticide, those documents reflect the EPA’s view on the FQPA factor determination, even in draft form, and are therefore included in the present analysis. Special cases are noted for the following pesticides. For the herbicide atrazine, the FQPA determination is for chronic dietary exposure to the atrazine metabolite hydroxyatrazine, with an FQPA factor of 1X [ 14 ]. For the four pyrethroid insecticides analyzed in this study (bifenthrin, cypermethrin, fenpropathrin, permethrin), the FQPA factor determination is based on a policy document released in July 2019, which stated that “the total FQPA safety factor for pyrethroids can be reduced to 1X for all populations” [ 29 ]. For the fungicide metalaxyl, the EPA did not assess a chronic dietary exposure scenario, and this pesticide is not included in the FQPA summary statistics for chronic dietary exposures presented in this study. Metalaxyl is included in the analysis of FQPA determinations for other scenarios (acute dietary exposure; incidental oral exposure; and inhalation exposure, all with a 1X FQPA factor) [ 49 ]. For the fungicide ametoctradin, the EPA did not conduct dietary, residential, occupational, or aggregate exposure assessments [ 27 ], and thus this pesticide was not included in further analysis in this study. The FQPA factor determinations for dietary exposures to non-organophosphates Of the 47 non-organophosphate pesticides reviewed in this study, the EPA evaluated acute dietary exposure scenarios for 31 chemicals and chronic dietary exposure scenarios for 41 chemicals (Tables 1 and 2 ). Among those, an additional FQPA factor was applied for acute dietary exposures for 4 pesticides (13% of reviewed scenarios): dimethomorph (10X); iprodione (10X); azoxystrobin (3X); and tebuconazole (3X). An additional FQPA factor was applied for chronic dietary exposures for 5 pesticides (12% of reviewed scenarios).
For the remaining dietary exposure scenarios reviewed here, no additional children’s health safety factor was applied, an approach that the EPA describes as an FQPA factor of 1X. Of the five pesticides with an added FQPA factor for chronic dietary exposures, three have a 10X FQPA factor (chlorpropham, glufosinate and iprodione) and two have a 3X FQPA factor (tebuconazole and thiophanate-methyl). The Additional file 1 accompanying this article quotes the EPA explanations for the assignment of additional FQPA factors. These rationales focus on data uncertainties, such as the use of a LOAEL instead of a NOAEL for setting the reference dose (azoxystrobin, dimethomorph, glufosinate, iprodione and tebuconazole); uncertainty regarding potential thyroid toxicity (chlorpropham); and a missing study (thiophanate-methyl). In light of the recent report by Donley on the differences in pesticide oversight in the U.S. and other countries [ 1 ], it is notable that the three pesticides with a 10X FQPA safety factor for chronic dietary exposures have use restrictions in the European Union and are classified as “not approved” in the EU pesticide database as of January 2020 [ 73 ]. In animal studies, chlorpropham was associated with an increase in Leydig cell tumors in the testes of male rats and the potential for endocrine disruption [ 74 , 75 ]. Glufosinate shows “critical indications of neurotoxicity” and changes in brain enzyme function and brain morphometrics; these effects were observed in the offspring at the lowest dose of glufosinate tested, and no “No Observed Adverse Effects Level” dose was identified [ 22 ]. Neurotoxic effects of glufosinate were also reported in the peer-reviewed scientific literature [ 76 , 77 ]. Finally, iprodione acts as an antiandrogenic compound, causing effects such as delayed onset of male puberty, altered anogenital distance, abnormal sperm, reductions in serum testosterone levels and persistence of areolas; it also increases the incidence of tumors in different species and organs, including interstitial Leydig cell tumors in rats and ovary luteomas and liver cell tumors in mice [ 47 , 78 , 79 ]. The EPA has classified iprodione as “Likely to be Carcinogenic to Humans” [ 47 ]. Overall, the review of the three pesticides with a 10X FQPA factor for chronic dietary exposures is reminiscent of the conclusion of the National Research Council, which stated that the EPA applied the 10X FQPA factor in a small number of cases where severe developmental toxicity was observed [ 5 ]. Further, neurotoxicity and developmental toxicity were also reported for the fungicide tebuconazole. In animal studies, tebuconazole caused malformations in nervous system development, changes in brain morphometric parameters, and decreases in motor activity, for which no “No Observed Adverse Effects Level” dose was identified [ 54 ]. The EPA classifies tebuconazole as “Group C, possible human carcinogen” [ 54 ]. Peer-reviewed studies reported that tebuconazole alters testosterone production and testicular morphometry [ 80 , 81 ]. The European Union classifies tebuconazole as “suspected of damaging the unborn child”, and a 2014 review by the European Food Safety Authority noted that a classification of “may damage the unborn child” might be considered [ 82 ]. These effects are similar to the toxicity findings for the three pesticides described above and could warrant a 10X additional safety factor.
In fact, when EPA scientists conducted the first FQPA assessment for this pesticide in 1998, they recommended a 10X FQPA safety factor for acute dietary exposures to tebuconazole for females age 13 and older and for infants and children [ 83 ]. In 2008, the EPA assigned a 3X FQPA factor for tebuconazole by referring to an unpublished benchmark dose analysis of toxicological studies for this pesticide [ 84 ], and that decision was maintained in a 2018 assessment [ 54 ]. Thiophanate-methyl, the fifth pesticide identified in this study that has an additional FQPA factor for chronic dietary exposures, is classified by the EPA as likely carcinogenic to humans, “based on thyroid tumors in rats and liver tumors in mice and evidence of aneugenicity” [ 56 ]. Endocrine disruption activity of thiophanate-methyl and its impacts on the thyroid and other endocrine pathways have been reported in peer-reviewed studies [ 85 , 86 ]. In 2005, the EPA published a 3X FQPA safety factor for thiophanate-methyl for both acute and chronic dietary exposures, “due to an incomplete toxicity database” [ 87 ]; subsequently, the FQPA factor for acute dietary exposure to thiophanate-methyl was removed (reduced to 1X), while a 3X FQPA factor was retained for chronic dietary exposure, due to the lack of a developmental thyroid study [ 56 ]. The EPA has not required a developmental neurotoxicity study for thiophanate-methyl [ 56 ] and dismissed an argument that a 10X FQPA factor should be applied to this pesticide due to its endocrine disrupting effects [ 87 ]. It thus remains to be seen what the FQPA determination in the final EPA assessment for thiophanate-methyl will be. The FQPA factor determinations for organophosphates Next, this study analyzed 12 organophosphate pesticides for which the FQPA determinations were made in 2015 and thereafter: acephate, bensulide, chlorethoxyfos, chlorpyrifos, diazinon, dicrotophos, dimethoate, ethoprop, malathion, phosmet, terbufos, and tribufos (Table 3 ). The choice of 2015 as the cut-off year is due to the 2015 publication of the EPA’s “Literature Review on Neurodevelopment Effects & FQPA Safety Factor Determination for the Organophosphate Pesticides” which stated that the FQPA 10X Safety Factor will be retained for organophosphates “for the population subgroups that include infants, children, youths, and women of childbearing age for all exposure scenarios” [ 88 ]. This decision was re-affirmed in an updated literature review on the topic, completed by the EPA in 2016 and released in 2017 [ 89 ]. Of the 12 organophosphates reviewed in this article, the 10X FQPA factor is applied in the latest published human health assessment documents for 11 chemicals, with the notable exception of chlorpyrifos (Table 3 ). For example, for acephate, the 10X uncertainty factor was applied “for infants, children, youth, and women of child-bearing age for all exposure scenarios due to uncertainty in the human dose-response relationship for neurodevelopmental effects” in a draft human health risk assessment published in 2018 [ 59 ]. The 10X FQPA factors were also assigned to the other ten organophosphates listed in Table 3 [ 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 ]. For chlorpyrifos, a human health risk assessment document published in 2016 applied a 10X FQPA factor [ 90 ], in line with the EPA’s latest policy documents for organophosphates. This decision was overturned in 2017 with the denial of a petition to revoke chlorpyrifos tolerances [ 58 ]. 
For this analysis, the FQPA determination deserves to be quoted in its entirety: “Dow AgroSciences submitted a comparative cholinesterase study (CCA) for chlorpyrifos. CCA studies are specially designed studies to compare the dose-response relationship in juvenile and adult rats. This CCA study includes two components: (1) Acute, single dosing in post-natal day 11 and young adult rats and (2) 11-days of repeating dosing in rat pups from [postnatal day] 11-21 and 11-days of repeated dosing in adult rats. The CCA study for chlorpyrifos is considered by EPA to be high quality and well-designed. The preliminary risk assessment for chlorpyrifos’ reports BMD estimates from this CCA study. Specifically, for the repeated dosing portion of the study, the BMD 10 s of 0.80 (0.69 BMDL 10 ) and 1.0 (0.95 BMDL 10 ) mg/kg/day respectively for female pups and adults support the FQPA safety factor of 1X for the AChE inhibition endpoint used in the 2006 OP CRA [Cumulative Risk Assessment].” [ 58 ] Thus, with a single paragraph, a 10X FQPA additional safety factor for chlorpyrifos was replaced with a 1X factor, a reversal of the overall trend of using a 10X FQPA factor for organophosphates. This decision is notable for several reasons. First, the text refers to a non-peer-reviewed, unpublished study from a company that manufactures chlorpyrifos. Second, the decision contradicts the EPA’s earlier position on the FQPA factor for organophosphates [ 88 , 89 ]. Finally, it stands in stark contrast to the peer-reviewed scientific literature and epidemiological studies demonstrating chlorpyrifos toxicity to infants and children [ 2 , 91 , 92 ]. As of January 2020, chlorpyrifos is classified as “not approved” in the European Union pesticides database [ 73 ], and it is scheduled for a phase-out from the European market by April 2020 due to concerns about possible genotoxicity and developmental neurotoxicity associated with this pesticide. The FQPA assessments for non-dietary exposure pathways In addition to dietary exposures, pesticide risk assessments consider other exposure pathways, such as incidental oral, inhalation, and dermal exposures, which can contribute to the aggregate exposure for a specific pesticide. These scenarios are relevant for pesticide exposures in residential settings (both indoor and outdoor), such as pesticide applications on lawns and turf, insect repellent sprays, and pet treatments, as well as for incidental exposure from pesticide-treated materials, paints and preservatives [ 93 ]. Of note, restricting the residential use of organophosphates is considered one of the accomplishments of the FQPA, such as the elimination of homeowner uses of chlorpyrifos in 2000 [ 94 ]. Non-dietary scenarios also apply to exposure from aerial drift of pesticides sprayed near places where children live, play, and study [ 95 ]. Risk assessment for exposure pathways other than diet may draw on toxicological studies specific to those exposure routes, and the FQPA factor determinations may differ between exposure scenarios. For some pesticides analyzed in this study, the EPA assigned an FQPA factor for just one additional exposure pathway, or for no pathways other than dietary exposure. For example, for glyphosate there is an FQPA assessment for incidental oral exposure, with an FQPA factor of 1X. For other pesticides, the EPA published FQPA factors for multiple exposure pathways and duration scenarios.
For example, for metolachlor, the incidental oral, dermal and inhalation exposure scenarios all have an FQPA factor of 1X. Table 4 lists the non-organophosphate pesticides analyzed in this study for which the EPA assigned an FQPA factor for at least one non-dietary scenario, with a total of 30 pesticides and 80 non-dietary exposure scenarios. Pyrethroid insecticides are not included in this part of the analysis, as finalized human health risk assessments for the four pyrethroid insecticides analyzed in this study (bifenthrin, cypermethrin, fenpropathrin, and permethrin) are not yet available (see Discussion below). Overall, for incidental oral, dermal and inhalation exposures to non-organophosphate pesticides, additional FQPA safety factors were applied for 15, 31, and 41% of pesticides, respectively. The additional FQPA factor is most common for inhalation assessments; 11 out of 27 pesticides analyzed here have an FQPA factor greater than 1X for this exposure pathway (Table 4 ). Table 4 The FQPA factor determinations for non-dietary exposures Of the five non-organophosphate pesticides with an additional FQPA safety factor for dietary exposures, an additional 10X factor was applied to other exposure scenarios for chlorpropham, glufosinate, and iprodione, and a 3X FQPA factor was applied to non-dietary exposure scenarios for tebuconazole and thiophanate-methyl. Among pesticides with a 1X FQPA factor for dietary exposures, 3X and 30X FQPA factors were applied, respectively, for the acute and repeated residential inhalation exposure scenarios for the fungicide chlorothalonil [ 21 ]. The FQPA factor of 30X was composed of 3X for the use of a LOAEL from the acute inhalation study (no NOAEL observed) and a 10X factor for the extrapolation of the findings of an acute study to longer durations of exposure (see Additional file 1 ). The EPA describes chlorothalonil as “highly toxic via inhalation” and “Likely to be Carcinogenic to Humans” [ 21 ]; it is currently not approved for use in the European Union. The EPA also applied a 10X FQPA factor for inhalation exposures to the fungicides cyprodinil, thiabendazole, and trifloxystrobin, as well as for the herbicide 2,4-D and one form of the herbicide dicamba. These FQPA factors were assigned due to database uncertainties, such as a missing study or an extrapolation from a LOAEL to a NOAEL (see Additional file 1 ). Finally, among the organophosphate pesticides listed in Table 3 , 10X FQPA factors were assigned for non-dietary exposure scenarios for acephate, chlorethoxyfos, diazinon, dicrotophos, dimethoate, ethoprop, malathion, and tribufos [ 59 , 60 , 61 , 62 , 65 , 67 , 68 , 69 ]. For bensulide [ 63 ] and phosmet [ 64 ], 10X FQPA safety factors were assigned for all exposure pathways except inhalation, where a factor of 30X was applied due to “uncertainty in the human dose-response relationship for neurodevelopmental effects” and a lack of the necessary inhalation studies for these pesticides. Discussion The goal of this study is to stimulate discussion on the implementation of the children’s health safety factor in pesticide risk assessments. Overall, this analysis documented a rather limited use of additional FQPA factors in the EPA risk assessments of non-organophosphate pesticides. For acute dietary, chronic dietary, incidental oral, dermal and inhalation scenarios, respectively, 13, 12, 15, 31 and 41% of the reviewed pesticides have an additional FQPA factor for these exposure pathways.
These statistics are similar to what was reported in the first decade after the FQPA passage [ 5 , 8 , 9 , 10 , 11 ]. Importantly, even when an additional FQPA factor is assigned, it does not necessarily represent children’s health protection; for non-organophosphate pesticides, the EPA’s primary rationale for applying an additional FQPA factor is data uncertainty. In 2006, the EPA Office of Inspector General wrote that significant challenges remain in the FQPA implementation and that addressing these challenges could improve children’s health [ 96 ]. For example, the Office of Inspector General noted that the EPA’s required testing for pesticide registrations “does not include sufficient evaluation of behavior, learning, or memory in developing animals.” In the view of the author, the EPA has not yet succeeded in addressing fine neurological changes that may occur following pesticide exposures, as exemplified by the case of chlorpyrifos, where high-quality data from human studies were dismissed in favor of a mechanistic study conducted by the pesticide manufacturer [ 58 ]. Over the course of this research project, new EPA documents relevant to the FQPA determinations have been published. First, the EPA’s “Re-Evaluation of the FQPA Safety Factor for Pyrethroids”, released in July 2019, stated that the FQPA safety factor for pyrethroids can be reduced to 1X for all populations [ 29 ]. This approach would remove the 3X FQPA safety factors that were previously applied for pyrethroid exposures for children 6 and younger for scenarios such as acute dietary exposures, incidental oral exposures, dermal, and inhalation exposures [ 97 , 98 , 99 ]. As of January 2020, the EPA’s proposal on the FQPA determination for pyrethroids has not yet been finalized and remains open for public comment. However, the re-evaluation document gives an indication of the EPA’s projected removal of FQPA factors for this group of insecticides. Since final determinations are not yet available, pyrethroids were not included in the analysis of non-dietary exposure pathways presented in this study; if the EPA finalizes the proposal to apply a 1X FQPA factor to pyrethroids, this would further decrease the frequency of FQPA factor application for non-dietary exposure scenarios. Second, two recent EPA assessments for the herbicide metolachlor are now available for comparison, from 2018 [ 100 ] and 2019 [ 15 ]. In both documents, a 1X FQPA factor was applied for all exposure scenarios. Yet, in the 2019 assessment, the EPA proposed to establish the reference dose for metolachlor based on an older rather than a newer animal toxicology study, which would increase the exposure limit for metolachlor by 2.7-fold [ 15 ]. In risk assessments published in 1995, 2014 and 2018, the chronic reference dose for metolachlor was based on a 1-year toxicity study in dogs conducted in 1989, with a reported No Observed Adverse Effect Level (NOAEL) of 9.7 mg/kg/day [ 100 ]. The same study and the same NOAEL have been used in the European Union assessment of metolachlor and the development of the European “acceptable daily intake” dose for chronic exposures to this chemical [ 101 ]. Rather than requiring a new, more refined toxicology study, in the 2019 assessment for metolachlor the EPA used a 1981 study on rats, with a reported NOAEL of 26 mg/kg/day, resulting in a higher chronic reference dose [ 15 ].
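The 2.7-fold figure follows directly from the two NOAELs, as the short check below shows. The derived reference doses assume the conventional 100-fold composite factor and the 1X FQPA factor applied in the cited assessments; they are illustrative rather than the published values, which may involve further adjustments.

```python
# The 2.7-fold increase quoted above follows directly from the two NOAELs.
noael_dog_1989 = 9.7    # mg/kg/day, 1-year dog study (1995-2018 basis)
noael_rat_1981 = 26.0   # mg/kg/day, rat study used in the 2019 proposal
composite_uf = 100      # 10X interspecies x 10X intraspecies, FQPA = 1X

rfd_old = noael_dog_1989 / composite_uf   # ~0.097 mg/kg/day
rfd_new = noael_rat_1981 / composite_uf   # ~0.26 mg/kg/day
print(f"chronic RfD, dog NOAEL: {rfd_old:.3f} mg/kg/day")
print(f"chronic RfD, rat NOAEL: {rfd_new:.2f} mg/kg/day")
print(f"increase: {rfd_new / rfd_old:.1f}-fold")   # ~2.7
```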
This decision diverges from the EPA's own guidance on FQPA implementation, which emphasized the identification of the most sensitive toxicity effects for reference dose development [ 3 ]. This study raises two themes for future research and public policy: first, the inclusion of human data whenever available; and second, the development of novel approaches for assessing pre- and postnatal toxicity. The importance of both topics was noted in earlier reviews [ 5 , 102 ]. In a landmark 1993 report, "Pesticides in the Diets of Infants and Children" [ 103 ], the National Research Council highlighted the vulnerability of infants and children to pesticides, laying the groundwork for the Food Quality Protection Act. Five years after the passage of the FQPA, the Centers for Disease Control and Prevention (CDC) published their first report documenting the presence of organophosphate pesticide metabolites in urine [ 104 ]. Since that time, biomonitoring studies conducted by the CDC and by independent researchers have reported the presence of multiple pesticides and their metabolites in the American population, including the herbicides 2,4-D and glyphosate; the neonicotinoid insecticides acetamiprid, clothianidin and imidacloprid; organophosphate and pyrethroid insecticides; and fungicide metabolites [ 105 , 106 , 107 , 108 ]. In 2006, the National Research Council wrote that biomonitoring has become central to "identifying, controlling, and preventing population exposures to potentially harmful environmental chemicals" [ 109 ]. More research is needed on pesticide exposures for children under 6, since the CDC biomonitoring program focuses on populations 6 years of age and older [ 105 , 110 ], and on methods for including biomonitoring data in pesticide risk assessments. In 2016, the EPA completed a "Framework for Incorporating Human Epidemiologic & Incident Data in Risk Assessments for Pesticides," which noted that toxicology and risk assessment are undergoing transformational changes driven by new research [ 111 ]. One such development relevant to risk assessment is the identification of Key Characteristics applicable to endocrine disruption [ 112 ], male and female reproductive toxicity [ 113 , 114 ] and carcinogenesis [ 115 , 116 ]. The Key Characteristics approach highlighted the diversity of toxicity pathways that can lead from exposure to environmental contaminants to an elevated risk of disease. It remains to be seen how the EPA will incorporate these new approaches into regulatory risk assessment for cancer [ 117 , 118 ], identification of endocrine disruptors [ 119 ], and risks to children's health. Conclusions The use of an additional 10-fold margin of safety for children's health protection remains highly relevant for pesticide risk assessments in light of the ever-growing research on the human health impacts of environmental contaminants and new exposure data generated by biomonitoring studies. Such a health-protective approach is especially important for pesticides that can cause harm to the nervous system, hormonal disruption and cancer. In the meantime, the limited application of the FQPA factor in pesticide risk assessments is a missed opportunity for the EPA to fully use the authority of the Food Quality Protection Act for protecting children's health. Availability of data and materials This study used publicly accessible information from the U.S. Government website Regulations.gov .
Abbreviations 2,4-D: 2,4-dichlorophenoxyacetic acid BMD: Benchmark dose BMDL: Benchmark dose lower confidence limit CDC: Centers for disease control and prevention EPA: U.S. Environmental Protection Agency FQPA: Food quality protection act of 1996 LOAEL: Lowest observed adverse effect level NOAEL: No observed adverse effect level UF: Uncertainty factor USDA: U.S. Department of Agriculture USGS: U.S. Geological Survey | The landmark Food Quality Protection Act requires the Environmental Protection Agency to protect children's health by applying an extra margin of safety to legal limits for pesticides in food. But an investigation by EWG, published this week in a peer-reviewed scientific journal, found that the EPA has failed to add the mandated children's health safety factor to the allowable limits for almost 90 percent of the most common pesticides. The study in Environmental Health examined the EPA's risk assessments for 47 non-organophosphate pesticides since 2011, including those most commonly found on fresh fruits and vegetables, and found that the required additional tenfold safety factor was applied in only five cases. "Given the potential health hazards of pesticides in our food, it is disturbing that the EPA has largely ignored the law's requirement to ensure adequate protection for children," said the study's author, Olga Naidenko, Ph.D., vice president for science investigations at EWG. "The added safety factor is essential to protect children from pesticides that can cause harm to the nervous system, hormonal disruption and cancer." The Food Quality Protection Act of 1996, or FQPA, requires the EPA to set allowable levels for pesticides in a way that would "ensure that there is a reasonable certainty that no harm will result to infants and children from aggregate exposure to the pesticide chemical residue." It was hailed as a revolutionary recognition of the fact that children are more vulnerable to the effects of chemical pesticides than adults. "Based on the strong consensus of the pediatric and the public health communities, the FQPA stated unequivocally that regulation of toxic pesticides must focus, first and foremost, on protecting infants and children," said Dr. Philip Landrigan, a pediatrician and epidemiologist who is director of the Program in Global Public Health and the Common Good at Boston College. "When the EPA fails to apply this principle, children may be exposed to levels of chemical pesticides that can profoundly harm their health." Landrigan chaired the committee that authored "Pesticides in the Diets of Infants and Children," a 1993 report from the National Academy of Sciences. The groundbreaking study led to the FQPA's passage with bipartisan support and the backing of both industry and environmentalists. "The FQPA was a revolution in how we think about pesticides' effects on children, but it does no good if the EPA doesn't use it," said EWG President Ken Cook. "It's not only necessary to protect kids' health, it's the law, and the EPA's failure to follow the law is an egregious betrayal of its responsibility." Naidenko's study also examined EPA risk assessments for a particularly toxic class of pesticides called organophosphates, which act in the same way as nerve gases like sarin and are known to harm children's brains and nervous systems. She found that under the Obama administration, the tenfold children's health safety factor was proposed for all organophosphate insecticides. 
By contrast, in four assessments of pyrethroid insecticides, the EPA under the Trump administration has proposed not to apply the additional FQPA safety factor in any of them. In human epidemiological studies conducted in the U.S. and in Denmark, exposure to pyrethroid insecticides was associated with an increased risk of attention deficit hyperactivity disorder. In 2017, the EPA reversed the Obama administration's FQPA determination for chlorpyrifos, the most widely used organophosphate pesticide in the U.S. Despite the Trump EPA's decision, in the wake of bans by Hawaii, California and New York, the main U.S. chlorpyrifos manufacturer recently announced it will stop making this chemical. It remains to be seen whether the Trump EPA will uphold the tenfold FQPA determination for the entire group of organophosphates. The study also found that the Trump EPA has proposed to increase the allowable exposure to the herbicide metolachlor by 2.6-fold. The use of metolachlor has been on the rise for the past decade, with more than 60 million pounds sprayed annually, according to the U.S. Geological Survey. Biomonitoring studies conducted by the Centers for Disease Control and Prevention and by independent researchers reported the presence of multiple pesticides and their byproducts in the American population, including herbicides such as glyphosate and 2,4-D, the bee-killing neonicotinoid insecticides, organophosphate and pyrethroid insecticides, and fungicide metabolites. | 10.1186/s12940-020-0571-6 |
Biology | Solved: Mystery of marine nitrogen cycling in shelf waters | Katharina Kitzinger et al, Single cell analyses reveal contrasting life strategies of the two main nitrifiers in the ocean, Nature Communications (2020). DOI: 10.1038/s41467-020-14542-3 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-14542-3 | https://phys.org/news/2020-02-mystery-marine-nitrogen-shelf.html | Abstract Nitrification, the oxidation of ammonia via nitrite to nitrate, is a key process in marine nitrogen (N) cycling. Although oceanic ammonia and nitrite oxidation are balanced, ammonia-oxidizing archaea (AOA) vastly outnumber the main nitrite oxidizers, the bacterial Nitrospinae. The ecophysiological reasons for this discrepancy in abundance are unclear. Here, we compare substrate utilization and growth of Nitrospinae to AOA in the Gulf of Mexico. Based on our results, more than half of the Nitrospinae cellular N-demand is met by the organic-N compounds urea and cyanate, while AOA mainly assimilate ammonium. Nitrospinae have, under in situ conditions, around four-times higher biomass yield and five-times higher growth rates than AOA, despite their ten-fold lower abundance. Our combined results indicate that differences in mortality between Nitrospinae and AOA, rather than thermodynamics, biomass yield and cell size, determine the abundances of these main marine nitrifiers. Furthermore, there is no need to invoke yet undiscovered, abundant nitrite oxidizers to explain nitrification rates in the ocean. Introduction Nitrification is a key process in oceanic N-cycling as it oxidizes ammonia via nitrite to nitrate, which is the main source of nitrogen for many marine primary producers. In the oceans, ammonia is mainly oxidized to nitrite by ammonia oxidizing archaea (AOA) 1 , 2 and the resulting nitrite is further oxidized to nitrate by nitrite oxidizing bacteria (NOB). Most inorganic fixed N (i.e. nitrate, nitrite and ammonium) in the oceans is present in the form of nitrate (99%), and <0.1% occurs in the form of nitrite, suggesting that any nitrite formed by ammonia oxidizers is immediately oxidized to nitrate 3 , 4 . While the discovery of the AOA 5 , which comprise up to 40% of the microbial community 6 , resolved the longstanding mystery of the apparently missing ammonia oxidizers 4 , it raised the question as to whether there is an equally abundant, yet undiscovered nitrite oxidizer 7 . Such an organism has to date not been found and the known marine nitrite oxidizers have a 10-fold lower abundance than AOA 8 , 9 , 10 , 11 , 12 , 13 . The reasons for this discrepancy in abundance are poorly understood and could be due to ecophysiological differences between nitrite and ammonia oxidizers. These likely include the lower theoretical energy gain from nitrite oxidation compared to ammonia oxidation 14 and the larger cell sizes of NOB compared to AOA 5 , 11 , 15 , 16 . Nonetheless, it is unclear which factors keep nitrite and ammonia oxidation rates in balance due to the lack of knowledge concerning the in situ ecophysiology of marine nitrite and ammonia oxidizers. In part, this is because nitrite oxidation is rarely investigated as a standalone process in marine systems 8 , 9 , 17 , 18 , 19 and marine nitrite oxidizers are rarely quantified 9 , 12 , 20 , 21 . 
Based on the available data, marine nitrite oxidation is carried out primarily by members of the phylum Nitrospinae 8 , 9 , 11 , 22 , and to a lesser extent by members of the genera Nitrococcus 8 , 23 and Nitrospira 24 , 25 . To date, only two Nitrospinae pure cultures are available 15 , 26 and both belong to the genus Nitrospina , whilst most Nitrospinae detected by cultivation-independent approaches in the marine environment belong to the candidate genus “Nitromaritima” (Nitrospinae Clade 1) and Nitrospinae Clade 2 (refs 11 , 22 , 27 ) (Supplementary Fig. 1 ). The two cultivated Nitrospina species display relatively high growth rates, with doubling times of ~1 day 15 , 26 . The genome of one of the cultivated species, Nitrospina gracilis , has been sequenced, which revealed that the key enzyme for nitrite oxidation, nitrite oxidoreductase (NXR), is closely related to the NXR of Nitrospira and anammox bacteria 28 . Furthermore, Nitrospina gracilis was shown to use the reductive tricarboxylic acid cycle (TCA) cycle for autotrophic C-fixation 28 . In contrast, the ecophysiology of the more environmentally relevant Nitrospinae, “ Ca . Nitromaritima” (Nitrospinae Clade 1) and Nitrospinae Clade 2, is largely uncharacterized. A recent environmental study has suggested that these Nitrospinae clades, besides being the main known nitrite oxidizers in the oceans, also play a key role in dark carbon (C) fixation, fixing as much as, or more inorganic C in the ocean than the AOA 11 . So far, however, direct comparisons of in situ C-assimilation and growth rates of NOB and AOA are lacking. Another largely unexplored facet of Nitrospinae ecophysiology is their N-assimilation strategy. Genome-based studies have shown that many environmental Nitrospinae encode the enzymes urease and cyanase (the latter is also found in the cultured N. gracilis ), which allow for assimilation of the simple organic N-compounds urea and cyanate 11 , 22 , 28 , 29 . Direct evidence for in situ assimilation of organic N-compounds by NOB is missing. Another role of urease and cyanase could be in “reciprocal feeding”, where NOB provide ammonia oxidizers with ammonia derived from the organic N-compounds and then receive the resulting nitrite 29 , 30 . Thus, organic N-use likely affects the distribution and activity of marine Nitrospinae and their interactions with the AOA. Here, we determine key ecophysiological traits of Nitrospinae and compare them to those of AOA in the hypoxic shelf waters of the Gulf of Mexico (GoM). The GoM is an ideal study site to elucidate the in situ ecophysiology of these nitrite oxidizers, as it is an area characterized by high nitrite oxidation activity, which appears to be driven by Nitrospinae as the main NOB 18 . We investigate nitrite oxidation activity and growth rates of GoM Nitrospinae in comparison to GoM AOA in the same samples 31 by combining metagenomics and metatranscriptomics with stable isotope incubations and single cell techniques. Furthermore, the in situ assimilation of the dissolved organic N (DON) compounds urea and cyanate by Nitrospinae is determined, and Nitrospinae biomass yields are compared to those of the AOA. Our results show that GoM Nitrospinae and AOA display different N-utilization strategies. While Nitrospinae use mainly DON in form of urea to meet their N-requirements for assimilation, AOA predominantly assimilate ammonium. Furthermore, GoM Nitrospinae are highly efficient in converting energy to growth, and grow significantly faster than the far more abundant AOA. 
Our combined results indicate that in contrast to previous assumptions, the main mechanism that maintains the difference in abundance between AOA and Nitrospinae is a different mortality rate, rather than thermodynamics, biomass yield or cell size. Results and discussion Nitrite and ammonia oxidation in the Northern GoM Nitrite and ammonia oxidation rates were determined during an East–West sampling transect on the Louisiana Shelf of the GoM in July 2016 (Supplementary Fig. 2 ). Due to summertime eutrophic conditions 32 , bottom waters were hypoxic at the time (<63 µM oxygen, max. water depth at the sampled stations was 18.5 m). Hypoxic bottom waters generally coincided with highest median ammonium (320 nM), urea (69 nM), cyanate (11.5 nM), nitrite (848 nM) and nitrate (2250 nM) concentrations 31 (Fig. 1a–c , Supplementary Fig. 3 ). These concentrations are similar to previous observations 18 . Fig. 1: Depth profiles of nutrient concentrations, nitrite and ammonia oxidation rates, and Nitrospinae and AOA cell counts 31 at stations in the Northern GoM. a – c In situ oxygen, nitrite and nitrate concentration profiles at Station 1 ( a ), Station 2 ( b ), and Station 3 ( c ). d – f Nitrite and ammonia oxidation rates and Nitrospinae and AOA CARD-FISH counts at Station 1 ( d ), Station 2 ( e ), and Station 3 ( f ). Nitrite and ammonia oxidation rates are depicted as green and white bars, respectively, and were calculated from slopes across all time points of triplicate incubations. Error bars represent standard error of the slope. Surface nitrite and nitrate concentrations ( a – c ) as well as CARD-FISH counts ( d – f ), were taken from the same station, the day before stable isotope labelling experiments were carried out. Shaded gray areas indicate sediment (max. water depth was 18.5 m) (see also Supplementary Fig. 7 ). Full size image Nitrite and ammonia oxidation rates were in a similar range, with rates between 25 and 700 nM day −1 for nitrite oxidation and 80–2500 nM day −1 for ammonia oxidation 31 (Fig. 1d–f ). Nitrite oxidation rates were in the range of the few rates that have been reported previously from the GoM 18 and other oxygen deficient waters 3 , 8 , 9 , 19 . There was no clear relationship between nitrite and ammonia oxidation rates in the GoM (Supplementary Fig. 4 ). For example, ammonia oxidation outpaced nitrite oxidation rates at Station 2, whereas at Station 3, nitrite oxidation rates were higher than ammonia oxidation rates at 12 m and 14 m depth (Fig. 1e, f ). This suggests that nitrite and ammonia oxidation at individual stations and depths are not tightly linked, which is in line with previous observations in the GoM 18 and most likely can be attributed to the dynamic conditions in this region 18 . The local decoupling of nitrite and ammonia oxidation in the GOM provides a unique opportunity to study both processes independently. There was no correlation between the nitrite oxidation rates and nitrite concentration (Fig. 2a ); however, there was a significant positive correlation between ammonia oxidation rates and nitrite concentrations 31 (Fig. 2b ). This indicates that in the GoM, as in most of the ocean, ammonia oxidation, rather than nitrate reduction to nitrite, was the main source of nitrite 3 . Fig. 2: Correlations between nitrite and ammonia oxidation rates and nitrite concentrations across investigated stations. a Correlation between nitrite oxidation rate and nitrite concentration. 
b Correlation between ammonia oxidation rate and nitrite concentration (reproduced from ref. 31 ). The black line is the linear regression, R 2 was calculated on the basis of Pearson correlations, and was significant (two-sided t -test, t = 8.002, DF = 7, P = 9.10 × 10 −5 ). Error bars represent standard error of the process rates calculated from slopes across all time points and replicates. Full size image Nitrite oxidizing community; composition and abundance To identify the NOB responsible for nitrite oxidation in the GoM, 16S rRNA gene amplicon and deep metagenomic sequencing were performed, and in situ metatranscriptomes were obtained. The only detectable known NOB based on 16S rRNA gene sequences in both amplicon and metagenomic datasets belonged to the phylum Nitrospinae (Fig. 3 ). Nitrococcus , another marine NOB that is frequently found in shelf areas 23 , was not detected in our dataset. The metagenomes and metatranscriptomes were screened for the presence and transcription of the alpha subunit of nitrite oxidoreductase ( nxrA ), a key gene for nitrite oxidation. In line with the 16S rRNA gene results, almost all (84–98%) identified metagenomic nxrA fragments were affiliated with Nitrospinae (Supplementary Fig. 5 ). A further 2–15% of the metagenomic read fragments mapped to nxrA of the NOB genus Nitrolancea 33 . However, only the nxrA genes of Nitrospinae were detected in the metatranscriptomics datasets (Supplementary Fig. 5 ). Fig. 3: A maximum likelihood reconstruction of Nitrospinae 16S rRNA gene phylogeny. Nitrospinae 16S rRNA gene sequences retrieved from GoM metagenomes are indicated as “GoM Nitrospinae” and printed in bold Outgroup sequences represent cultured Deltaproteobacteria. GoM metagenomic read fragments (FPKM) were mapped onto the alignment and are shown next to the respective clades as circles. FPKM mapping to internal basal nodes were grouped and are displayed separately. The scale bar represents estimated nucleotide substitutions per site, and bootstrap values >90% are displayed. Full size image Based on the retrieved metagenomic Nitrospinae 16S rRNA gene reads, several co-occurring Nitrospinae were identified: 85–94% of the metagenomic Nitrospinae 16S rRNA reads were affiliated with Nitrospinae Clade 2, 2–11% were affiliated with “ Ca . Nitromaritima” (Nitrospinae Clade 1), and 0.1–2% were affiliated with the genus Nitrospina 11 , 22 (Fig. 3 ). Members of Nitrospinae Clade 2, the most abundant Nitrospinae in our dataset, are environmentally widespread and have previously been detected in metagenomes from open ocean waters, oxygen minimum zones and the seasonally anoxic Saanich inlet 11 , 27 . Additionally, our analyses of 16S rRNA gene sequences from global amplicon sequencing data sets (sequence read archive, SRA) revealed that phylotypes closely related (>99% identity 34 ) to GoM Nitrospinae Clade 2 occur worldwide in temperate and tropical ocean waters (Supplementary Fig. 6 ). To constrain absolute nitrite oxidizer cell numbers, in situ cell counts were performed by catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) using specific probes for Nitrococcus 35 , Nitrospira 36 , and Nitrobacter 37 . We designed a new Nitrospinae-specific probe (Ntspn759), as the published Nitrospinae probes (Ntspn693 (ref. 35 ) and the recently published probe Ntspn-Mod 11 ) covered only a fraction of the known Nitrospinae, and did not cover all sequences in our dataset. 
The newly developed Ntspn759 probe targeted all of the obtained GoM Nitrospinae 16S rRNA gene sequences and 91% of the known 16S rRNA gene diversity of the family Nitrospinaceae, which contains all known Nitrospinae NOB ( Supplementary Methods ). The only NOB in the GoM detectable by CARD-FISH were Nitrospinae, which is in line with the observations from amplicon and metagenomic sequencing that Nitrospinae were the main NOB. Nitrospinae were hardly detectable by CARD-FISH in the surface waters, and numbers increased with depth, reaching up to 2.8 × 10 4 cells ml −1 just above the sediment. Based on CARD-FISH counts, Nitrospinae constituted at most 1% of the microbial community at all depths and stations (Fig. 1 , Supplementary Fig. 7 ). Nitrospinae CARD-FISH counts were an order of magnitude lower than those of the only detectable ammonia oxidizers, the AOA, in the same samples (using probe Thaum726) 31 , 38 , 39 (Fig. 1 , Supplementary Fig. 8a ). A similar difference in abundance between these two nitrifier groups was also seen in the 16S rRNA gene amplicon dataset and the abundance of Nitrospinae and AOA metagenome assembled genomes (MAGs) 31 (Supplementary Fig. 8b, c ). The lower abundance of NOB compared to AOA in marine systems has been observed before in metagenome, amplicon, and qPCR-based studies 10 , 12 , 13 , 40 , 41 , 42 . Our results confirm this trend using CARD-FISH, a more direct quantification method that is independent of DNA extraction and primer biases. In addition to the in situ Nitrospinae and AOA 31 counts, CARD-FISH counts were carried out at the end of the 15 N and 13 C incubations, which revealed that in some incubations, Nitrospinae and AOA abundances increased (up to five- and six-fold, respectively) within the incubation period of 24 h (Supplementary Data 1 ). Per cell nitrite and ammonia oxidation rates The per cell nitrite oxidation rate may play a key role in determining the abundance of NOB in the environment, as this rate largely determines the energy that can be gained at a single cell level. Such values have not been reported before for marine NOB, as absolute NOB cell numbers are rarely quantified at the same time as bulk nitrite oxidation rates. In fact, per cell nitrite oxidation rates have not been reported even for pure Nitrospina cultures. As the Nitrospinae were the only significant known NOB in the GoM, we were able to calculate per cell nitrite oxidation rates by assuming that all of the Nitrospinae detected by CARD-FISH were active (which is in line with our nanoSIMS data, see below). Average CARD-FISH cell counts between the start and the end of the incubations were combined with the bulk nitrite oxidation rates (Supplementary Data 1 ) to calculate per cell nitrite oxidation rates, which ranged from 21 to 106 fmol per cell per day. These rates were ~15-fold higher than the per cell ammonia oxidation rates of the AOA from the same samples (1–8 fmol-N cell −1 day −1 ) 31 (see Methods). These per cell nitrite oxidation rates are in line with those that can be estimated by combining qPCR data for Nitrospinae 16S rRNA gene abundance and bulk nitrite oxidation rates from the Eastern tropical North Pacific 9 , where Nitrospinae also dominate the NOB community. Those rates ranged from 0 to 107 fmol nitrite per cell per day, assuming that Nitrospinae from the Eastern tropical North Pacific, like N. gracilis 28 , have a single rRNA operon. 
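The underlying arithmetic is simply the bulk rate divided by the average cell abundance over the incubation. A minimal R sketch follows; the cell counts are hypothetical examples chosen within the ranges reported here, not measured values:

```r
# Per cell nitrite oxidation rate = bulk rate / mean Nitrospinae abundance
bulk_rate <- 700                          # nmol NO2- oxidized L^-1 d^-1 (upper end, Fig. 1)
cells_t0  <- 1.0e4 * 1000                 # in situ count, cells L^-1 (example)
cells_t24 <- 2.0e4 * 1000                 # count after 24 h of incubation, cells L^-1 (example)
cells_avg <- mean(c(cells_t0, cells_t24))

bulk_rate * 1e6 / cells_avg               # 1 nmol = 1e6 fmol -> ~47 fmol cell^-1 d^-1
```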
The success of NOB in oxygen deficient waters has, amongst other factors, been attributed to a high affinity for oxygen 19 , 43 . Our incubations were carried out at in situ oxygen concentrations, ranging from 1 to 160 µM. There was no correlation between Nitrospinae per cell nitrite oxidation rates and oxygen concentrations (Supplementary Fig. 9 ). This indicates that the nitrite oxidizers in the GoM were never oxygen limited, but are well adapted to low oxygen concentrations, as observed previously in other regions 19 , 43 . Cellular carbon content of Nitrospinae and AOA Despite their low abundance, Nitrospinae have recently been estimated to be responsible for more dark carbon (C) fixation in marine systems than the highly abundant AOA 11 . This could imply that the bulk population C-content of the Nitrospinae is higher than the bulk population C-content of the AOA. Previous studies indicate that Nitrospinae cells are larger than AOA cells 5 , 11 , 15 , but the differences in cell and population size have never been quantified in situ and subsequently converted to cellular or population C-content. In order to quantify the C-content of the NOB and AOA populations in the GoM, cell volumes were calculated from nanoscale secondary ion mass spectrometry (nanoSIMS) measurements. The GoM Nitrospinae were on average four-fold larger in volume than the AOA. This is in contrast to previous estimates by Pachiadaki et al. 11 , who reported Nitrospinae cells to be 50-fold larger than AOA cells. However, their calculations were based on cell diameter estimates obtained from flow cytometry and assumed spherical cell shapes, whereas in the GoM and in culture, AOA resemble rods or prolate spheres and Nitrospinae cells are curved rods 5 , 15 , 16 , 26 . By applying a scaling factor for C-content based on cell biovolume 44 , we calculated that the GoM Nitrospinae contained approximately two times as much C per cell (100 ± 23 (SD) fg-C cell −1 ) as AOA (50 ± 16 (SD) fg-C cell −1 , Table 1 ). The AOA in the GoM were visibly larger (length × width = 0.6 ± 0.1 (SD) × 0.4 ± 0.1 (SD) µm) than many cultured marine AOA 5 , 44 , 45 , 46 (length × width = 0.5–2 × 0.15–0.26 µm) and those normally observed in environmental studies. As such, the GoM AOA cellular C-content was higher than previously determined, ranging from 9 to 17 fg-C cell −1 (refs 44 , 46 , 47 , 48 ). Table 1 Parameters for estimating biomass yield by GoM AOA and Nitrospinae. Full size table By combining the in situ Nitrospinae and AOA cell abundances and their per cell C-content, the bulk C-content of both nitrifier populations was estimated. The C-content at all investigated stations and depths ranged from 0.06 to 2.52 bulk-µg-C L −1 for the Nitrospinae population and 0.67–20.75 bulk-µg-C L −1 for the AOA population. Thus, the overall Nitrospinae population C-content was ~10-fold lower than that of the AOA population. In situ growth rates of Nitrospinae and AOA In situ growth rates for Nitrospinae have not been reported so far. NanoSIMS was performed on samples from Station 2, 14 m depth, which were amended with 13 C-bicarbonate and 15 N-ammonium (or 15 N-nitrite, see Methods) to determine single cell Nitrospinae growth rates. Autotrophic growth rates from C-fixation were 0.25 ± 0.01 (SE) day −1 and ammonium-based growth rates were 0.53 ± 0.03 (SE) day −1 (Fig. 4 ), corresponding to doubling times of 2.8 and 1.3 days, respectively. 
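The bulk population C-content values given above follow from multiplying cell abundance by the per cell C-content. A minimal R sketch, using the per cell values from this study and example abundances (Nitrospinae near their maximum observed density, and ten-fold more AOA):

```r
# Population C-content = cell abundance x per cell C-content
fgC_per_cell <- c(nitrospinae = 100, aoa = 50)              # fg C cell^-1 (this study)
cells_per_L  <- c(nitrospinae = 2.8e4, aoa = 2.8e5) * 1000  # example abundances, cells L^-1

cells_per_L * fgC_per_cell / 1e9   # 1e9 fg = 1 µg -> ~2.8 and ~14 µg C L^-1
```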
The discrepancy between C- and N-based growth may be partly due to C isotope dilution by the CARD-FISH procedure 49 , 50 . The dilution of Nitrospinae cellular carbon by 12 C-derived from the polycarbonate filters might also have affected the measured single cell 13 C-uptake rates (see Methods). Additionally, the discrepancy between C- and N-based growth could indicate that the Nitrospinae use intracellular C-storage compounds to support growth or were growing mixotrophically, for which there was some evidence in the Nitrospinae MAGs (see below). Fig. 4: Nitrospinae and AOA growth rates calculated from 13 C-bicarbonate and 15 N-ammonium assimilation measured by nanoSIMS. Nitrospinae 13 C-bicarbonate assimilation rates were determined from water samples after the addition of 15 N-ammonium and 13 C-bicarbonate, and 15 N-nitrite and 13 C-bicarbonate. AOA data were acquired from the incubation with added 15 N-ammonium and 13 C-bicarbonate only and were taken from Kitzinger et al. 31 . Number of cells analyzed per population is indicated above each boxplot. Boxplots depict the 25–75% quantile range, with the center line depicting the median (50% quantile); whiskers encompass data points within 1.5x the interquartile range. Data of each measured cell are shown as points; horizontal position was randomized for better visibility of individual data points. Nitrospinae had significantly higher growth rates than AOA, as indicated by stars (one-sided, two-sample Wilcoxon test, W = 1984, p = 4.04 × 10 −16 for growth based on 13 C-bicarbonate assimilation and W = 1464, p = 3.32 × 10 −12 for growth based on 15 N-ammonium assimilation). Full size image Compared to the Nitrospinae, the AOA in the GoM had significantly lower growth rates based on both 13 C-bicarbonate assimilation (0.04 ± 0.005 (SE) day −1 ) and 15 N-ammonium assimilation (0.23 ± 0.01 (SE) day −1 ) 31 (Fig. 4 ). It should be noted that the lower measured AOA autotrophic ( 13 C-based) growth rates may also be affected by the smaller cell size of AOA in comparison to Nitrospinae, which likely leads to a stronger C-isotope dilution effect due to 12 C-derived from the polycarbonate filters (see Methods). The measured lower growth rates of AOA compared to Nitrospinae were, however, also in good agreement with substantially lower per cell oxidation rates of AOA compared to Nitrospinae. In situ organic N use by Nitrospinae Intriguingly, the ammonium-based growth rate (0.5 day −1 ) of the Nitrospinae was substantially lower than that calculated from the increase in cell numbers during the incubation period, which corresponded to a growth rate of 1.2 day −1 (0.6 days doubling time). This indicates that the Nitrospinae may have been assimilating N-sources other than ammonium. Metagenomic studies and analysis of the N. gracilis genome have indicated that some Nitrospinae can use the simple dissolved organic N-compounds (DON) urea and cyanate as additional N-sources 11 , 22 , 27 , 28 , 29 . To assess whether this is the case in the environment, single cell N-assimilation based on the incorporation of 15 N-ammonium, 15 N-urea, 15 N-cyanate, and 15 N-nitrite was determined by nanoSIMS. All measured Nitrospinae cells were significantly enriched in 15 N for all tested substrates (Fig. 5 ). Furthermore, the Nitrospinae assimilated significantly more 15 N from all these compounds than surrounding microorganisms, including the AOA 31 . Intriguingly, ammonium and urea were assimilated equally by Nitrospinae, followed by cyanate. 
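The doubling times quoted above follow directly from the exponential growth relationship DT = ln(2)/GR (Eq. (2) in Methods). The AOA doubling times below are derived the same way and are not stated explicitly in the text:

```r
# Doubling times from specific growth rates: DT = ln(2) / GR
gr <- c(nitrospinae_C = 0.25, nitrospinae_N = 0.53,
        aoa_C = 0.04, aoa_N = 0.23)   # d^-1, as reported above

log(2) / gr   # ~2.8 and ~1.3 d for Nitrospinae; ~17 and ~3 d for the AOA (derived)
```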
Nitrite assimilation by Nitrospinae was much lower compared to the other tested substrates. We calculated the growth rates of Nitrospinae and AOA from N-assimilation of all tested substrates combined (Supplementary Fig. 10 ). The combined N-based growth rate was 1.2 day −1 for Nitrospinae and 0.26 day −1 for AOA, which agrees well with cell count based growth rates of 1.2 day −1 and 0.25 day −1 for Nitrospinae and AOA, respectively (determined at Station 2, 14 m depth). This implies that GoM Nitrospinae and AOA could meet all of their cellular N-demand by using ammonium, urea and cyanate. In fact, when taken together, urea, and cyanate assimilation met more than half of the Nitrospinae cellular N-demand, while AOA mainly assimilated ammonium. Utilization of DON for N-assimilation is likely a key factor for the ecological success of Nitrospinae, as it allows them to avoid competition with AOA, whom they depend on for their substrate, nitrite. The use of reduced DON (or ammonium) is also favored over nitrite because six reducing equivalents are required to reduce nitrite to ammonium before assimilation, which is metabolically costly. Thus, from an ecophysiological perspective, utilization of DON as N-source by Nitrospinae is highly advantageous. Fig. 5: Nitrospinae single cell 15 N-assimilation from ammonium, urea, cyanate and nitrite measured by nanoSIMS. a Representative CARD-FISH images of Nitrospinae (green, stained by probe Ntspn759) and other cells (blue, stained by DAPI). b Corresponding nanoSIMS image of 15 N at% enrichment after addition of 15 N-ammonium, urea, cyanate or nitrite. Nitrospinae are marked by white outlines. Scale bar is 1 μm in all images. c 15 N at% enrichment of Nitrospinae (green), AOA (white) and other, non-targeted cells (gray) after incubation with 15 N-ammonium, 15 N-urea, 15 N-cyanate or 15 N-nitrite. AOA data were taken from Kitzinger et al. 31 for comparison. Note that non-targeted cells depicted here also include AOA cells, as no specific AOA probe was included in the Nitrospinae nanoSIMS measurements. Number of cells analyzed per category is indicated above each boxplot. Boxplots depict the 25–75% quantile range, with the center line depicting the median (50% quantile); whiskers encompass data points within 1.5x the interquartile range. NA is the natural abundance 15 N at% enrichment value (0.37%). Full size image Nitrospinae MAG analyses To assess the genomic basis for DON utilization by Nitrospinae, we screened the GoM metagenomes for the presence and transcription of Nitrospinae-like cyanase and urease genes. From five deeply sequenced metagenomes, we obtained seven Nitrospinae MAGs, representing three closely related Nitrospinae population clusters (hereafter referred to as population cluster A, B, and C). Nitrospinae population cluster A made up 0.003–0.358% and population cluster B 0.008–0.152% of the metagenomic reads, compared to the lower abundance population cluster C with 0.003–0.050% (Supplementary Table 1 ). All obtained MAGs were affiliated with Nitrospinae Clade 2 (Fig. 1 , Supplementary Fig. 1 ). In line with the observed assimilation of 15 N from 15 N-ammonium and 15 N-nitrite, the MAGs contained both ammonium and nitrite transporters, as well as assimilatory nitrite reductase genes (Supplementary Table 2 ). The nanoSIMS data implied that all measured Nitrospinae are capable of urea and cyanate assimilation. 
Accordingly, at least one MAG representative of each population cluster A, B and C contained urease and/or urea ABC-transporter genes, supporting the observed in situ assimilation of urea-derived N (Supplementary Table 2 ). Nitrospinae-affiliated urease genes were also transcribed in the GoM (Supplementary Fig. 11 ). Metagenomic read fragment abundance (FPKM) of Nitrospinae-affiliated ureC genes was very similar to FPKM values of Nitrospinae 16S rRNA (SSU) and rpoB gene abundance in all metagenome datasets (average FPKM ureC : FPKM SSU = 1.2, FPKM ureC : FPKM rpoB = 1.7), indicating that all GoM Nitrospinae encoded ureC . In contrast, clearly Nitrospinae-affiliated cyanase ( cynS ) genes were much less abundant in the metagenome datasets (average FPKM cynS : FPKM SSU = 0.09, FPKM cynS : FPKM rpoB = 0.1). In fact, only one of the MAGs (population cluster B) contained the cynS gene (Supplementary Table 2 ), and its transcription was not detected in the metatranscriptomes (Supplementary Fig. 12 ). This contrasts with the obtained nanoSIMS data, where all measured Nitrospinae incorporated N from cyanate. The reason for this discrepancy is unknown. However, as cynS has previously been shown to undergo horizontal gene transfer 29 , 51 , it is possible that GoM Nitrospinae contain additional cyanases not closely related to previously known Nitrospinae cynS genes 31 . In addition to urea and cyanate utilization genes, the MAGs also encoded spermidine, amino acid and (oligo-)peptide ABC-type transporters, which may provide additional N- and C-sources for growth. The presence of a sugar transport system likely taking up sucrose, a fumarate/malate/succinate transporter, as well as many uncharacterized ABC transporter systems further indicated that the GoM Nitrospinae have a potential for mixotrophic growth (Supplementary Table 2 ). Mixotrophic growth of GoM Nitrospinae might contribute to the differences observed between 13 C-bicarbonate- and 15 N-based growth rates and may contribute to their high measured growth rates and environmental success. The Nitrospinae MAGs provided little evidence for alternative chemolithoautotrophic energy generation pathways, which is in good agreement with recent findings from other oxygen deficient waters 27 . As in all other sequenced nitrite oxidizers, including N. gracilis 28 , the Nitrospinae MAGs encoded a copper-containing nitrite reductase ( nirK ). Furthermore, the MAG with the lowest abundance encoded a putative NiFe 3b hydrogenase, similar to the one found in the genome of N. gracilis 28 . Overall, the potential for known alternative energy generating pathways was low in the obtained MAGs of Nitrospinae Clade 2. However, it cannot be excluded that Ca . Nitromaritima (Nitrospinae Clade 1) and Nitrospina , which also occur in the GoM at lower abundance, and for which no MAGs were obtained, do have additional metabolic versatility. In situ N- and C-assimilation rates of Nitrospinae and AOA Single cell and population N- and C-assimilation rates were calculated for Nitrospinae and AOA using the 15 N-enrichment and their cellular N-content as calculated from their biovolumes (see Methods). Average Nitrospinae N-assimilation in fmol-N per cell per day was 0.42 ± 0.03 (SE) for 15 N-ammonium, 0.43 ± 0.02 (SE) for 15 N-urea, 0.05 ± 0.01 (SE) for 15 N-cyanate and 0.003 ± 0.0004 (SE) for 15 N-nitrite. Thus, the combined Nitrospinae N-assimilation from all 15 N-substrates together was 0.91 fmol-N per cell per day.
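These per substrate rates define the Nitrospinae N-budget; summing them in R reproduces the combined rate and the conclusion that organic N covers more than half of the cellular N-demand:

```r
# Nitrospinae single cell N-assimilation budget (fmol-N cell^-1 d^-1, values from the text)
n_assim <- c(ammonium = 0.42, urea = 0.43, cyanate = 0.05, nitrite = 0.003)

sum(n_assim)                                       # ~0.90 (reported as 0.91 before rounding)
sum(n_assim[c("urea", "cyanate")]) / sum(n_assim)  # ~0.53 -> DON meets >half of the N-demand
```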
In comparison to Nitrospinae, the single cell N-assimilation rates (in fmol-N per cell per day) of AOA were significantly lower, with 0.11 ± 0.01 (SE) for 15 N-ammonium, 0.005 ± 0.001 (SE) for 15 N-urea, 0.004 ± 0.0002 (SE) for 15 N-cyanate; and the combined AOA N-assimilation rate from all 15 N-substrates together was 0.12 fmol-N per cell per day. Due to the probable bias in measured 13 C-enrichment measurements (see above), C-assimilation for both Nitrospinae and AOA was estimated from the measured 15 N-assimilation rates, following the Redfield ratio of C:N (6.6:1, see Methods). The combined Nitrospinae C-assimilation rate was 6.0 fmol-C per cell per day, compared to a much lower combined AOA C-assimilation rate of 0.76 fmol-C per cell per day. When these values were combined with the Nitrospinae and AOA cell counts, the population C-assimilation was ~80 nmol-C per liter per day for the Nitrospinae and ~400 nmol-C per liter per day for the AOA. Nitrospinae and AOA C-assimilation was also calculated from the increase in cell counts before and after incubation and their cellular C-content. The C-assimilation rate based on cell count increase was ~75 nmol-C per liter per day for the Nitrospinae population, and ~480 nmol-C per liter per day for AOA; these values are similar to those calculated from the 15 N-tracer additions. Contrasting life strategies of Nitrospinae and AOA From a thermodynamic perspective, nitrite oxidation is a much less exergonic process than ammonia oxidation 14 . This is also the case under conditions representative for the GoM, where Gibbs free energy release is −65 kJ per mol for nitrite oxidation, compared to −262 kJ per mol for ammonia oxidation (Supplementary Table 3 ). Based on the measured bulk nitrite and ammonia oxidation rates at Station 2, 14 m depth (Fig. 1 ), nitrite oxidation provides ~0.04 Joule per liter per day, and ammonia oxidation ~0.7 Joule per liter per day in the hypoxic GoM waters. Therefore, from a purely thermodynamic perspective, AOA biomass should increase about ten times faster than that of the Nitrospinae in the GoM (Fig. 1 ). This, however, assumes that they have an equal biomass yield (i.e. they are fixing the same amount of C per Joule, see below), which likely is not the case 52 . The Joule energy gain was combined with the population C-assimilation rates of ~80 nmol-C per liter per day for the Nitrospinae, and ~400 nmol-C per liter per day for the AOA population (Table 1 ) to calculate the biomass yield for nitrite and ammonia oxidation (i.e. nmol-C fixed per Joule gained). Intriguingly, the biomass yield for the Nitrospinae population was ~2150 nmol-C per Joule, while AOA population biomass yield was only ~610 nmol-C per Joule (Table 1 ). This implies that Nitrospinae are ~4-fold more efficient in translating the energy gained from the oxidation of nitrite to C-assimilation than the AOA are in translating energy gained from ammonia oxidation. This is surprising considering that AOA use the HP/HB C-fixation pathway, which is suggested to be the most energy efficient aerobic autotrophic C-fixation pathway (requiring five ATP per generated pyruvate 53 ). Nitrospinae employ the reverse tricarboxylic acid cycle (rTCA) for autotrophic C-fixation 28 . This pathway is highly energy efficient under anaerobic conditions (requiring two ATP per generated pyruvate) but is highly sensitive to oxygen 54 . A previous study has suggested that the Nitrospinae replace the oxygen sensitive enzymes by less oxygen sensitive versions 28 . 
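The biomass yield comparison above reduces to dividing the population C-assimilation rate by the catabolic energy flux. Recomputing it from the rounded values quoted in the text gives yields close to the reported ~2150 and ~610 nmol-C per Joule, which were calculated from unrounded data:

```r
# Biomass yield = population C-assimilation / energy flux from nitrite or ammonia oxidation
c_assim  <- c(nitrospinae = 80,   aoa = 400)   # nmol C L^-1 d^-1 (this study)
energy_J <- c(nitrospinae = 0.04, aoa = 0.7)   # J L^-1 d^-1 (Station 2, 14 m)

yield <- c_assim / energy_J                    # ~2000 and ~570 nmol-C per Joule
yield["nitrospinae"] / yield["aoa"]            # ~3.5, i.e. the roughly four-fold difference
```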
Our results imply that at least under the low oxygen conditions in the GoM, the rTCA cycle in the Nitrospinae is also highly energy efficient. However, additional factors likely contribute to the apparently higher biomass yield of Nitrospinae when compared to the AOA. According to the most recent metabolic models, the AOA must synthesize at least three enzymes to oxidize ammonia to nitrite 55 , 56 . It is noteworthy, that of the six electrons from aerobic ammonia oxidation to nitrite, only two directly contribute to energy conservation 51 , while the other four are required for the reduction of molecular oxygen during the conversion of ammonia to hydroxylamine by the ammonia monooxygenase. In comparison, the Nitrospinae have a shorter respiratory chain, oxidizing nitrite to nitrate in a single reaction, before transferring the two electrons from nitrite oxidation to oxygen. Additionally, the active site of NXR in Nitrospinae is located in the periplasm; therefore, the protons generated during nitrite oxidation might directly contribute to the proton motive force, and thus to ATP generation 28 . All of these factors, which are not captured in thermodynamic comparisons, could lead to a higher than predicted biomass yield of Nitrospinae compared to AOA. In this context, reverse electron transport, which is required for generating reducing equivalents for CO 2 -fixation in Nitrospinae and AOA, must also be considered. This may be energetically more expensive for Nitrospinae compared to AOA, however, to date, no information is available that allows a meaningful comparison of the actual energetic costs associated to reverse electron transport in Nitrospinae and AOA. A further factor that could contribute to the apparent differences in biomass yield is mixotrophic growth of Nitrospinae, i.e. assimilation of organic C in addition to autotrophic C-fixation. Mixotrophy would lead to C-assimilation that requires less energy and thus the calculated biomass yield would be an overestimate, as it assumes that the measured N-assimilation is matched by autotrophic C-fixation (see Methods). Nevertheless, comparison of the directly measured 13 C-bicarbonate (DIC) assimilation by Nitrospinae and AOA also indicated that the Nitrospinae had a much higher biomass yield (~465 nmol-C per Joule) than the AOA (~105 nmol-C per Joule, Table 1 ). In principle, the biomass yield of the Nitrospinae could also have been overestimated if they were using other electron donors in addition to nitrite, such as sulfur or hydrogen; however, little evidence for the use of alternative electron donors was found in the investigated Nitrospinae MAGs (see above). Alternatively, rather than overestimating the biomass yield of the nitrite oxidizers, the yield of the AOA might have been underestimated if they were releasing significant amounts of dissolved organic C (DOC), as recently shown for AOA pure cultures 57 . If this occurs in the environment as well, it would have wide ranging implications for our understanding of the impact of the highly abundant AOA on C-cycling in the dark ocean. The fact that the AOA outnumber NOB ten to one in the GoM and other marine systems despite lower AOA growth rates indicates a higher mortality rate of Nitrospinae than of AOA. This mortality could for example be due to viral lysis or zooplankton grazing. We did not perform experiments to assess the relative importance of these two controlling factors. 
However, both viral lysis and zooplankton grazing have previously been shown to play a major role in bacterioplankton population control 58 . Taken together, our results show that despite their lower in situ abundance, Nitrospinae in the GoM are more energy efficient, and grow faster than AOA. If our results can be extended to the rest of the ocean, no additional undiscovered NOB are required to account for the global oceanic balance between ammonia and nitrite oxidation. Furthermore, the results presented here show that Nitrospinae meet most of their cellular N-requirement by the assimilation of N from urea and cyanate, in contrast to AOA, which mainly assimilate ammonium. We hypothesize that differences in mortality, biomass yield and organic N-utilization between Nitrospinae and AOA are likely key factors regulating the abundances of these main nitrifiers in the ocean. Methods Sampling Sampling was undertaken on the Louisiana Shelf in the Northern Gulf of Mexico aboard the R/V Pelican , cruise PE17-02, from 23 July to 1 August 2016, on an East–West transect from 92°48′4″ W to 90°18′7″ W 31 . Briefly, seawater was sampled with 20 L Niskin bottles on a rosette equipped with a conductivity, temperature, depth (CTD), and an SBE 43 oxygen sensor. Water column nutrient profiles (ammonium, nitrite, nitrate, urea, cyanate) were measured at nine stations (surface to water-sediment interface at max. 18.5 m). Nitrite oxidation rate measurements, N- and CO 2 -assimilation measurements, molecular and CARD-FISH analyses were carried out at three of the nine stations (Supplementary Fig. 2 ). Nutrient sampling and analysis were carried out as described in Kitzinger et al. 31 . Briefly, samples for ammonium, nitrite and urea concentrations were measured onboard immediately after collection, following the procedures of Holmes et al. 59 , Grasshoff et al. 60 , and Mulvenna et al. 61 , respectively. Samples for cyanate concentration measurements were derivatized onboard and stored frozen until analysis using high performance liquid chromatography (Dionex, ICS-3000 system coupled to a fluorescence detector, Thermo Scientific, Dionex Ultimate 3000) 62 . Samples for the determination of nitrate concentrations were stored frozen until analysis following Braman and Hendrix 63 . N- and CO 2 -assimilation and nitrite oxidation experiments Stable isotope experiments were done at three stations and three depths in and below the oxycline as previously described 18 , 31 . These experiments were designed to assess ammonium, urea, cyanate and nitrite assimilation, autotrophic CO 2 (bicarbonate, DIC) fixation and nitrite oxidation rates. Briefly, seawater was filled into 250 ml serum bottles from Niskin bottles and allowed to overflow three times to minimize oxygen contamination. Serum bottles were then sealed bubble-free with deoxygenated rubber stoppers 64 and stored at in situ temperature (28 °C) in the dark until the beginning of the experiments (<7 h). All experimental handling took place under red light to minimize phytoplankton activity. Tracer amendments (Supplementary Table 4 ) were made to triplicate serum bottles at each depth to investigate urea ( 15 N 13 C-urea), cyanate ( 15 N 13 C-cyanate), ammonium ( 15 N-NH 4 + ), and nitrite ( 15 N-NO 2 − ) assimilation and oxidation rates. All amendments were made as 5 µM additions. In the ammonium and nitrite assimilation experiments, 200 µM 13 C-bicarbonate ( 13 C-NaHCO 3 ) was added to investigate autotrophic CO 2 fixation. 
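For orientation, the initial labeling percentage of each substrate pool follows from the tracer addition relative to the ambient concentration. In the R sketch below, the ambient ammonium value is the median reported above, while the ambient DIC pool (~2000 µM, typical for seawater) is an assumption rather than a measurement from this study:

```r
# Initial labeling fraction of a substrate pool after tracer addition
label_pct <- function(tracer_uM, ambient_uM) 100 * tracer_uM / (tracer_uM + ambient_uM)

label_pct(5, 0.32)    # 5 µM 15N-ammonium into ~320 nM ambient -> ~94% 15N labeling
label_pct(200, 2000)  # 200 µM 13C-bicarbonate into an assumed ~2000 µM DIC pool -> ~9% 13C
```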
Tracer aliquots were dissolved in sterile filtered seawater at the start of every experiment to minimize abiotic breakdown. As described in Kitzinger et al. 31 , after tracer addition, a 40 ml helium headspace was set in each serum bottle and oxygen concentrations were adjusted to match in situ conditions (Supplementary Data 1 ). Oxygen concentrations remained within 20% of in situ concentrations throughout the incubations, as determined by optical sensors in separate bottles (Firesting, Pyroscience). Samples were taken at the start of each experiment to determine the labeling percentage of 15 N and 13 C-DIC 59 , 61 , 62 . Thereafter, serum bottles were incubated in the dark at in situ temperature (28 °C). After 6, 12, and 24 h, 20 ml of seawater was sampled and replaced with He, sterile filtered and frozen. Serum bottle headspaces were again flushed with He, and oxygen was added to match in situ concentrations. After 24 h, the remaining seawater from triplicate incubations was combined, and 20 ml were fixed and filtered onto 0.22 µm GTTP filters for catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) and onto 0.22 µm gold sputtered GTTP filters for nanoSIMS analyses (see below). Nitrite oxidation rate measurements Nitrite oxidation rates were determined from the increase in 15 N-nitrate over time after the addition of 15 N-nitrite. After the removal of any residual nitrite with sulfamic acid, nitrate was reduced to nitrite using spongy cadmium and subsequently converted to N 2 via sulfamic acid 8 , 65 . The resulting N 2 was then measured by GC-IRMS on a customized TraceGas coupled to a multicollector IsoPrime100 (Manchester, UK). Rates were calculated from the slopes of linear regressions across all time points from the triplicate serum bottles and were corrected for the initial 15 N-labeling percentage. Only slopes that were significantly different from 0 were reported ( p < 0.05, one-sided Student's t -test, R v. 3.5.1) 66 . When non-significant regressions were found, rates were reported as below detection limit. For the determination and calculation of ammonia oxidation rates see Kitzinger et al. 31 . 13 C-DIC labeling percentage measurements 13 C-DIC labeling percentages were determined from the first time point by 13 C-CO 2 / 12 C-CO 2 measurements after sample acidification 67 using cavity ring-down spectroscopy (G2201-i coupled to a Liaison A0301, Picarro Inc., Santa Clara, USA, connected to an AutoMate Prep Device, Bushnell, USA). CARD-FISH counts and per cell oxidation and growth rates To visualize and quantify cells of the Nitrospinaceae family, a new oligonucleotide probe was designed ( Supplementary Methods , Supplementary Fig. 13 ). For Nitrospinae and AOA quantification, seawater samples from each station and depth were fixed with 1% paraformaldehyde (final concentration, without methanol, EMS) for 12–24 h at 4 °C before filtration (<400 mbar) onto 0.22 µm GTTP filters (Millipore). Filters were stored frozen at −20 °C until analysis. Nitrospinae and AOA 31 abundances were determined by CARD-FISH according to Pernthaler et al. 68 ( Supplementary Methods ), using the newly designed Nitrospinae probe and probe Thaum726 for AOA 31 , 38 , 39 . Samples were additionally screened by CARD-FISH for other marine NOB of the genera Nitrospira (probe Ntspa662) 36 , Nitrobacter (probe Nit3) 37 and Nitrococcus (probe Ntcoc84) 35 at the respective published formamide concentrations; published competitor probes were used for all CARD-FISH experiments.
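As a concrete illustration of the rate calculation described under "Nitrite oxidation rate measurements" above, the R sketch below regresses 15N-nitrate production on time across triplicates, tests the slope against zero and corrects for the initial labeling fraction. All data shown are simulated placeholders, not measurements:

```r
# Simulated triplicate time series (nM 15N-nitrate at 0, 6, 12 and 24 h)
time_d  <- rep(c(0, 0.25, 0.5, 1), each = 3)
no3_15N <- c(1, 0, 2, 98, 103, 95, 201, 190, 212, 405, 390, 398)

fit   <- lm(no3_15N ~ time_d)                 # linear regression across all time points
slope <- coef(fit)[["time_d"]]                # nM 15N-nitrate d^-1
# halve the two-sided p-value for a one-sided test of slope > 0
p_one_sided <- summary(fit)$coefficients["time_d", "Pr(>|t|)"] / 2

slope / 0.94   # correct for an assumed 94% initial 15N labeling -> nM NO2- oxidized d^-1
```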
Nitrospinae and AOA growth during the incubation time was assessed by CARD-FISH, and growth rates (Eq. ( 1 )) and doubling times (Eq. ( 2 )) were estimated according to: $${\mathrm{GR = ln}}\left( {N_t/N_0} \right)/t$$ (1) $${\mathrm{DT = ln}}\left( 2 \right)/{\mathrm{GR}}$$ (2) where GR is growth rate, N t the number of Nitrospinae or AOA cells at time t (cell counts after incubation), N 0 the number of cells at time 0 (in situ cell counts), t the incubation time in days and DT is doubling time in days. Per cell Nitrospinae nitrite oxidation rates were estimated by combining measured bulk nitrite oxidation rates and Nitrospinae cell abundance, as determined by averaging Nitrospinae in situ counts and Nitrospinae counts after 24 h of incubation, as in Stieglmeyer et al. 69 . Per cell AOA ammonia oxidation rates were calculated accordingly, and therefore differ from previously reported rates which only took into account in situ AOA abundances 31 . NanoSIMS analyses and single cell C-content and growth rates At the end of each incubation experiment, the content of triplicate serum bottles was combined. The seawater was filtered (<100 mbar) onto gold sputtered 0.22 µm GTTP filters (Millipore) at the end of the incubations, and fixed in 3% paraformaldehyde (in sterile filtered seawater) for 30 min at room temperature, washed twice in sterile filtered seawater and then stored at −20 °C. Before nanoSIMS analysis, Nitrospinae or AOA were targeted by CARD-FISH (without embedding filters in agarose) and all cells were stained with DAPI after which regions of interest were marked on a laser microdissection microscope (6000 B, Leica). Single cell 15 N- and 13 C-assimilation from incubations with 15 N-ammonium and 13 C-bicarbonate, 15 N-nitrite and 13 C-bicarbonate, 15 N 13 C-urea or 15 N 13 C-cyanate were determined for Station 2, 14 m depth, using a nanoSIMS 50 L (CAMECA), as in Martinez-Pérez et al. 70 . Instrument precision was monitored daily on graphite planchet and regularly on caffeine standards. Due to the small size of most cells in the sample, they were pre-sputtered for only 10–20 s with a Cs + beam (~300 pA) before measurements. Measurements were carried out over a field size of 10 × 10 µm or 15 × 15 µm, with a dwelling time of 2 ms per pixel and 256 × 256 pixel resolution over 40 planes. The acquired data were analyzed using the Look@NanoSIMS software package 71 as in Martinez-Pérez et al. 70 . Ratios of 15 N/( 15 N + 14 N) and 13 C/( 13 C + 12 C) of Nitrospinae/AOA and non-Nitrospinae/AOA cells were used for calculation of growth rates only when the overall enrichment Poisson error across all planes of a given cell was <5%. The variability in 15 N/( 15 N + 14 N) ratios across measured Nitrospinae/AOA and non-Nitrospinae/AOA cells 31 was calculated in R v. 3.5.1 (ref. 66 ) following Svedén et al. 72 ( Supplementary Methods and Supplementary Fig. 14 ). Single cell growth rates from nanoSIMS data were calculated as in Martinez-Pérez et al. 70 , where cell 15 N- and 13 C-atom% excess was calculated by subtracting natural abundance 15 N/( 15 N + 14 N) and 13 C/( 13 C + 12 C) values (0.37% and 1.11%, respectively). These calculated values are considered conservative, as isotopic dilution of 15 N/( 15 N + 14 N) and 13 C/( 13 C + 12 C) ratios due to CARD-FISH was not taken into account 50 , 73 . AOA single cell assimilation and growth rates from these samples have previously been published 31 . 
The autotrophic growth rate calculations assume that all newly incorporated 13C, as detected from single cell 13C/(13C + 12C) ratios, is due to biomass increase. Biomass turnover due to recycling or replacement of cell components without net per cell growth, and the utilization of intracellular C-storage compounds, were assumed to be negligible. Nitrospinae autotrophic growth rates were measured in incubations with 15N-ammonium and 13C-bicarbonate (and an added 14N-nitrite pool), and in incubations with 15N-nitrite and 13C-bicarbonate. Nitrospinae 13C-growth rates did not differ significantly between these two incubations (two-sided, two-sample Wilcoxon test, W = 240, p = 0.1113, R v. 3.5.1) 66 and were therefore considered together. AOA 13C-based growth rates were obtained from 15N-ammonium and 13C-bicarbonate incubations only. For estimation of the per cell C-content, cell volumes of AOA and Nitrospinae in the GoM were calculated from nanoSIMS ROI areas. For Nitrospinae, cell shapes were assumed to resemble cylinders topped by two half spheres, whereas AOA cell shapes were assumed to resemble prolate spheroids 74. Nitrospinae and AOA cellular C-content was calculated according to Khachikyan et al. 44, and cellular N-content for both groups was calculated from C-content assuming Redfield stoichiometry (C:N = 6.625:1), as no environmental C:N ratios of AOA or Nitrospinae are available 16,44. This is in good agreement with C:N ratios published previously for cultured AOA 44 and with energy dispersive spectroscopy measurements performed on Nitrospina gracilis (C:N = 5.9 ± 1.2 (SD), n = 13) according to Khachikyan et al. 44. N-assimilation (and correspondingly C-assimilation from 13C-bicarbonate) rates were calculated by: $$\mathrm{N\ assimilation\ rate}\,\left[\mathrm{fg\text{-}N\ cell^{-1}\,d^{-1}}\right] = \frac{{}^{15}\mathrm{N\ at\%\ excess_{cell}}}{{}^{15}\mathrm{N\ at\%\ excess_{label}}} \times \mathrm{fg\text{-}N_{cell}} \times \frac{1}{t}$$ (3) $$\mathrm{N\ assimilation\ rate}\,\left[\mathrm{fmol\text{-}N\ cell^{-1}\,d^{-1}}\right] = \mathrm{N\ assimilation\ rate}\,\left[\mathrm{fg\text{-}N\ cell^{-1}\,d^{-1}}\right]/14$$ (4) where 15N at% excess_cell and 15N at% excess_label are the 15N-atom% of a given measured cell and of the 15N-enriched seawater during the incubation, each after subtraction of the natural abundance 15N-atom% (0.37%), fg-N_cell is the assumed N-content per cell, and t is the incubation time in days 75. In addition to the directly measured C-assimilation rates from 13C-bicarbonate fixation, C-assimilation rates were calculated from the measured N-assimilation rates, assuming that 6.625 mol of C is assimilated per mol of assimilated N. This was done because nanoSIMS measurements of cells filtered onto polycarbonate filters might lead to a possible dilution with 12C through edge effects between the cell boundary and the filter. This would affect the 13C-enrichment of all cells in the samples but likely has a larger impact on small cells like AOA. DNA and RNA analyses Samples for DNA and RNA analyses were collected from the same depths and CTD casts sampled for the assimilation and oxidation rate experiments, as described in Kitzinger et al. 31.
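Returning to the single-cell assimilation arithmetic, equations (3) and (4) reduce to a few lines of code; the atom% values and cellular N-content below are placeholders, and the natural-abundance subtraction uses the 0.37 at% figure quoted above.

```python
# Illustrative sketch of Eqs. (3)-(4); all input values are placeholders.
NAT_15N = 0.37  # natural abundance 15N-atom%

def n_assimilation_fg(at_pct_cell, at_pct_label, fg_n_cell, t_days):
    """Eq. (3): fg-N cell-1 d-1 from 15N atom% excess of cell and label."""
    excess_cell = at_pct_cell - NAT_15N
    excess_label = at_pct_label - NAT_15N
    return (excess_cell / excess_label) * fg_n_cell / t_days

rate_fg = n_assimilation_fg(at_pct_cell=2.5, at_pct_label=50.0,
                            fg_n_cell=5.0, t_days=1.0)
rate_fmol = rate_fg / 14.0   # Eq. (4): divide by 14 g mol-1 to convert fg-N to fmol-N
print(f"{rate_fg:.2f} fg-N cell-1 d-1 = {rate_fmol:.3f} fmol-N cell-1 d-1")
```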
For details on nucleic acid extraction please refer to the Supplementary Methods. 16S rRNA gene sequencing and analysis 16S rRNA gene diversity was assessed by amplicon sequencing, following an established pipeline 31,76,77, using barcoded primers F515 and R806 (ref. 78). Amplicons were sequenced on the Illumina MiSeq platform using a Reagent Kit v2 (500 cycles) and a Nano Flow Cell. Details on PCR conditions and bioinformatic analyses are described in the Supplementary Methods. Metagenome sequencing, assembly and binning of MAGs Metagenomic libraries were constructed and sequenced as in Kitzinger et al. 31 (see Supplementary Methods). Read sets were quality filtered using BBDuk (BBMap v. 36.32, Bushnell B., sourceforge.net/projects/bbmap/), assembled using metaSPAdes v. 3.10.1 (ref. 79) and binned with MetaBAT2 v. 2.12.1 (ref. 80) (see Supplementary Methods). Nitrospinae metagenome-assembled genomes (MAGs) were identified using GTDB-Tk v. 0.2.2 with database release 86, which is based on the Genome Taxonomy Database 81. To improve Nitrospinae MAG quality, the MAGs were iteratively re-assembled and re-binned (see Supplementary Methods). Metagenome sequencing statistics and information on dereplicated Nitrospinae MAGs are listed in Supplementary Tables 5 and 1, respectively. Metatranscriptome sequencing Metatranscriptomes from Station 2 were obtained as described in Kitzinger et al. 31. To enrich for mRNA, ribosomal RNA (rRNA) was depleted from total RNA using the Ribo-Zero™ rRNA Removal Kit for bacteria (Epicentre). mRNA-enriched RNA was converted to cDNA and prepared for sequencing using the ScriptSeq™ v2 RNA-Seq Library preparation kit (Epicentre) and sequenced on an Illumina MiSeq using a 600-cycle kit. Metatranscriptomes were separated into ribosomal and non-ribosomal partitions using SortMeRNA v. 2.1 (ref. 82). Metatranscriptome sequencing statistics are listed in Supplementary Table 6. Single-gene phylogenetic reconstruction Single-gene phylogenetic reconstruction was performed as described in Kitzinger et al. 31 and is detailed in the Supplementary Methods. Briefly, the genes of interest, namely the 16S rRNA gene and the genes encoding the nitrite oxidoreductase alpha subunit ( nxrA ), urease alpha subunit ( ureC ), cyanase ( cynS ) and bacterial RNA polymerase beta subunit ( rpoB ), were identified in metagenomic assemblies using their respective Rfam and Pfam HMM models. Alignments were compiled for the genes (16S rRNA) and proteins (NxrA, UreC, CynS, RpoB) of interest retrieved from the GoM metagenomes and public databases. These alignments were used for phylogenetic tree calculations with IQ-TREE v. 1.6.2 (ref. 83), using the best-fit model selected by ModelFinder 84 (model TNe + R3 for the 16S rRNA gene; LG + R4 for NxrA, UreC and CynS). The resulting trees were visualized using iTOL 85. Phylogenetic trees of GoM UreC and CynS have previously been published 31, but were recalculated using the data of the new metagenomic assembly and updated reference sequences. Accession numbers of the reference sequences included in all phylogenetic analyses are given in Supplementary Data 2. The abundances of genes of interest in metagenomic and metatranscriptomic datasets were assessed by identifying reads with BLASTX queries against the dataset assembled for phylogenetic analysis, followed by placement into the phylogenetic trees using the evolutionary placement algorithm 86.
Read mapping is reported as fragments per kilobase per million reads (FPKM) values. FPKM values were calculated as the number of read pairs for which one or both reads were placed at a specified location in the tree, divided first by the average gene length in the reference alignment (in kb) and then by the total number of metagenomic read pairs or rRNA-free metatranscriptomic read pairs (in millions). The percentage of ureC - and cynS -containing Nitrospinae was estimated for each metagenomic dataset as in Kitzinger et al. 31. The FPKM of urease or cyanase genes classified as Nitrospinae (FPKM_ureC/cynS) was compared with the FPKM of Nitrospinae rpoB (FPKM_rpoB) and SSU (16S rRNA) genes (FPKM_SSU), under the assumption that rpoB and SSU are universally present in all Nitrospinae as single-copy genes. The percentage of ureC -/ cynS -positive Nitrospinae was then calculated as FPKM_ureC/cynS/FPKM_rpoB and/or as FPKM_ureC/cynS/FPKM_SSU. Nitrospinae 16S rRNA gene distribution analysis Full-length 16S rRNA gene sequences obtained from GoM Nitrospinae Clade 2 MAGs were used to screen for the presence of closely related sequences in all publicly available Sequence Read Archive (SRA) datasets with IMNGS 34, using a minimum identity threshold of 99% and a minimum size of 200 nucleotides. Metadata for SRA datasets were obtained from NCBI, and latitude/longitude coordinates were plotted using the maps and ggplot2 libraries in R v. 3.5.1 (ref. 66). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All sequence data and Nitrospinae MAGs generated in this study are deposited in NCBI under BioProject number PRJNA397176. Metatranscriptomes are deposited under BioSample numbers SAMN07461123–SAMN07461125; 16S amplicon sequencing under SAMN07461114–SAMN07461122; metagenomes under SAMN10227777–SAMN10227781 and MAGs under SAMN12766710–SAMN12766716. Accession numbers of sequences used for tree calculations (16S rRNA gene, NxrA, UreC, CynS, and genome sequences) are given in Supplementary Data 2. CTD data, measured nutrient concentrations, process rates, and Nitrospinae and AOA relative abundances based on 16S rRNA gene amplicon sequencing and CARD-FISH counts are given in Supplementary Data 1. Code availability No custom code was used for the analyses of amplicon sequencing data or the phylogenetic analyses. Code used to automate the MAG binning is provided by the authors upon request. | Nitrogen cycling in shelf waters is crucial to reduce surplus nutrients, which rivers pour out into the ocean. Yet this process is poorly understood. Scientists from Bremen have now found answers to a longstanding mystery in a key process of the nitrogen cycle. The issue involves nitrification, the oxidation of ammonia via nitrite to nitrate, a key process in marine nitrogen cycling. In the sea, both steps of this process are balanced and most available nitrogen exists in the form of nitrate, the final product of nitrification. The organisms largely responsible for the first step of nitrification in the ocean—the ammonia-oxidizing archaea—were discovered around a decade ago, and it turns out that they are amongst the most abundant microorganisms on the planet. The second part of nitrification, the transformation of nitrite to nitrate, is carried out by nitrite-oxidizing bacteria, which mainly belong to the Nitrospinae phylum.
Yet, Nitrospinae are ten times less abundant than the ammonia-oxidizers, raising the question: is there an equally abundant, still undiscovered nitrite oxidizer in the ocean? Grow fast, die young Scientists at the Max Planck Institute for Marine Microbiology have now solved this mystery in cooperation with colleagues from the University of Vienna, the University of Southern Denmark and the Georgia Institute of Technology. "We show that there is no need to invoke yet undiscovered, abundant nitrite oxidizers to explain nitrification in the ocean. Surprisingly, we probably already know all the players," says Katharina Kitzinger, first author of the paper, published in the scientific journal Nature Communications in February. So far, scientists have mainly determined the number of microbes involved in marine nitrification; however, Katharina Kitzinger and her colleagues also examined the biomass of the microorganisms, as well as the growth rates and the activity of individual cells. These results revealed that the ten times higher abundance of ammonia-oxidizers is not due to differences in the size of the microorganisms or to the slow growth of Nitrospinae, as many scientists supposed until now. "On the contrary. Our results indicate that Nitrospinae are much more active and grow much faster than the ammonia-oxidizing Archaea. Thus, Nitrospinae are clearly more efficient than the Archaea," explains Katharina Kitzinger, and adds: "As such, one would expect the Nitrospinae to be significantly more abundant. As this is not the case, we assume that Nitrospinae have such a low abundance because they have a high mortality rate. This explains the balanced marine nitrification process in the ocean and makes the existence of further unknown, abundant nitrite oxidizers unlikely." Pictures of ammonia-oxidizing Archaea and nitrite-oxidizing Nitrospinae: The picture on the left shows the abundance of ammonia-oxidizing Archaea (green) and other microorganisms (blue). The picture on the right shows the abundance of nitrite-oxidizing Nitrospinae (green) and other microorganisms (blue). The differences in abundance and size are clearly visible. Credit: Max Planck Institute for Marine Microbiology/K. Kitzinger Nitrogen and food for friends At the same time, the researchers investigated which nitrogen compounds ammonia-oxidizing Archaea and Nitrospinae use for their cell growth. "While the Archaea almost exclusively grow using ammonium, the Nitrospinae seem to mainly use organic nitrogen, namely urea and cyanate, instead," says Katharina Kitzinger. "The utilization of organic nitrogen is likely key to the ecological success of Nitrospinae, as it allows them to avoid competition with their friends, the Archaea, on whom they depend for nitrite." In this way the two microorganisms help each other: the Archaea produce nitrite, which serves the Nitrospinae, while the Nitrospinae presumably release some ammonium after they take up organic nitrogen. In turn, this provides the energy source for the Archaea—a symbiotic win-win situation. The scientists acquired their samples in the Gulf of Mexico, where the process of nitrification is very important due to the high nutrient input from rivers like the Mississippi. "The microorganisms involved in nitrification and their relative abundances are similar worldwide," says Katharina Kitzinger. "Therefore, it is very likely that our results are also valid for the rest of the ocean." | 10.1038/s41467-020-14542-3 |
Nano | New technique traps light at graphene surface using only pulses of laser light | All-optical generation of surface plasmons in graphene, Nature Physics, DOI: 10.1038/nphys3545 Journal information: Nature Physics | http://dx.doi.org/10.1038/nphys3545 | https://phys.org/news/2015-11-technique-graphene-surface-pulses-laser.html | Abstract Surface plasmons in graphene offer a compelling route to many useful photonic technologies 1 , 2 , 3 . As a plasmonic material, graphene offers several intriguing properties, such as excellent electro-optic tunability 4 , crystalline stability, large optical nonlinearities 5 and extremely high electromagnetic field concentration 6 . As such, recent demonstrations of surface plasmon excitation in graphene using near-field scattering of infrared light 7 , 8 have received intense interest. Here we present an all-optical plasmon coupling scheme which takes advantage of the intrinsic nonlinear optical response of graphene. Free-space, visible light pulses are used to generate surface plasmons in a planar graphene sheet using difference frequency wave mixing to match both the wavevector and energy of the surface wave. By carefully controlling the phase matching conditions, we show that one can excite surface plasmons with a defined wavevector and direction across a large frequency range, with an estimated photon efficiency in our experiments approaching 10 −5 . Main Graphene has attracted significant interest recently as a unique optical material. In particular, it has been predicted and experimentally shown that graphene can support highly confined surface plasmons 1 , 9 , with electrically tunable dispersion 7 , 8 . Despite these promising discoveries, the burgeoning field of graphene plasmonics has some serious obstacles to overcome if it is to progress from the proof-of-principle stage. Problems arise due to the small wavelength of the surface plasmons, two orders of magnitude smaller than light of the same frequency. This has led to the development of specialized measurement techniques, most of which use infrared light and geometries with scattering resonances 10 , 11 , 12 or near-field sources 7 , 8 to excite graphene surface plasmons. However, the far-infrared regime, in which graphene plasmons are predicted to have long lifetimes, lacks developed sources and detectors compared to the visible regime. Alternative approaches, such as the manipulation of surface acoustic waves to couple to the graphene surface plasmons 13 , 14 , therefore hold promise. Particularly desirable is the potential to excite a plasmon eigenstate with a singular energy, momentum and direction, vital for many future applications, including plasmonic circuits. In this respect, very recent progress has been made, with the development of carefully designed nanoantennas which can locally excite and direct surface plasmons in graphene 11 . Here, the combination of infrared source frequency and nanoantenna dimensions determine the frequency, wavevector and direction of the surface plasmons generated. In this letter, we investigate a competing approach that embodies many of these desirable aspects of directivity without requiring careful nanofabrication of antennas. This all-optical approach can access a distinctly broad frequency range, even down to the far infrared. 
We coherently excite surface plasmons using two visible frequency free-space beams via difference frequency generation (DFG), an effect which we monitor through changes in reflectance, and can tune the frequency and wavevector of the surface plasmon through careful adjustment of incident light sources. This potential to excite and detect plasmons purely with free-space optics, and at frequencies different from that of the plasmons themselves, has the potential to significantly expand the technological possibilities for graphene plasmonics. The intrinsic nonlinear interactions of graphene with light are surprisingly large 5 , 15 , 16 , 17 , 18 . Moreover, large enhancements of nonlinear optical effects are predicted by the presence of highly confined plasmons in graphene 19 , 20 . It seems intuitive, then, to attempt the converse: to use the nonlinear interaction between optical fields to resonantly drive surface plasmons. Similar approaches have been demonstrated experimentally for thin metallic films 21 , 22 , and have been recently proposed for graphene, with various coupling schemes proposed for the difference frequency mixing of infrared light in a graphene film 23 and in graphene-clad waveguide structures 23 , 24 . Similar in concept to that described in ref. 23 , Fig. 1 shows our nonlinear coupling scheme illustrated on a dispersion diagram. By illuminating the graphene with two intense laser pulses with well-defined angles of incidence but different frequencies, labelled here f pump and f probe , one can phase match both the frequency and wavevector, k , of the surface plasmon. This wave mixing process is a second-order nonlinear effect, normally forbidden in centro-symmetric crystals 25 , but possible in graphene because of the distinctively non-local, spatial character of the interaction 20 . The inset in Fig. 1 shows the experimental arrangement used (see Methods ). The experiments are carried out in a non-collinear geometry, using two beams incident on the samples at angles θ pump and θ probe providing sufficient in-plane momentum to match to the surface plasmon, as illustrated in Fig. 1 . We measure the differential reflection of the probe beam, defined as Δ R / R = ( R − R 0 )/ R 0 , where R and R 0 are the reflections with and without the presence of the pump pulse, respectively. The polarization of both incoming beams is set in the plane of incidence (transverse magnetic polarized). To isolate the nonlinear reflection signal, we vary the temporal overlap of the two pulses. Figure 1: The nonlinear coupling scheme illustrated on a dispersion diagram. The DFG of the pump (green arrow) and probe (orange arrow) allows access to wavevectors outside of the light line (red line). This permits phase matching to the surface plasmon modes in graphene (blue line). The pink line illustrates a region that can be interrogated by altering the pump wavelength from 615 to 545 nm with the probe wavelength fixed at 615 nm. (Inset) The experimental arrangement used to excite surface plasmons on graphene. Full size image For optical excitation, one expects optical nonlinearity arising due to saturable absorption caused by Pauli blocking of interband transitions 26 . A typical measurement of the temporal dynamics recorded for this process ( λ pump = 547 nm, λ probe = 615 nm) is shown by the black curve in Fig. 2 . Note that we normalize the signal by the pump fluence, ϕ , to remove artefacts due to power variation 27 . 
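The phase-matching bookkeeping behind this scheme amounts to subtracting the photon frequencies and in-plane wavevectors of the two beams. The sketch below assumes both angles are measured from the surface normal and adopts a simple sign convention in which the two in-plane momenta subtract; the actual experimental geometry and sign conventions may differ. For the 547/615 nm pair quoted below, it evaluates to roughly 61 THz.

```python
# Illustrative sketch of the difference-frequency phase-matching bookkeeping.
# Angle/sign conventions here are assumptions, not the paper's exact geometry.
import numpy as np

C = 2.998e8  # speed of light, m s-1

def dfg_target(lam_pump_nm, lam_probe_nm, theta_pump_deg, theta_probe_deg):
    f_pump = C / (lam_pump_nm * 1e-9)
    f_probe = C / (lam_probe_nm * 1e-9)
    k_par = lambda lam_nm, th_deg: (2 * np.pi / (lam_nm * 1e-9)
                                    * np.sin(np.radians(th_deg)))
    q = k_par(lam_pump_nm, theta_pump_deg) - k_par(lam_probe_nm, theta_probe_deg)
    return (f_pump - f_probe) / 1e12, abs(q)    # difference frequency (THz), |q| (rad m-1)

df_THz, q = dfg_target(547, 615, 15, 125)
print(f"difference frequency ~ {df_THz:.1f} THz, |q| ~ {q/1e6:.1f} rad um-1")
```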
The asymmetric line shape of the signal is due to the relaxation dynamics of the excited electrons cooling 27 , 28 , with temporal broadening caused by the spatial overlap of the non-collinear beam spots. Figure 2: Normalized differential reflection as a function of temporal overlap for the geometry θ pump = 15°, θ probe = 125°. At zero delay time, both the pump and probe pulses arrive simultaneously, leading to a nonlinear change in the probe reflection. Two curves are shown: The black curve labelled ‘non-resonant’ shows a typical time-asymmetric measurement when the difference frequency produced by the pump and probe (61.2 THz) does not coincide with a surface plasmon energy state. The red curve shows an additional fast symmetric contribution to the recorded reflection signal when the difference frequency matches the energy of a graphene surface plasmon (23.8 THz). Full size image For non-degenerate pump and probe beams, in addition to (incoherent) saturable absorption effects, one can expect (coherent) wave mixing signals. This coherent contribution to the probe reflection is expected to be significantly enhanced when the difference frequency field generated by the pump and probe matches that of the graphene surface plasmons. This is analogous to that of a stimulated Raman process, corresponding to a transfer of energy from pump to probe pulses 25 via the generation of surface plasmons. An example of the recorded temporal dynamics under such a resonant condition is presented in Fig. 2 . Comparing the two curves in this figure, we see that the ‘non-resonant’ signal (that is, when one is not phase matching to plasmon excitation) gives rise to an asymmetric lineshape representative of carrier cooling dynamics. Under ‘resonant’ conditions (that is, when phase matching conditions are satisfied) we observe a fast additional contribution to the signal, giving rise to a more symmetric lineshape, as one would expect for a coherent signal. For certain experimental geometries and excitation fluences, signal enhancements of up to ×4 are observed (see Supplementary Fig. 1 ). It should be noted that, depending on efficiencies, it may be possible to isolate the coherent signal using a heterodyne detection scheme 29 , which could also allow detection of a plasmon in a different spatial position from where generated. To observe the presence of the coherent signal, we vary the difference frequency from 0 to 60 THz to isolate any resonant, coherent conditions (see Methods ). In this way, it is possible to interrogate a section of the surface plasmon dispersion, for example the region illustrated by the pink line in Fig. 1 . By altering the experimental geometry, we investigate here three different regions of the dispersion diagram corresponding to ( θ pump = 55°, θ probe = 45°), ( θ pump = 50°, θ probe = 70°) and ( θ pump = 15°, θ probe = 125°). Figure 3 shows the results of these three measurement geometries, superimposed on the surface plasmon dispersion (black line). The dispersion was calculated according to the model outlined in ref. 30 , with the SiO 2 substrate phonon frequencies as given and a Fermi energy of E f = 0.5 eV . This Fermi energy is larger than the measured intrinsic doping of our graphene samples (see Methods for sample details), which we attribute to a significantly raised electron temperature expected under illumination by ultrafast pulses 31 , 32 (see Supplementary Information , Supplementary Fig. 2 ). Hybridization with the substrate phonons leads to four branches 30 , 33 . 
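For orientation, the bare long-wavelength Dirac-plasmon dispersion can be evaluated directly from the Drude conductivity, ω(q) = [e²E_F q/(πħ²ε₀(ε₁ + ε₂))]^1/2. The sketch below deliberately omits the substrate-phonon hybridization responsible for the four branches of the full model (ref. 30), and the permittivities it uses are assumed textbook values, so it should be read as a rough guide only.

```python
# Illustrative sketch: bare (phonon-free) long-wavelength graphene plasmon
# dispersion. eps_top/eps_bottom and E_F = 0.5 eV are assumed values; the
# full model of ref. 30 additionally hybridizes with SiO2 substrate phonons.
import numpy as np

E, HBAR, EPS0 = 1.602e-19, 1.055e-34, 8.854e-12

def plasmon_freq_THz(q, E_F_eV=0.5, eps_top=1.0, eps_bottom=3.9):
    E_F = E_F_eV * E
    omega = np.sqrt(E**2 * E_F * q /
                    (np.pi * HBAR**2 * EPS0 * (eps_top + eps_bottom)))
    return omega / (2 * np.pi * 1e12)

for q in (1e6, 5e6, 2e7):   # in-plane wavevectors, rad m-1
    print(f"q = {q:.0e} rad/m -> f_sp ~ {plasmon_freq_THz(q):.1f} THz")
```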
The overlaid colour plots are placed on the diagram so that the maximum differential reflection signal achieved in each delay-scan corresponds to the difference frequency and wavevector of the data set. Figure 3: Plots of normalized differential reflection for three different experimental geometries, superimposed on the graphene surface plasmon–phonon dispersion. The grey shading around the plasmon dispersion curves (black lines) indicates the expected spectral broadening of the signals ( ∼ 7.5 THz) due to the finite bandwidth of ∼ 100 fs-pulses. The set-ups used are depicted at the top of the figure with the angles used being θ pump = 55°, θ probe = 45° ( a ), θ pump = 50°, θ probe = 70° ( b ) and θ pump = 15°, θ probe = 125° ( c ). The intraband transition threshold and light line (dotted lines) are labelled on the diagram. The colourbar for a , b is given at the top left and the colourbar for c is given at the bottom right. Full size image Near the regions defined by the surface plasmon dispersion in graphene, we observe clear enhancement in the differential reflection. The assignment of the spectral features to surface plasmon excitation is further supported by the polarization dependence of the signal (see Supplementary Fig. 3 ). The observation of these resonant features over the incoherent background is also strongly dependent on the magnitudes of both pump and probe intensities (see Supplementary Figs 1 and 2 ). For larger difference frequencies, up to 150 THz, we do not observe any further resonance features in our spectra (see Supplementary Fig. 4 ). The lower branch of the plasmon dispersion relation gives rise to the largest mixing signals for the low-wavevector phase matching (set-ups a, b in Fig. 3 ), while the upper branches give rise to the largest signals for the high-wavevector (set-up c in Fig. 3 ) region. Although we observe clear resonance features in all three of these experimental geometries, we also observe a change in sign of the signal between the low- ( Fig. 3a, b ) and high-wavevector ( Fig. 3c ) regions. The absolute differential reflectivity signal size also increases with increasing wavevector. To understand the origin of these coupling behaviours, we have developed a simple theoretical model that captures the salient features of this nonlinear reflection and generation of plasmons. We briefly summarize the model in the Methods (full details are presented in the Supplementary Information ). In Fig. 4 , we plot the modelled differential probe reflectance, normalized by fluence, for the simplified case of continuous plane-wave pump and probe beams. While this simple model ignores the non-equilibrium nature of the excitation, as observed in experiment, we show below that it is sufficient to describe some of the salient features of our results. Similar to Fig. 3 , the differential reflectance is plotted versus difference frequency and in-plane wavevector. It can be seen that the simulation qualitatively produces the main features of Fig. 3 . In particular, the change in the sign of differential reflectance at the Brewster angle is clearly observed, as is the enhancement of the signal when the difference frequency and wavevector align with the plasmon dispersion relation. Figure 4: Numerical solution for the normalized differential probe reflectance, calculated using the model outlined in the Supplementary Information . 
The white dotted lines a – c indicate the region of the dispersion relation probed by the experimental geometries shown in the equivalent parts of Fig. 3 . Full size image The model also reproduces some of the main features arising from different coupling efficiencies to different bands ( Fig. 4 ). Generally, the highest coupling efficiency occurs for the dispersion regions which are most ‘plasmon-like’ in origin (see also Supplementary Fig. 7 ). This is most obvious comparing the data in Figs 3c and 4c , where the coupling to the upper bands is much stronger than the lowest band in both model and experiment. For lower-wavevector cases, the coupling to the highest band is overestimated in the model compared to the experiment. This could possibly be caused by frequency-dependent losses in the graphene sheet unaccounted for in our simple model. The model also reproduces the increasing absolute signal strength with increasing wavevector observed in experiment, which is a consequence of both larger changes in the reflection coefficient for a corresponding change in absorption for higher angles, and due to spatial dispersion 20 in the signal. Indeed, it can be shown that the magnitude of enhancement is proportional to the square of the plasmon quality factor, Q 2 (see Supplementary Information ), in agreement with predictions from ref. 23 . In addition to the surface plasmon resonance conditions, for the highest-wavevector region in Fig. 3c there is an additional resonant enhancement found in experiment at low frequencies (<3 THz) that is not reproduced in our model ( Fig. 4c ). The position of this peak lies within the expected region of intraband transitions in graphene, indicated by the dotted line in Fig. 3 . This feature is also largely independent of polarization, unlike the enhancements we attribute to surface plasmon coupling (see Supplementary Fig. 3 ). In principle, the reflection/transmission expressions obtained (see Methods , equation (1)) can be inverted to allow an experimental determination of the nonlinear conductivity σ (2) , given transmission or reflection data. This is difficult in the present set-up, in part given the broad bandwidth of the pulses, uncertainty over some system parameters, and difficulty of investigating a large number of angles to quantify possible wavevector and frequency dependence of σ (2) . However, as an estimate, we take the simplest possible model, in which the effective nonlinear susceptibility χ (2) is independent of frequency and wavevector. This corresponds to a nonlinear conductivity function obeying σ (2) ( ω ) = i | σ (2) ( ω probe )|( ω / ω probe ), where the value at the (fixed) probe frequency represents a single fitting parameter. We find that a value of | σ (2) ( ω probe )| ≍ 2.4 × 10 −12 A m V −2 produces the same peak signal as observed in Fig. 3b . It should be emphasized that this represents a rather conservative estimate of σ (2) ( ω ). In particular, the mobility of μ ≍ 2,000 cm 2 V −1 s −1 corresponds to a plasmon linewidth of γ = 2π × 1.6 THz that is narrower than the measurement bandwidth, indicating that only a fraction of the pulse can efficiently excite plasmons. Reducing the measurement bandwidth could therefore give rise to greater coupling to the surface plasmons, while also reducing the effects of non-equilibrium carriers on the measurements. 
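As a consistency check on the quoted linewidth, the Drude momentum-relaxation time implied by the stated mobility and the hot-carrier Fermi energy can be evaluated directly; the Fermi velocity v_F = 10⁶ m s⁻¹ is an assumed standard value for graphene.

```python
# Consistency check: Drude relaxation time and plasmon linewidth implied by
# mu = 2,000 cm2 V-1 s-1 and E_F = 0.5 eV (v_F is an assumed textbook value).
import numpy as np

E = 1.602e-19            # elementary charge, C
V_F = 1.0e6              # Fermi velocity, m s-1 (assumed)
mu = 2000e-4             # mobility, m2 V-1 s-1
E_F = 0.5 * E            # Fermi energy, J

tau = mu * E_F / (E * V_F**2)             # momentum relaxation time for Dirac carriers
gamma_THz = 1 / (tau * 2 * np.pi * 1e12)  # linewidth expressed as 2*pi x (THz)
print(f"tau ~ {tau*1e15:.0f} fs -> gamma ~ 2*pi x {gamma_THz:.1f} THz")
# reproduces the gamma = 2*pi x 1.6 THz quoted above
```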
While a comparison to a bulk nonlinear crystal is not directly meaningful, it is nonetheless interesting to note that a bulk nonlinear crystal with the thickness t ≍ 0.3 nm of a graphene layer would require a nonlinear susceptibility of χ (2) ∼ | σ (2) ( ω probe )|/( ɛ 0 ω probe t ) ∼ 3 × 10 −7 m V −1 to produce the equivalent in-plane nonlinear currents. This value is approximately three orders of magnitude larger than in GaAs. Finally, from the inferred value of σ (2) and the input beam parameters, our model enables us to estimate the conversion efficiency η of pump photons to plasmons for our experimental pulse intensities (see Supplementary Information ). We find a value of η ≍ 6 × 10 −6 , while noting that the actual conversion could be significantly higher with narrow pulses, again as the estimated value of σ (2) does not account for the large pulse bandwidth. We note that this experimentally obtained value of η is of the same order as predicted in ref. 23 , once adjusted for our experimental parameters. To conclude, by carefully manipulating the phase matching conditions, we show that one can generate surface plasmons with a defined wavevector and an efficiency approaching 10 −5 for the pulse intensities used. This efficiency by no means represents a fundamental limit, and we believe that it could in principle be pushed towards a 10 −2 level with future adjustments, such as increasing the surface plasmon Q factor from ∼ 5 to ∼ 30 with lattice-matched hBN substrates 34 , equalizing the intensities of the pump and probe beams (see Supplementary Fig. 1 ), or the use of narrower bandwidth pulses. Moreover, in principle, our approach could be extended to higher or lower frequencies, regions that are generally hard to access using present approaches 2 , 35 . Methods Experimental arrangement. An identical pair of optical parametric amplifiers (OPAs), pumped by an amplified femtosecond laser system, generate the 100 fs-pulses at a repetition rate of 1 kHz. The wavelengths of the two OPAs are selected independently, and the beams are directed to the sample. The incident beams are weakly focused on the sample using 30 cm focal length lenses, giving rise to a very small uncertainty in angle ∼ 0.017 rad, and a similarly negligible uncertainty for the in-plane wavevectors. Sets of half-wave plates and polarizers determine both the average power and polarization, with the polarization set such that the electric vector of the light is in the plane of incidence (transverse magnetic polarized). The pump pulse fluence, ϕ , used is typically in the range ϕ ∼ 0.1 − 0.2 mJ cm −2 , with a pump spot size on the sample of ∼ 300 μm radius. This pump fluence is an order of magnitude less than the photo-modification threshold for graphene 36 , and the probe fluence is typically two orders of magnitude smaller still. To obtain difference frequencies from 0 to 60 THz, the pump wavelength is varied from 615 to 545 nm, with the probe wavelength set at 615 nm. We record the differential reflection of the probe beam, defined as Δ R / R = ( R − R 0 )/ R 0 , where R and R 0 are the reflections with and without the presence of the pump pulse, respectively. This differential reflection is recorded using a set of photo-balance diodes. To isolate the nonlinear reflection signal, we vary the temporal overlap of the two pulses using a motorized delay stage. Note that there is no appreciable signal from the quartz substrate (see Supplementary Fig. 8 ). Sample preparation. 
Samples for our experiments are fabricated from commercially grown CVD graphene on copper foil (Graphene Supermarket). Transfer to quartz substrates was performed in house by means of a standard metal etching and float technique, using ammonium persulphate to etch the copper and PMMA as a support structure. Combined resistance and Raman spectroscopy 37 give an estimated sample mobility of around 2,000 cm 2 V −1 s −1 and a natural Fermi energy of ∼ 300 meV. Raman imaging indicates that the graphene is nominally single layer, with ∼ 80% coverage of the substrate. Theoretical model. In general, equations for the electromagnetic boundary conditions at the air–graphene–substrate interface relate the wavevector- and frequency-dependent reflection and transmission coefficients r ( k , ω ), t ( k , ω ) to the graphene current density J ( k , ω ). The current density, on the other hand, can be written in terms of the electric field using conductivity functions, which allows the equations to be solved in terms of fields alone. Nonlinear contributions imply that J ( k , ω ) depends on fields at other wavevectors and frequencies, which couple the various reflection and transmission coefficients together. For a second-order conductivity σ (2) , we find that the probe transmission depends on the pump via equation (1), with an analogous equation for the pump transmission (expressions for r are more involved but are directly related to t ; see Supplementary Methods). Here t probe (L) is the linear transmission coefficient and A is a function of linear optical properties and beam angles with, for notational simplicity, dependencies on k , ω being implicit. To model the differential reflectivity of the probe as shown in Fig. 4, the probe wavelength and pump angles are fixed at 615 nm and 50°, respectively, and the pump and probe intensities are chosen to be 10 and 0.1 W μm −2 , to closely correspond to the configuration in Fig. 3b. | Pioneering new research by the University of Exeter could pave the way for miniaturised optical circuits and increased internet speeds, by helping accelerate the 'graphene revolution'. Physicists from the University of Exeter, in collaboration with the ICFO Institute in Barcelona, have used a ground-breaking new technique to trap light at the surface of the wonder material graphene using only pulses of laser light. Crucially, the team of scientists have also been able to steer this trapped light across the surface of the graphene, without the need for any nanoscale devices. This dual breakthrough opens up a host of opportunities for advances in pivotal electronic products, such as sensors and miniaturised integrated circuits. The new research features in the latest online edition of the respected scientific journal, Nature Physics. Dr Tom Constant, lead author on the paper and part of Exeter's Physics and Astronomy Department, said: "This new research has the potential to give us invaluable insight into the wonder material and how it interacts with light. A more immediate commercial application could be a simple device that could easily scan a piece of graphene and tell you some key properties like conductivity, resistance and purity." Dr Constant and his colleagues used pulses of light to trap the light on the surface of commercially available graphene. When trapped, the light converts into a quasi-particle called a 'surface plasmon', a mixture of both light and the graphene's electrons.
Additionally, the team have demonstrated the first example of being able to steer the plasmons around the surface of the graphene, without the need to manufacture complicated nanoscale systems. The ability both to trap light at a surface and to direct it easily opens up new opportunities for a number of electronic-based devices, as well as helping to bridge the gap between electronics and light. Dr Constant said: "Computers that can use light as part of their infrastructure have the potential to show significant improvement. Any advance that reveals more about light's interaction with graphene-based electronics will surely benefit the computers or smartphones of the future." | 10.1038/nphys3545 |
Biology | New study finds two amino acids are the Marie Kondo of molecular liquid phase separation | Nature Communications, DOI: 10.1038/s41467-020-18224-y Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-18224-y | https://phys.org/news/2020-09-amino-acids-marie-kondo-molecular.html | Abstract Liquid phase separation into two or more coexisting phases has emerged as a new paradigm for understanding subcellular organization, prebiotic life, and the origins of disease. The design principles underlying biomolecular phase separation have the potential to drive the development of novel liquid-based organelles and therapeutics; however, an understanding of how individual molecules contribute to emergent material properties, and approaches to directly manipulate phase dynamics, are lacking. Here, using microrheology, we demonstrate that droplets of poly-arginine coassembled with mono/polynucleotides have approximately 100-fold greater viscosity than comparable lysine droplets, both of which can be further tuned by polymer length. We find that these amino acid-level differences can drive the formation of coexisting immiscible phases with tunable formation kinetics and can be further exploited to trigger the controlled release of droplet components. Together, this work provides a novel mechanism for leveraging sequence-level components in order to regulate droplet dynamics and multiphase coexistence. Introduction Liquid–liquid phase separation of biomolecules has emerged as a ubiquitous driving force underlying subcellular organization, from modern cells to the protocellular origins of life 1,2,3. Coacervation of proteins and nucleic acids into liquid droplets, increasingly referred to as "biomolecular condensates" 1, has been implicated in the assembly of membraneless organelles 4,5,6, in the coordination of genetic elements 7,8,9 and cytoskeletal regulatory molecules 10,11, and in the etiology of diseases from cancer to neurodegeneration 2,12,13,14. The collective emergent material properties of condensates and their regulation have been underscored as essential features of condensate function and/or dysfunction 2,14,15,16,17. Deciphering the mechanism underlying the assembly of individual biomolecules into condensates with unique material properties, and the interaction or coexistence between distinct phases, impacts our understanding of current and past cellular life and human health, and additionally drives a new frontier toward the engineering of organelles with controllable and even novel functions 18,19,20,21. Current understanding of the biomolecular driving forces underlying liquid phase separation has been successfully informed by classic theories of model polymer coacervates 22,23, including the role of length-dependent multivalent interactions 10,11. The interplay of electrostatic, hydrophobic, and cation–pi interactions 22,23,24,25,26,27 has been further demonstrated to contribute to the condensate interactome. Specific amino acids, such as arginine (R) and lysine (K), have been identified as key residues in driving phase separation in vitro and in vivo. Arginine residues are essential features of arginine/glycine (R/G)-rich domains 28 that drive phase separation of the proteins DDX4 29, LAF-1 30, FUS 31,32, FMRP 33, Lsm4 34, and PGL proteins 35,36, and arginine methylation can regulate phase separation of these domains 29,33,37.
Similarly, lysine residues and their acetylation have been shown to be crucial for the liquid phase separation of the proteins tau 38 and DDX3X 39. Interestingly, despite the conserved charge between residues, arginine-to-lysine mutations in R/G-rich domains result in decreased phase separation propensity, with higher critical concentrations required for phase separation 29,33. Additionally, the properties of arginine-rich and lysine-rich condensates exhibit significant differences. Recent fluorescence recovery after photobleaching (FRAP) studies indicate that proline–arginine dipeptide repeats implicated in ALS give rise to condensates that have less internal mobility than comparable proline–lysine dipeptide repeats 40, with similar observations of reduced fluidity made for model arginine–glycine vs lysine–glycine peptide sequences 41 and lysine- and arginine-rich peptides 42. These recent works have additionally shown that substituting poly-RNA bases (purine vs pyrimidine) has distinct consequences for the apparent fluidity of the respective arginine- vs lysine-rich peptides. Direct rheological measurements comparing arginine and lysine homopolymer condensates would provide fundamental insight into how these residues contribute to network properties, such as viscosity. Material properties such as viscosity and surface tension dictate many essential characteristics of condensates, including internal diffusion rates, molecular sequestration, and the hierarchical organization of coexisting phases 43,44. The coexistence of multiple phases has recently been demonstrated to play important roles in cellular function, including the organization of the nucleolus 45, FMRP/CAPRIN1 droplets 46, and P granule proteins 36. The sequence-driven rules underlying the multiphase droplet formation of charged biopolymers are just beginning to be unraveled. Previous work has shown that the miscibility of distinct phases of hydrophobic elastin-like polypeptides can be regulated by sequence changes that alter the critical temperature of phase separation 47. More recent works have shown that coexisting phases of charged polyelectrolytes can, when sufficiently different, form multiple phases 48,49. Where there is a difference in critical salt concentration, which is indicative of a different density and water content between complex coacervates, multiple phases will form. Different homopolymeric RNAs, owing to differences in cation–pi interaction strength between arginine and nucleobases, have also been shown to be sufficient to create multiphase droplets 40. These coexisting phases of charged polyelectrolytes can influence solute partitioning 48,49, due in part to unique microenvironments brought about by relative density differences 49. Despite these advances, it is not yet understood what differences at the amino acid residue level are sufficient to drive the formation of coexisting phases. In addition, recent work has focused on equilibrium conditions, while the kinetic processes and directed manipulation of multiphase dynamics remain largely unexplored. Here we set out to ask whether differences between lysine and arginine condensates could be exploited to regulate multiphase dynamics and stability. We find that the minimal nucleobase unit required for condensate formation differs between poly-L-lysine (polyK) and poly-L-arginine (polyR) sequences.
Using microrheology to precisely quantify condensate viscosity, we show that arginine–nucleotide droplets have approximately 100-fold greater viscosity than comparable lysine–nucleotide condensates, which is a significantly larger difference than that observed when increasing polymer length (between N = 10 and N = 100). We find that lysine and arginine polymers are not miscible within condensates, and arginine antagonistically competes for anionic complexation. We demonstrate that the differences between droplets can be exploited to rapidly invert lysine-rich droplets, inducing release of lysine polymers into the surrounding environment. Furthermore, by altering the stoichiometry and length of polymers and nucleotides, the rate of inversion and polymer release can be tuned, allowing coexisting phases to persist over varying timescales. This work utilizes the distinct phase behaviors of lysine and arginine residues, resulting from underlying differences in interaction, to offer a fundamental mechanism for the control and manipulation of droplet dynamics and multiphase coexistence. Results Polymer length tunes viscosity of polyK liquid condensates In order to extract fundamental rules linking condensate molecular components to emergent material properties, we first examine the contribution of polymer length, using three different fixed lengths of polyK ( N = 10, 50, 100) combined with uridine phosphates and poly-uridine (pU) ( N = 10, 50). At a total concentration of 6 mM per monomer, all lengths of polyK form liquid droplets capable of rapid droplet fusion in complex with charge-matched quantities of pU and uridine-5′-triphosphate trisodium salt (UTP; Fig. 1a, b and Supplementary Fig. 1), consistent with the coacervation of oppositely charged polymers (reviewed here 50,51) and the more recently observed phase separation of mixed-length polyK with mononucleoside triphosphates 52, respectively. We find that polyK cannot, however, form droplets with uridine-5′-diphosphate disodium salt (UDP) or uridine-5′-monophosphate (UMP) under the conditions tested (Supplementary Fig. 1). Fig. 1: Viscosity of poly-L-lysine coacervates is controlled by polymer length. a Brightfield image of polyK100/UTP condensates (10 mM Tris, pH 7.4). polyK concentration 6 mM per monomer and uridine triphosphate (UTP) 1.5 mM per monomer. b Widefield fluorescence image of polyK100/UTP condensate fusion (partitioned free Atto488 dye incorporated for enhanced visualization). c Confocal fluorescence image of a polyK100/UTP droplet with 500 nm beads embedded (Red FluoSpheres, Invitrogen). Inset, representative 2D bead track. Scale bar 0.1 μm. d Mean-squared displacement (MSD) vs lag time for individual 500 nm beads in polyK100/UTP droplets. Inset, distribution of bead displacements at lag times = 0.5 s (red), 5 s (green), 10 s (blue). e MSD data for polyK10 (blue), polyK50 (red), and polyK100 (yellow) with UTP. Inset, viscosity as a function of polyK length. f Viscosity vs polyK length for polymers with UTP (blue), pU10 (green), and pU50 (yellow). Full size image To quantify the viscoelasticity of the droplets, we utilized microrheology, a technique based on tracking the motion of fluorescent tracer beads embedded within condensates (Fig.
1c ) in order to obtain the mean-squared displacement (MSD), $$\mathrm{MSD}(\tau) = \left\langle \left( \mathbf{r}(t + \tau) - \mathbf{r}(t) \right)^2 \right\rangle.$$ (1) A concentration of 6 mM per monomer of polyK and a charge-matched polyanion concentration were used for all rheological measurements. Microbeads embedded within polyK condensates display Brownian motion with Gaussian displacement distributions, and fitting the MSD over time gives a diffusive exponent equal to 1, demonstrating that polyK condensates are purely viscous fluids with no elastic component (Fig. 1c, d). As polymer length increases, bead motion slows and there is a downward shift of the MSD. Using $$\mathrm{MSD} = 4Dt$$ (2) and the Stokes–Einstein relation $$D = \frac{k_B T}{6\pi \eta R},$$ (3) where D is the diffusion coefficient, T the temperature, η the viscosity, and R the bead radius, we determine the viscosity of polyK10-UTP droplets to be 0.1 Pa.s (similar viscosity to maple syrup). Droplet viscosity increases with increasing polyK length, with η = 0.2 and 0.6 Pa.s for polyK50-UTP and polyK100-UTP, respectively (Fig. 1e and Table 1). polyK droplet viscosity increases further when complexed with pU10 and pU50, with the highest viscosity values increasing to approximately 20 Pa.s for the longest polymer complex, polyK100-pU50 (Fig. 1f and Table 1). Viscosity scales with polyK polymer length, N, for both the pU10 and pU50 condensates (Supplementary Fig. 2), suggesting an unentangled polymer solution according to the Rouse model 53. Interestingly, the viscosity of polyK-UTP condensates appears to show a weaker dependence on polymer length than that of pU10 or pU50 condensates, suggesting distinct modes of interaction for mononucleotides and polynucleotides (Supplementary Fig. 2). Table 1 Coacervate viscosity. Full size table Arginine and lysine polymers exhibit distinct phase behavior Recent work in the field of biological phase separation has highlighted important roles for lysine 38,54 and arginine 29,55,56,57,58 residues. Moreover, despite the conserved charge between residues, arginine-to-lysine mutations in R/G-rich domains result in decreased phase separation propensity, with higher critical concentrations required for droplet formation 29,33. We therefore sought to quantify the differences in assembly propensity and material properties of arginine/lysine homopolymer condensates. We find that, under identical conditions, arginine polymers display different propensities for phase separation compared with lysine analogs. Whereas all polyK lengths tested form droplets with UTP, only polyR10-UTP and polyR50-UTP form droplets, while polyR100-UTP assembles into amorphous aggregates (Fig. 2a and Supplementary Fig. 3). In addition, while polyK polymers are unable to coacervate with UDP, all polyR lengths tested do form droplets with UDP (Fig. 2a). To determine whether these differences arise from differences in relative interaction strengths, we constructed a phase diagram as a function of NaCl concentration for polyR50-UTP and polyK50-UTP. We find a significant shift in droplet stability as a function of increasing NaCl concentration, with polyR droplets persisting at higher salt at a given polymer/UTP concentration (Fig. 2b). To further examine differences in interaction strength between these two polymers and uridine, we used fluorescence correlation spectroscopy (FCS) to measure binding affinity.
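The microrheology analysis above reduces to two steps: extract D from the slope of the MSD (equation (2)) and invert the Stokes–Einstein relation (equation (3)). In the sketch below the MSD values are synthetic, chosen so that the recovered viscosity lands near the ~0.1 Pa.s reported for polyK10-UTP; it illustrates the arithmetic rather than reanalyzing the data.

```python
# Illustrative sketch: fit MSD = 4*D*tau for 2D tracking, then invert
# Stokes-Einstein, eta = kB*T/(6*pi*D*R). MSD values are synthetic;
# bead radius R = 250 nm corresponds to the 500 nm diameter spheres used.
import numpy as np

KB, T, R = 1.381e-23, 298.0, 250e-9      # J K-1, K, m

lag_s = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
msd_m2 = 4 * 8.7e-15 * lag_s             # synthetic, purely diffusive MSD

D = np.polyfit(lag_s, msd_m2, 1)[0] / 4.0   # slope/4 for 2D Brownian motion
eta = KB * T / (6 * np.pi * D * R)
print(f"D = {D:.2e} m2 s-1 -> eta = {eta:.2f} Pa.s")   # ~0.1 Pa.s
```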
Using pU10-Alexa488 (50 nM) and increasing amounts of polyK10, we determine a binding affinity of 6.5 μM for polyK/pU. Interestingly, extracting a dissociation constant for polyR10 was not possible, as binding was concomitant with phase separation, even at sub-μM concentrations (Supplementary Fig. 4). In contrast, no phase separation was observed for polyK up to the highest concentration tested (1 mM). Thus arginine and lysine display inherent differences in binding strength with identical partners (Fig. 2b), as well as unique modes of interaction with distinct partners (Fig. 2a). Fig. 2: Differences in assembly propensity of polyR and polyK droplets. a DIC images showing (i) polyR10-UDP, polyR50-UDP, and polyR100-UDP (10 mM Tris, pH 7.4), with insets displaying polyK under the same conditions, and (ii) polyR10-UTP, polyR50-UTP, and polyR100-UTP (10 mM Tris, pH 7.4). Concentration polyK/polyR 6 mM per monomer, UTP 1.5 mM, UDP 2 mM. Scale bar 20 μm. b Phase diagram for polyK50 (green) and polyR50 (purple) (6 mM per monomer) with varying NaCl and UTP concentrations. Green circles denote conditions under which polyK50-UTP droplet formation is observable. Purple circles denote conditions where polyR50-UTP droplet formation is observable. Full size image polyR droplets are over 100× more viscous than polyK We next sought to determine how differences in droplet stability and interaction strength influence the relative material properties of polyK/polyR droplets. FRAP measurements reveal a dramatic decrease in the recovery of pU10 within polyR droplets in comparison to polyK (Fig. 3a). Upon bleaching a spot (radius r = 1.5 μm) in the center of droplets containing 1% Alexa 488-labeled pU10 RNA, we find that polyK10 droplets recover to approximately 80% over the course of approximately 10 s, whereas polyR10 droplets recover to this value on the order of 1000 s (Fig. 3b). Using $$D_{\mathrm{app}} = \frac{r^2}{t},$$ (4) where t is the recovery time, we find an apparent diffusion coefficient of 2.18 × 10−13 m2 s−1 for polyK compared to 2.9 × 10−15 m2 s−1 for polyR, indicating an approximately 100-fold difference in mobility. To precisely quantify changes in viscosity, we next performed microrheology experiments. We find that polyR droplets are viscous fluids with viscosities ranging from 36 Pa.s for polyR10/UTP to >200 Pa.s for polyR complexed with pU50, translating to approximately 30×–300× higher viscosities (consistency of ketchup) than comparable polyK constructs (consistency of maple syrup) (Fig. 3c and Table 1). We find that the greatest relative increase in viscosity is seen for the UTP conditions. We note that viscosity differences between sequences of equal length are significantly greater than the relative differences between polymer lengths of a single residue. Interestingly, the viscosities of pure polyK and polyR solutions in the absence of nucleotide-induced phase separation are equivalent up to the highest concentration tested (see Supplementary Table 1 for details). Together, these results highlight a role for distinct modes of nucleotide complexation vs homotypic residue–residue interactions in contributing to the significant differences in viscosity between polyR and polyK droplets. Fig. 3: Differences in emergent properties of polyR and polyK droplets. a Confocal fluorescence images of FRAP recovery for polyK10/pU10-A488 (upper) and polyR10/pU10-A488 (lower), illustrating the increased fluidity of polyK vs polyR. b FRAP recovery within droplets of polyK/pU10-A488 (green) and polyR/pU10-A488 (purple).
c MSD vs lag time for polyK and polyR of length 10 (blue, o), 50 (red, o) and 100 (yellow, o) with UTP, illustrating the increased viscosity of polyR compared to polyK. Inset: Brownian motion of a 200 nm bead in polyK50/UTP (green) and polyR50/UTP (purple). Full size image R/K differences sufficient to induce multiphase coexistence Our results thus far indicate that differences in nucleotide interaction strength and interaction modes between polyK and polyR dramatically influence droplet viscosity. We therefore hypothesized that this dramatic difference between polyK and polyR nucleotide interactions could be sufficient to drive the formation of multiphase condensates. To investigate this hypothesis, we combined 50:50 mixtures of polyK (6 mM monomer) and polyR (6 mM monomer) with varying amounts of UTP. Indeed, we observe the formation of multiphase droplets in all conditions with sufficient UTP (3 mM) to form charge-matched condensates with both polymers (Fig. 4b–d). Where UTP was present at a concentration at which only half the total polymer mix could form a charge-matched condensate (1.5 mM), we observe a single polyR phase (Fig. 4a). A partition coefficient (P) of approximately 1.2, calculated from fluorescence intensities, indicates that polyK is only slightly enriched within this phase compared to polyR (P ≈ 11). Fig. 4: Multiphase condensate behavior. a–d Confocal fluorescence images of multiphase liquid condensates formed from the addition of UTP (1.5 mM (a); 3, 4 or 15 mM (b–d)) to polyK:polyR 50:50 mixtures. Scale bar 20 μm. e Confocal fluorescence images of the fusion of dual-phase coacervates; polyK phase (green), polyR phase unlabeled. f Aspect ratio change for the outer polyK droplet (green) and the inner polyR droplet (purple). g Fusion timescale vs average droplet radius for polyR single-phase droplets (purple circles), polyR dual-phase droplets (purple crosses), polyK single-phase droplets (green circles), and polyK dual-phase droplets (green crosses). Full size image Multiphase condensates under all conditions tested form with polyR as the inner layer and polyK as an external shell. This implies that the polyR phase has a higher density than polyK 48, consistent with its higher viscosity, as well as a higher surface tension 45. We find that fusion of the inner polyR droplets occurs more slowly than fusion of the surrounding polyK droplets (Fig. 4e, f and Supplementary Movie 1). To obtain an approximate value of the surface tension (γ), the inverse capillary velocity (η/γ) can be estimated from the slope of fusion time (τ) vs average droplet radius (l), by assuming the phases are simple liquids in a lower-viscosity medium, where τ ≈ l(η/γ) 4,6,30,59. For single-phase pure polyK droplets, we find η/γ ≈ 0.012 s μm−1. Having measured the viscosity directly, we can calculate an approximate surface tension of 17 μN m−1. We find that, when multiphase droplets fuse, the polyK component fuses on the same timescale as pure polyK, as illustrated by single- and multiphase fusions following the same linear trend (Fig. 4g). For single-phase, pure polyR droplets, we find η/γ ≈ 0.144 s μm−1 and γ ≈ 100 μN m−1; these values are approximately an order of magnitude higher than those obtained for polyK. Similar to polyK, multiphase and single-phase droplets appear to follow the same τ vs l linear trend, indicating that polyR most likely retains its highly viscous properties in a multiphase environment.
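The surface-tension estimates follow directly from the measured inverse capillary velocities. In the sketch below, the polyK entry uses the quoted η = 0.2 Pa.s and η/γ ≈ 0.012 s μm⁻¹, whereas the polyR viscosity is back-solved from the quoted γ ≈ 100 μN m⁻¹ and is therefore an assumption rather than a measured value.

```python
# Illustrative sketch: surface tension from the inverse capillary velocity,
# gamma = eta / (eta/gamma). The polyR viscosity below is back-solved from
# the quoted gamma ~ 100 uN m-1 and is an assumption, not a measured value.
def surface_tension_uN_m(eta_Pa_s, inv_cap_s_per_um):
    inv_cap_s_per_m = inv_cap_s_per_um * 1e6   # s um-1 -> s m-1
    return eta_Pa_s / inv_cap_s_per_m * 1e6    # N m-1 -> uN m-1

print(f"polyK: gamma ~ {surface_tension_uN_m(0.2, 0.012):.0f} uN/m")   # ~17, as quoted
print(f"polyR: gamma ~ {surface_tension_uN_m(14.4, 0.144):.0f} uN/m")  # ~100 by construction
```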
Interestingly, the number of polyR fusion events was found to increase in multiphase droplets, apparently because the readily fusing polyK droplets force polyR droplets into close proximity. polyR antagonizes polyK condensates, triggering polyK release Upon demonstrating that polyR and polyK are capable of forming distinct coexisting phases, but only at sufficiently high UTP concentrations, we next sought to investigate the impact of the order of addition of the polyK/polyR solution components. We find that, although the order of addition does not affect the final equilibrium state, the mechanism by which this equilibrium state is reached is dramatically different. For the multiphase conditions, when polyK is added to pre-formed polyR droplets, initially no change is observed, but with sufficient time a secondary polyK phase forms surrounding the existing polyR liquid phase (Supplementary Fig. 6 and Supplementary Movies 2 – 5 ). As was seen in the premixed samples, only a single polyR phase is observed at limiting UTP concentration (1.5 mM), even at long timescales. More remarkably, however, when we first form charge-matched polyK-UTP condensates (6 mM polyK/1.5 mM UTP) and subsequently add an equal amount of polyR50, within around 60 s we observe complete condensate inversion, with polyK droplets being entirely replaced by polyR (Fig. 5a, b ). Zooming in on individual droplets (Fig. 5c ), we find that polyR50 nucleates droplets within polyK-rich condensates; monitoring the polyK fluorescence, we see that polyR50 nucleation is concomitant with polyK release from the condensate to the surrounding environment (Fig. 5d ). This illustrates that polyR successfully competes for the available UTP, thereby triggering the release of free polyK back to the dilute phase. When the experiment is repeated with polyR100, which assembles into amorphous aggregates in the presence of UTP (Fig. 2a ), we find that polyK is ultimately released after the droplets transform into aggregates (Supplementary Fig. 6 ), further demonstrating the dominance of polyR over polyK interactions. Fig. 5: Condensate inversion and polyK release. a Confocal fluorescence images of fluorescein isothiocyanate (FITC)-labeled polyK (green) being displaced by polyR50 labeled with Dylight594 (purple). Merged images taken at the moment of polyR addition ( t = 0) and after 30, 60, and 90 s. Scale bar 20 μm. b Percentage of slide covered as a function of time for polyK (green) and polyR (purple). Inset images correspond to t = 0 and t = 90 s. Scale bar 20 μm. c Close-up of an individual condensate: green channel showing FITC-polyK; purple channel showing polyR50 only. Scale bar 5 μm. d Intensity of FITC-polyK over time inside a polyK droplet (filled square) and outside of a polyK droplet (open circle). Intensity values correspond to the timeseries displayed in a . Inset displays FITC-polyK before and after displacement by unlabeled polyR50, illustrating polyK displacement into the surrounding media. Scale bar 20 μm. Intensity rescaled in this image for clarity. [polyK] = [polyR] = 6 mM monomer. [UTP] = 1.5 mM. Full size image Manipulating release kinetics and coexisting liquid phases Triggering the rapid release of a component from a condensed phase presents a useful tool for engineering droplet dynamics. We next asked whether tuning our minimal model components could regulate the kinetics of multiphase dynamics. 
We had previously shown that addition of polyR50 to polyK droplets at limiting UTP concentrations (1.5 mM) resulted in rapid inversion and release of polyK to the dilute phase (Fig. 5 ), consistent with the single polyR phase present in our equilibrium experiments (Fig. 4 ). Given that our equilibrium experiments indicate the presence of multiphase droplets at higher UTP concentrations, we hypothesized that increasing the relative UTP abundance would in turn impact the inversion dynamics. Indeed, we find that increasing the UTP concentration controls the kinetics of polyK release as well as the stabilization of coexisting phases over long timescales (Fig. 6a, c and Supplementary Movies 6 – 9 ). We find that, at 3 and 4 mM UTP, inversion still occurs but over longer timescales, within approximately 2 and 5 min, respectively, compared to approximately 1 min for 1.5 mM (Fig. 5a ), with no observable polyK release at 15 mM UTP. Interestingly, for 3 mM UTP and more significantly for 4 mM UTP, we find that, after rapid release of polyK to the dilute phase, a polyK-rich phase begins to re-condense around the polyR phase on the timescale of hours. These dual-phase droplets persist up to at least 24 h (Supplementary Fig. 5 ), resembling the equilibrium state described above. Fig. 6: Control over inversion and multiphase coacervate creation. a Confocal fluorescence images of droplet inversion via addition of Dylight-labeled polyR50 at increasing UTP concentrations (3, 4, 15 mM top to bottom). Timepoints 0, 100, 150, 200 s, and 1 h are shown. Scale bar = 20 μm. b Initial coacervates of polyK paired with (i) UTP, (ii) pU10, and (iii) pU50. Scale bar = 20 μm. c Intensity of FITC-labeled polyK in the dilute phase for UTP concentrations 3 mM (red), 4 mM (gold), and 15 mM (blue). Intensity values correspond to the timeseries displayed in a . d Intensity of FITC-labeled polyK in the dilute phase for UTP (gold), pU10 (red), and pU50 (blue). Intensity values correspond to the timeseries displayed in b . Full size image Given the demonstrated impact of polymer length on droplet viscosity (Table 1 and Fig. 3 ), we then asked whether polymer length could also be used to tune intra-droplet dynamics. We repeated the inversion experiment and compared UTP, pU10, and pU50, all under initially charge-matched conditions (Fig. 6b, d , Supplementary Movies 10 and 11 ). Indeed, we find that pU10 and pU50 progressively slow the polyK release kinetics and increase the longevity of the coexisting phases. Thus tuning either the stoichiometry or the polymer length of droplet components can regulate the dynamics of this multiphase model system. Discussion As the broader impact of liquid phase separation on the fields of cell biology and bioengineering continues to expand, many fundamental questions and challenges remain. For example: how do sequence-level changes influence condensate material properties; how do material properties in turn influence condensate dynamics, multiphase coexistence, and ultimately function; and, finally, can bottom-up sequence design rules be generated to engineer condensates with specific material properties that can be leveraged for controlling condensate dynamics? The molecular and functional differences between arginine and lysine residues make them an ideal model system for extracting fundamental principles bridging the sequence, material, and functional levels. 
In addition to the unique roles in liquid phase separation discussed above, arginine specificity over lysine appears in other functional roles, including membrane interactions 60 , cell entry of antimicrobial peptides 61 , and regulation of voltage-gated ion channels 62 . In addition, both arginine and lysine and their respective post-translational modifications play important roles in chromatin remodeling via regulation of histone proteins 63 . While both lysine and arginine have the same theoretical positive charge at neutral pH (pKa’s ~10.5 and 13.8 for lysine and arginine, respectively), the delocalization of positive charge within the pi-bonded guanidinium side chain of arginine imparts it with enhanced modes of interaction. While both cations can engage in cation–pi interactions, arginine alone can further engage in pi–pi interactions 25 , 28 , 33 via its guanidinium group as well as hydrogen bonding 64 . It is worth noting that, while we cannot rule out local pKa shifts in lysine vs arginine within coacervates that influence effective charge, such shifts would be unlikely to account for the drastic differences we see in our viscosity measurements. Arginine is also more efficient at RNA binding 65 , and recent work 40 suggests that proline–arginine dipeptide repeats interact more strongly with RNA oligonucleotides than proline–lysine repeats, due in part to enhanced pi–pi interactions. This is consistent with our phase diagram and FCS binding data, which show a significant increase in interaction strength for polyR compared to polyK with respect to pU binding. In addition to enhanced interaction strength, the delocalization of charge on the arginine group may contribute to enhanced effective multivalency, both of which would be consistent with the significant increases in emergent viscosity we report here (Fig. 3 ). This additionally aligns with our divergent results for the assembly capacity of polyR/polyK with UDP and UTP, whereby enhanced nucleotide interactions enable only polyR to form droplets with UDP and lead to the aggregation of polyR100-UTP (Fig. 2 ). The same difference in molecular interaction strength that leads to distinct viscosities also drives the assembly of multilayered condensates. Both multiphase inversion and coexistence can be understood in terms of the relative ability of polyR and polyK to compete for UTP binding. Jacobs and Frenkel showed theoretically that, for systems where sufficient interaction strength differences between components exist, multiple distinct phases will form 66 . Interestingly, they also show that, as the number of components increases, the system tends toward forming a single phase. It is therefore important to bear in mind that in vitro systems with few components may tend toward multiple phases at a lower interaction difference than would actually be required in the complex multi-component cellular environment. Experimentally, both Mountain and Keating 48 and Lu and Spruijt 49 showed for a three-component system, say two polycations and a single polyanion, that, where a sufficient interaction strength difference between the polycations exists, two phases will form, with the shared oppositely charged polymer unequally distributed between the two phases based on relative interaction strength. Lu and Spruijt calculate the magnitude of this sufficient difference from density, which they extrapolate from a difference in critical salt concentration 49 . 
We show that, where UTP is the limiting component, this uneven distribution of the shared component is so extreme that only a polyR phase will form; it is only when UTP is in excess that multiphase formation is observed. We believe the UTP concentration-dependent inversion kinetics likely result from a differential concentration gradient between the inside and outside of condensates, as well as from local competition between polyK and polyR for UTP binding. Complex coacervates are generally considered to form between charge-matched quantities of polyanion and polycation; consequently, as the overall UTP concentration increases but the polymer concentration remains the same, the relative external UTP concentration will increase and one would expect the driving force to nucleate within the polyK droplets to decrease (Fig. 6 ). If we consider droplet inversion in terms of competitive binding, where polyK + UTP \(\rightleftharpoons\) polyK-UTP, polyR + UTP \(\rightleftharpoons\) polyR-UTP, and polyK-UTP + polyR \(\rightleftharpoons\) polyK + polyR-UTP, both the rate of polyK-UTP dissociation and the rate of polyR-UTP binding will determine the rate of polyR/UTP droplet formation. At increasing UTP concentration, the rate of dissociation should not be affected. As polyanion length increases, however, a slower rate of dissociation could be expected due to the increased number of interaction sites per chain, accounting for the observed increase in inversion time with increasing polyanion length. We additionally note the formation of vacuoles, presumed to consist of surrounding buffer, during the dynamics of inversion in pU10 droplets (Fig. 6b ) and even more so in pU50 droplets, but not in UTP samples, suggesting that their prevalence increases with viscosity. This non-equilibrium vacuole formation has been observed previously in biomolecular condensates and can be induced in polyK-DNA systems by exposure to an electric field 45 , 67 , 68 . We have not observed vacuole formation when polyK and polyR are premixed, indicating that vacuole formation is not favorable under equilibrium conditions and occurs only during the disruptive process of polyK droplet disassembly. Here we have demonstrated bottom-up control of multi-droplet assembly and dynamics by exploiting the contributions of polymer length, stoichiometry, and the distinct differences between lysine and arginine residues and their respective nucleotide interactions. Employing minimalist polymers and precise rheological measurements, we have dissected the contribution of arginine/lysine residues to the bulk material properties of biomolecular liquid condensates. We demonstrate that the distinct modes of arginine/lysine interactions with mononucleotides and polynucleotides give rise to individual droplets with viscosities that differ by orders of magnitude, which can be further tuned by polymer length. Arginine and lysine polymers are not miscible within condensates, with arginine outcompeting lysine for anionic partners. Importantly, we go on to show that the fundamental differences in arginine/lysine–nucleotide phase behavior can be exploited to trigger the controlled release of lysine sequences and to drive the formation of coexisting immiscible phases with tunable kinetics of self-separation. 
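The competitive-binding scheme invoked above can be made concrete with a toy kinetic model. The sketch below is illustrative only: the rate constants are hypothetical (chosen so that polyR binds UTP far more tightly than polyK), the units are arbitrary, and phase separation itself is not modeled; it simply shows that when polyR is added to pre-formed polyK-UTP complexes at the limiting-UTP condition, UTP redistributes to polyR on a timescale set by the polyK-UTP off-rate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy competition: K + U <-> KU and R + U <-> RU (K = polyK, R = polyR, U = UTP).
kon_K, koff_K = 1.0, 0.10    # assumed polyK-UTP on/off rates
kon_R, koff_R = 1.0, 0.001   # assumed polyR-UTP on/off rates (tighter binding)

def rhs(t, y):
    K, R, U, KU, RU = y
    dKU = kon_K * K * U - koff_K * KU   # net polyK-UTP complex formation
    dRU = kon_R * R * U - koff_R * RU   # net polyR-UTP complex formation
    return [-dKU, -dRU, -dKU - dRU, dKU, dRU]

# Pre-formed polyK-UTP complexes (all 1.5 mM UTP bound), then 6 mM polyR added.
y0 = [4.5, 6.0, 0.0, 1.5, 0.0]   # mM: free K, free R, free U, KU, RU
sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=1.0)
print(f"KU = {sol.y[3, -1]:.3f} mM, RU = {sol.y[4, -1]:.3f} mM")  # UTP ends on polyR
```

In this picture, a longer polyanion maps onto a smaller effective off-rate (more interaction sites per chain), slowing the approach to the polyR-bound endpoint without changing it, consistent with the slower inversion observed for pU10 and pU50.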
Together, this work lends unique insight into the distinct roles of arginine and lysine in liquid phase separation and, more significantly, provides fundamental design principles for leveraging sequence-level components to regulate droplet assembly, dynamics, and multiphase coexistence. These principles present invaluable tools for the regulation and engineering of novel organelles and could feasibly be developed into lysine/arginine tags designed to modulate molecular release and phase behavior with tunable kinetics. Expanding this fundamental model toward increased sequence complexity, component diversity, and post-translational modifications presents exciting new future directions. Methods Materials Poly(L-lysine hydrochloride) (molecular weight (MW) = 1600 Da, N = 10, DP n by nuclear magnetic resonance (NMR) = 8–12), poly(L-lysine hydrochloride) (MW = 8200 Da, N = 50, DP n by NMR = 45–55), poly(L-lysine hydrochloride) (MW = 16 kDa, N = 100, DP n by NMR = 90–110), poly(L-arginine hydrochloride) (MW = 1900 Da, N = 10, DP n by NMR = 8–12), poly(L-arginine hydrochloride) (MW = 9600 Da, N = 50, DP n by NMR = 45–55), and poly(L-arginine hydrochloride) (MW = 19 kDa, N = 100, DP n by NMR = 90–110) were purchased from Alamanda Polymers (Huntsville, AL, USA) and used as received. Poly(L-lysine hydrochloride) (MW = 15–30 kDa) and poly(L-lysine hydrochloride)–fluorescein isothiocyanate (FITC) labeled (MW = 15–30 kDa) were purchased from Sigma-Aldrich. Polymer stock solutions (50 mg ml −1 ) were prepared in nuclease-free water and stored at 4 °C. Solutions were sonicated for 10 min, as per the manufacturer’s instructions, and diluted in Tris buffer prior to use. UTP and UDP were purchased from MP Biomedicals (Solon, OH, USA). UMP was purchased from Sigma. pU RNAs ( N = 10 and N = 50) were purchased from IDT. pU RNAs were received as lyophilized samples that were resuspended in TE buffer (10 mM Tris pH 8.0, 0.1 mM EDTA) at 1 mM and stored at −20 °C. UTP, UDP, and UMP were prepared in nuclease-free water (90 mM) and stored at 4 °C for immediate use or at −20 °C for longer-term storage. Poly-L-arginine labeling Poly(L-arginine hydrochloride) ( N = 50) was labeled with Dylight594 via amide linkage as per the manufacturer’s instructions. Unreacted dye was removed using a HiTrap desalting column, equilibrated with Tris (10 mM, pH 7.5), and connected to an AKTAstart, followed by overnight dialysis (3 kDa cutoff) in the same buffer. The concentration was determined from FCS measurements. Coacervate preparation Coacervate samples were prepared by the addition of charge-matched quantities of polyanion to solutions of polycation (6 mM monomer). Charge matching was assumed to be a 4:1 ratio for UTP, 3:1 for UDP, and 1:1 per nucleotide monomer of pU RNA. For microrheology samples, 200 or 500 nm beads were added prior to the addition of polyanion. Samples were imaged in glass slide–coverslip chambers made with Grace BioLabs spacers. To prevent the droplets from wetting the surface of the well, slides were first incubated in a 1% Pluronics F-127 solution for 1 h, followed by thorough rinsing with MilliQ water. For multiphase complex coacervate experiments, samples were prepared in Grace BioLabs CultureWells with coverslip bottoms and treated with 1% Pluronics F-127 (1 h). 
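As a quick sanity check on the charge-matching ratios just described, a few lines of Python (our helper, not part of the published protocol) reproduce the nucleotide concentrations used throughout the paper for 6 mM polycation monomer:

```python
# Charge matching as assumed in the Methods: each polycation monomer is +1,
# UTP is treated as -4, UDP as -3 and each pU nucleotide monomer as -1.
CHARGE_PER_ANION = {"UTP": 4, "UDP": 3, "pU": 1}

def matched_concentration(monomer_mM, anion):
    """Polyanion concentration (mM) that charge-matches the polycation."""
    return monomer_mM / CHARGE_PER_ANION[anion]

for anion in ("UTP", "UDP", "pU"):
    print(f"{anion}: {matched_concentration(6.0, anion):g} mM")
# UTP: 1.5 mM and UDP: 2 mM (as in Fig. 2); pU: 6 mM per nucleotide monomer.
```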
Coacervate imaging Samples were imaged on a Marianas Spinning Disk confocal microscope (Intelligent Imaging Innovations) consisting of a spinning disk confocal head (CSU-X1, Yokogawa) on a Zeiss Axio Observer inverted microscope equipped with ×100/1.46 numerical aperture (NA) Plan-Apochromat (oil immersion) or ×63/1.4 NA Plan-Apochromat (oil immersion) objectives. Focus was maintained over time using Definite Focus 2 (Zeiss). FITC or Alexa488 were excited with the 488-nm line from a solid state laser (LaserStack) and collected with a 440/521/607/700-nm quad emission dichroic and 525/30-nm emission filter. Dylight and carboxy-labeled beads were excited with the 561-nm line and collected with the same dichroic and a 617/73-nm emission filter. Images were acquired with a Prime sCMOS camera (Photometrics) controlled by SlideBook 6 (Intelligent Imaging Innovations). ImageJ was used to further format and process images. Images of Dylight594-labeled polyR have been false-colored purple to improve contrast. Phase diagrams Phase diagrams were constructed by brightfield imaging of polymer/UTP/salt mixtures prepared in glass slide–coverslip chambers. Imaging was performed approximately 30 min after mixing using a ×63 objective on an inverted Zeiss Axio microscope. Based on these observations, samples were designated as either droplet or no droplet. Fluorescence recovery after photobleaching FRAP experiments were performed on a Marianas Spinning Disk confocal microscope with a ×63/1.4 NA Plan-Apochromat oil immersion objective. An area with radius = 1.5 μm was bleached with a 488-nm laser; the subsequent recovery of the bleached area was recorded with a 488-nm laser. Intensity traces were exported from Slidebook (Intelligent Imaging Innovations, Denver, CO). Correction for photobleaching, normalization, and fitting to an exponential function of the form $$f\left( t \right) = A\left( {1 - e^{\frac{{ - t}}{\tau }}} \right)$$ (5) were performed in Matlab. The final FRAP recovery curve is the average of recovery curves collected from five separate droplets. Microrheology Fluorescent beads of 100, 200, or 500 nm were embedded into droplets typically >50 μm and their motion was tracked over time to obtain the MSD. To avoid boundary effects, only beads several microns away from all droplet interfaces were analyzed. Bead diffusion was tracked on a Marianas Spinning Disk confocal microscope with a ×100/1.46 NA Plan-Apochromat oil immersion objective for 1000 frames with 100, 200, or 500-ms time intervals. Temperature was kept at 19 °C using a microfluidic temperature stage (CherryTemp, CherryBiotech). Particle-tracking code to locate and track bead trajectories in two dimensions ( XY ) was adapted from the Matlab Multiple Particle Tracking Code from The Matlab Particle Tracking Code Repository ( ). Custom Matlab software was then used to analyze bead dynamics. MSD was calculated from time and ensemble averages for all trajectories: $${\rm{MSD}}\left( \tau \right) = \left\langle {\left( {x\left( {\tau + t} \right) - x(t)} \right)^2} \right\rangle + \left\langle {\left( {y\left( {\tau + t} \right) - y(t)} \right)^2} \right\rangle.$$ (6) The dependence of the MSD on lag time ( τ ) follows a power law; the exponent α was determined as the slope of a log–log plot and the diffusion coefficient from the y -intercept: $${\rm{MSD}}\left( \tau \right) = 2dD\tau ^{\alpha },$$ (7) where d is the number of dimensions (here 2), D is the diffusion coefficient, and α is the exponent. Viscosity can be determined from the Stokes–Einstein relation (Eq. 
3 ), assuming a system at equilibrium and a freely diffusing Brownian particle within a solution of viscosity η . The final viscosity is the average of three values collected from individual measurements performed on 3 different days at 19 ± 2 °C. Errors shown are standard deviations derived from these three individual experiments. Inversion experiments Poly(L-lysine hydrochloride) (MW = 15–30 kDa, 10% labeled) and UTP coacervates were allowed to sit for 2 h, at which point droplet settling had subsided. An equal amount of poly-L-arginine 50 (approximately 1% labeled; [polyK] = [polyR]) was then added and a timeseries was recorded. Timeseries were recorded with a time interval of approximately 1 s. t = 0 was assigned as the first frame in which emission was detected in the red channel. Intensity plots (Figs. 5d and 6c, d ) are representative plots from a single timeseries, not averages of multiple runs. The average inversion time from three measurements is shown (Supplementary Fig. 7 ). Fusion experiments For fusion measurements, samples were prepared in Grace BioLabs CultureWells with coverslip bottoms treated with 1% Pluronics F-127 (1 h). Timeseries of fusion events were collected at 49-ms time intervals on a widefield Axio Observer 7 Inverted Microscope (Zeiss) with a ×63/1.4 NA Plan-Apochromat (oil immersion) objective. FITC was excited with a 120-W metal halide lamp (X-cite 120) with 470/40 nm excitation and 525/50 nm emission filters. Images were acquired with an Axiocam 506 mono camera (Zeiss) controlled by the Zen software (Zeiss). ImageJ was used to further format and process images, and MATLAB was used to analyze fusion events as described previously 69 . Fluorescence correlation spectroscopy FCS binding measurements were performed on an inverted Leica TCS-SP8 STED 3X equipped with a ×63 water immersion objective. Fluorophores were excited at 488 nm using a white light laser and detected at 550 ± 20 nm using an HyD detector. Temperature was kept constant at 25 °C using a temperature stage (Instec Inc., CO, USA, Model mK2000B). Data acquisition and calculation of the correlation curve G ( τ ) were performed with the SymPhoTime software (PicoQuant, Germany). Each measurement is the average of ten 30 s traces. Averaged autocorrelation curves were then fit to a single-component model using the following equation: $$G\left( \tau \right) = \frac{1}{{N\left( {1 + \frac{\tau }{{\tau _{\rm{D}}}}} \right)\left( {1 + \frac{\tau }{{\kappa ^2\tau _{\rm{D}}}}} \right)^{0.5}}},$$ (8) where G ( τ ) is the autocorrelation function as a function of time, τ . N is the average number of molecules in the focal volume. τ D is the diffusion time, the average amount of time a molecule spends diffusing through the observation volume. \(\kappa = \frac{{z_0}}{{\omega _0}}\) represents the ratio of the axial ( z 0 ) to radial ( ω 0 ) dimensions of the Gaussian excitation volume. This value was determined by calibration using Atto488-carboxylic acid ( D = 4.0 × 10 −6 cm 2 s −1 at 25 °C). Data availability The data sets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. 
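For readers who prefer a concrete reference point, the microrheology analysis described above (Eqs. (6) and (7) followed by the Stokes–Einstein relation) can be sketched in a few lines of Python. This is a schematic reimplementation under stated assumptions, not the authors' Matlab pipeline; the synthetic check uses a hypothetical bead of 100 nm radius in a fluid at the 36 Pa·s reported for polyR10/UTP droplets.

```python
import numpy as np

kB, T = 1.380649e-23, 292.15   # Boltzmann constant (J/K); 19 C in kelvin

def msd(tracks, dt):
    """Time- and ensemble-averaged 2D MSD (Eq. (6)).
    tracks: list of (N, 2) arrays of XY positions in metres; dt: frame time (s)."""
    n = min(len(tr) for tr in tracks)
    lags = np.arange(1, n)
    vals = np.empty(len(lags))
    for i, lag in enumerate(lags):
        sq = [np.sum((tr[lag:] - tr[:-lag])**2, axis=1) for tr in tracks]
        vals[i] = np.mean(np.concatenate(sq))
    return lags * dt, vals

def fit_viscosity(tau, msd_vals, bead_radius):
    """Fit MSD = 2*d*D*tau^alpha (Eq. (7), d = 2) on log-log axes, then apply
    Stokes-Einstein, eta = kB*T / (6*pi*D*a), for a freely diffusing bead."""
    alpha, intercept = np.polyfit(np.log(tau), np.log(msd_vals), 1)
    D = np.exp(intercept) / 4.0   # MSD(1 s) = 2*d*D with d = 2
    return alpha, D, kB * T / (6 * np.pi * D * bead_radius)

# Synthetic sanity check: Brownian tracks of a bead of radius a in a 36 Pa*s fluid.
rng = np.random.default_rng(0)
a, eta_true, dt = 100e-9, 36.0, 0.1
D_true = kB * T / (6 * np.pi * eta_true * a)
tracks = [np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), (1000, 2)), axis=0)
          for _ in range(20)]
tau, m = msd(tracks, dt)
alpha, D, eta = fit_viscosity(tau[:100], m[:100], a)
print(f"alpha ~ {alpha:.2f}, eta ~ {eta:.0f} Pa*s (true {eta_true})")
```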
Now, a team of biologists at the Advanced Science Research Center at The Graduate Center, CUNY (CUNY ASRC) have identified unique roles for the amino acids arginine and lysine in contributing to liquid phase properties and their regulation. Their findings are available today online in Nature Communications. Known as liquid-liquid phase separation, the process allows some molecules within a cell to cloister themselves into membraneless organelles in order to carry out certain duties without interruption from other molecules. The mechanism can also allow molecules to create multiphase droplets that resemble, say, a drop of honey inside a drop of oil surrounded by water in order to carry out sophisticated jobs. "This is a really exciting new research area because it uncovers a core biological function that, when gone awry, may be at the root of disease, particularly neurodegeneration as in ALS or Alzheimer's," said principal investigator and Graduate Center, CUNY Biochemistry Professor Shana Elbaum-Garfinkle, whose lab at the CUNY ASRC Structural Biology Initiative conducted the study. "With an understanding of how individual amino acids contribute to phase behavior, we can begin to investigate what's going wrong in liquid phase separation that may interfere with normal biological function and potentially design therapies that can modulate the process." Arginine (pink) dissolves and replaces lysine-rich droplets (green). These results present a novel mechanism by which to design, control and/or intervene with existing new liquid phases. Credit: Rachel Fisher Researchers have suspected for a while that arginine and lysine—two of the 20 amino acids that make up all proteins—were responsible for regulating liquid phase separation, but they weren't certain how each contributed to phase behavior and to creating the differing viscosities that cloister molecules into separate communities. "Arginine and lysine are very similar amino acids in terms of both being positively charged, but they differ in terms of binding capability. We were really curious to understand what effect this difference would have on the material properties, such as viscosity or fluidity, of the droplets they form," said Rachel Fisher, the paper's first author and a postdoc in Elbaum-Garfinkle's lab. "We also wanted to know how these differences manifest themselves when the arginine and lysine systems are combined. Will the droplets coexist? When we saw they did, we then wanted to understand how we could modulate this multi-phase behavior." To answer their questions, Elbaum-Garfinkle's team used a technique called microrheology—whereby tiny tracers are used to probe material structures—to track and investigate the properties of arginine and lysine droplets. They found that arginine-rich droplets were over 100 times more viscous than lysine-rich droplets, comparable to the difference between a thick syrup or ketchup and oil. The viscosity differences are significant enough that if lysine and arginine polymers are combined, they don't mix. Instead, they create multi-phase droplets that sit within one another like Dutch nesting dolls. Additionally, arginine has such strong binding properties that under some conditions it can compete with lysine and replace or dissolve lysine droplets. The researchers further identified ways to tune the balance between competition and coexistence of the two phases. The results present a novel mechanism for designing, controlling or intervening in molecular liquid phases. 
| 10.1038/s41467-020-18224-y |
Medicine | Stroke damage mechanism identified | M. Ye et al, 'TRPM2 channel deficiency prevents delayed cytosolic Zn2+ accumulation and CA1 pyramidal neuronal death after transient global ischemia', Cell Death and Disease , 27 November 2014. DOI: 10.1038/CDDIS.2014.494 Journal information: Cell Death and Disease | http://dx.doi.org/10.1038/CDDIS.2014.494 | https://medicalxpress.com/news/2014-11-mechanism.html | Abstract Transient ischemia is a leading cause of cognitive dysfunction. Postischemic ROS generation and an increase in the cytosolic Zn 2+ level ([Zn 2+ ] c ) are critical in delayed CA1 pyramidal neuronal death, but the underlying mechanisms are not fully understood. Here we investigated the role of the ROS-sensitive TRPM2 (transient receptor potential melastatin-related 2) channel. Using in vivo and in vitro models of ischemia–reperfusion, we showed that genetic knockout of TRPM2 strongly prevented the delayed increase in the [Zn 2+ ] c , ROS generation, CA1 pyramidal neuronal death and postischemic memory impairment. Time-lapse imaging revealed that TRPM2 deficiency had no effect on the ischemia-induced increase in the [Zn 2+ ] c but abolished the cytosolic Zn 2+ accumulation during reperfusion as well as ROS-elicited increases in the [Zn 2+ ] c . These results provide the first evidence to show a critical role for TRPM2 channel activation during reperfusion in the delayed increase in the [Zn 2+ ] c and CA1 pyramidal neuronal death and identify TRPM2 as a key molecule signaling ROS generation to postischemic brain injury. Main Transient ischemia is a major cause of chronic neurological disabilities, including memory impairment and cognitive dysfunction, in stroke survivors. 1 , 2 The underlying mechanisms are complex and remain incompletely understood. 3 It is well documented in rodents, non-human primates and humans that pyramidal neurons in the CA1 region of the hippocampus are particularly vulnerable, and these neurons die after transient ischemia, a phenomenon commonly referred to as delayed neuronal death. 4 Studies using in vitro and in vivo models of transient ischemia have demonstrated that an increase in the [Zn 2+ ] c , or cytosolic Zn 2+ accumulation, is a critical factor. 5 , 6 , 7 , 8 , 9 , 10 , 11 There is evidence supporting a role for ischemia-evoked release of vesicular Zn 2+ at glutamatergic presynaptic terminals and subsequent entry into postsynaptic neurons via GluA2-lacking AMPA subtype glutamate receptors (AMPARs) to raise the [Zn 2+ ] c. 12 , 13 , 14 , 15 , 16 Upon reperfusion, while glutamate release returns to the preischemia level, 17 Zn 2+ can activate diverse ROS-generating machineries to generate excessive ROS as oxygen becomes available, which in turn elicits further Zn 2+ accumulation during reperfusion. 18 , 19 ROS generation and cytosolic Zn 2+ accumulation have a critical role in driving delayed CA1 pyramidal neuronal death, 7 , 12 , 20 , 21 , 22 but the molecular mechanisms underlying such a vicious positive feedback during reperfusion remain poorly understood. Transient receptor potential melastatin-related 2 (TRPM2) forms non-selective cationic channels; their sensitivity to activation by ROS, via a mechanism that generates the channel activator ADP-ribose (ADPR), renders diverse cell types, including hippocampal neurons, susceptible to ROS-induced cell death, and thus TRPM2 acts as an important signaling molecule mediating ROS-induced adversities such as neurodegeneration. 
23 , 24 , 25 , 26 Emerging evidence indeed supports the involvement of TRPM2 in transient ischemia-induced CA1 pyramidal neuronal death. 27 , 28 , 29 , 30 This has been attributed to the modulation of NMDA receptor-mediated signaling; although ROS-induced activation of TRPM2 channels results in no change in the excitability of neurons from wild-type (WT) mice, TRPM2 deficiency appeared to favor prosurvival synaptic GluN2A expression and inhibit prodeath extrasynaptic GluN2B expression. 30 A recent study suggests that TRPM2 activation results in extracellular Zn 2+ influx to elevate the [Zn 2+ ] c . 31 The present study, using TRPM2-deficient mice in conjunction with in vivo and in vitro models of transient global ischemia, provides compelling evidence to show ROS-induced TRPM2 activation during reperfusion as a crucial mechanism determining the delayed cytosolic Zn 2+ accumulation, CA1 neuronal death and postischemic memory impairment. Results TRPM2 channels are functionally expressed in hippocampal pyramidal neurons We first examined TRPM2 expression by immunofluorescent staining of hippocampal brain slices and neuron cultures. Strong TRPM2 immunostaining was present in hippocampal neurons ( Supplementary Figure 1 ), particularly in pyramidal neurons identified by the expression of calmodulin-dependent kinase II (CaMKII) ( Figure 1a ). Whole-cell patch-clamp recordings of CA1 pyramidal neurons in hippocampal slices documented ADPR-induced inward currents, which were inhibited by clotrimazole, a TRPM2 channel inhibitor 23 ( Figure 1b ). ADPR-induced currents were also observed in cultured WT hippocampal neurons, as reported by a previous study, 32 but were absent in the TRPM2-knockout (KO) hippocampal neurons ( Figure 1c ). These results consistently confirm functional TRPM2 channel expression in CA1 pyramidal neurons. 32 Figure 1 TRPM2 expression in hippocampal pyramidal neurons. ( a ) Representative immunofluorescent images showing expression of TRPM2 in CaMKII-positive pyramidal neurons in the hippocampus, including the CA1 region. DAPI (4',6-diamidino-2-phenylindole) staining and merged images are also shown. ( b ) Representative traces showing the 1 mM ADPR-induced inward current in WT hippocampal brain slices that was inhibited by 20 μ M clotrimazole (CLT). The solid line above the current recording denotes the zero current level. The holding membrane potential was −60 mV. A summary of the currents recorded using intracellular solutions with or without ADPR is shown on the right. ( c ) Mean inward currents induced by 1 mM ADPR in cultured WT or TRPM2-KO hippocampal neurons. The holding membrane potential was −60 mV. The number of neurons recorded for each case is shown within parentheses. *** P <0.005 for comparison of indicated groups Full size image TRPM2 deficiency prevents CA1 pyramidal neuronal death induced by transient ischemia TRPM2 deficiency had no discernible effect on the numbers of hippocampal neurons and GFAP-positive glial cells during development ( Supplementary Figures 2a–c ). In addition, there was no significant difference in the body weight of adult WT and TRPM2-KO mice. The survival rate of WT and TRPM2-KO mice was examined over a period of 72 h after the mice were subjected to bilateral common carotid artery occlusion (BCCAO) followed by reperfusion, an in vivo model of transient global ischemia. 
The survival rate was 54% for the BCCAO-operated WT mice, significantly lower than the survival rate of 90% observed in BCCAO-operated TRPM2-KO mice ( P <0.05) ( Figure 2a ). To examine the contribution of TRPM2 to delayed CA1 pyramidal neuronal death, we analyzed the neuronal loss in hippocampal slices from the mice that survived 72 h after BCCAO operation ( Figures 2b–e ). Nissl staining revealed a large number of damaged or dead pyramidal neurons in the CA1 region in the BCCAO-operated WT mice (373±72/0.2 mm 2 , n =8), but very few damaged neurons in the sham-operated WT mice (9.0±3.5, n =5) ( Figures 2b and e ). The number of damaged neurons was markedly reduced in the BCCAO-operated TRPM2-KO mice (156±48, n =9) ( Figures 2b and e ). Conversely, NeuN staining revealed many more CA1 pyramidal neurons in the TRPM2-KO mice (236±31/0.2 mm 2 , n =9) than in the WT mice after BCCAO operation (46±20, n =8) ( Figures 2c and d ). These results clearly demonstrate that TRPM2 has an important role in mediating transient ischemia-induced delayed neuronal death. Figure 2 TRPM2 deficiency protects against delayed CA1 pyramidal neuronal death induced by transient global ischemia. ( a ) Summary of the survival rate for WT and TRPM2-KO mice during 72 h after BCCAO operation. The number of mice used for each case is shown within parentheses. ( b and c ) Representative images showing Nissl ( b ) or NeuN ( c ) staining of hippocampal slices. The areas in the rectangle boxes are shown in enlarged images underneath. ( d and e ) Quantitative analysis of NeuN-positive ( d ) and damaged neurons ( e ) in BCCAO- or sham-operated WT and TRPM2-KO mice as shown in ( b and c ). The number of mice used for each case is shown within parentheses in ( e ). *** P <0.005 for comparisons within WT or TRPM2-KO mice; ††† P <0.005 for comparison between WT and TRPM2-KO mice Full size image TRPM2 deficiency protects against memory impairment after transient ischemia It is well established that the CA1 region of the hippocampus is pivotal for learning and memory. We thus evaluated whether TRPM2 deficiency can also protect against transient ischemia-induced memory impairment. The WT and TRPM2-KO mice had similar locomotor function and exploratory ability as assessed by the open-field test ( Supplementary Figure 3 ). However, the WT and TRPM2-KO mice after BCCAO exhibited significant differences in the novelty environment habituation test ( Figures 3a and b ). Unlike the sham-operated WT mice, which showed reduced activities in exploring the pre-exposed environment (66.4±8.3% of the first test, n =6), the BCCAO-operated WT mice exhibited significantly increased activities (131.9±11.6%, n =5; not significantly different from the activities during the first test), indicating memory impairment ( Figure 3b ). In striking contrast, the TRPM2-KO mice displayed similarly reduced activities after BCCAO (71.1±12.2%, n =6) or sham operation (63.0±10.9%, n =7) ( Figure 3b ). Figure 3 TRPM2 deficiency prevents memory impairment induced by transient global ischemia. ( a ) Representative tracking traces in the novelty environment habituation test. BCCAO or sham operation was introduced after the second test. ( b ) Summary of the locomotor activities as shown in ( a ). ** P <0.01 for comparison between the activities during the second test before BCCAO and the third test after BCCAO in WT mice, and also between the activities in the third test of the BCCAO- and sham-operated WT mice. 
†† P <0.01 for comparisons between BCCAO-operated WT mice and sham- or BCCAO-operated TRPM2-KO mice. ( c ) Summary of the latency to find the escape platform during the 5-day training session. The number of mice examined for each case is shown within parentheses. †† P <0.01 for comparisons between BCCAO-operated WT mice and the other three groups of mice. ( d ) Summary of the percentage of time searching for the escape platform in the target quadrant during the probe test. * P <0.05 for comparison between sham- and BCCAO-operated WT mice; NS, no significant difference for indicated comparisons. The number of mice examined for each case is shown within parentheses in ( b ) for the novelty environment habituation test and in ( c ) for the water maze test Full size image We also examined spatial memory using the water maze test ( Figures 3c and d ). Over the 5-day training session, the latency to find the escape platform progressively shortened for both WT and TRPM2-KO mice. At the end of the training session, however, the latency was significantly longer for the BCCAO-operated WT mice (36.8±4.5 s, n =7) than for the sham-operated WT mice (15.3±2.7 s, n =8; P <0.01) ( Figure 3c ), confirming impaired learning ability. In contrast, there was no difference in the latency between the BCCAO-operated (18.7±5.2 s, n =7) and sham-operated TRPM2-KO mice (14.3±1.8 s, n =7) ( Figure 3c ), indicating that TRPM2 deficiency prevented such learning impairment. During the probe test, in which the escape platform was removed, the time the mice spent searching for the missing platform in the target quadrant was recorded to assess spatial memory ( Figure 3d ). The BCCAO-operated WT mice spent significantly less time in the target quadrant (18.6±3.5%, n =7) in comparison with the sham-operated WT mice (34.1±5.0%, n =8; P <0.05), suggesting impaired spatial memory, as reported in previous studies. 33 However, there was no significant difference in the time the TRPM2-KO mice spent in the target quadrant after sham (38.7±7.8%, n =7) and BCCAO operation (31.2±5.3%, n =7) ( Figure 3d ). Taken together, these in vivo studies provide strong evidence to support a crucial role for TRPM2 in mediating memory impairment after transient ischemia. TRPM2 deficiency prevents delayed increase in the [Zn 2+ ] c As mentioned above, there is strong evidence to support the delayed increase in the [Zn 2+ ] c as a key factor driving CA1 neuronal death. 3 , 5 , 9 , 10 , 11 , 14 , 34 , 35 , 36 , 37 Moreover, using single-cell imaging and tetracycline (Tet)-inducible TRPM2-stable HEK293 cells, we found that H 2 O 2 evoked a modest but significant increase in the [Zn 2+ ] c in extracellular Zn 2+ -free solutions, and a further >5-fold increase upon introduction to extracellular Zn 2+ -containing solutions, in TRPM2-expressing cells (Tet + ); in contrast, there was virtually no H 2 O 2 -induced increase in the [Zn 2+ ] c in uninduced cells (Tet − ) ( Supplementary Figure 4 ). These results are consistent with a recent study suggesting that TRPM2 channel activation results in increases in the [Zn 2+ ] c . 38 We therefore hypothesized that TRPM2 deficiency protects against the delayed neuronal death by preventing TRPM2-dependent increases in the [Zn 2+ ] c during reperfusion. To test this hypothesis, we performed Nissl and TSQ double staining of hippocampal slices from the WT and TRPM2-KO mice 72 h after BCCAO operation to determine postischemic [Zn 2+ ] c and cell death in CA1 pyramidal neurons ( Figure 4 ). 
Neuronal death in the SP layer was clearly accompanied by an increase in the [Zn 2+ ] c in the BCCAO-operated WT mice ( Figure 4a ), and such increases in neuronal death and the [Zn 2+ ] c were nearly abolished in the BCCAO-operated TRPM2-KO mice ( Figure 4b ). Further detailed analysis indicates that the [Zn 2+ ] c was significantly increased in the BCCAO-operated WT mice (37.2±8.0, n =3) compared with the sham-operated WT mice (4.3±0.5, n =3) ( Figure 4c ). Such an increase was markedly suppressed in the BCCAO-operated TRPM2-KO mice (11.1±3.8, n =3). The [Zn 2+ ] c in the BCCAO-operated TRPM2-KO mice appeared to increase compared with the sham-operated TRPM2-KO mice (1.7±0.3, n =3), but the increase was not statistically significant ( P >0.05). We also examined the [Zn 2+ ] c in the WT and TRPM2-KO mice at 24 and 48 h, as well as 72 h, after BCCAO operation. The increase in the [Zn 2+ ] c in the WT mice became discernible at 48–72 h but not at 24 h ( Supplementary Figure 5 ), supporting the conclusion that TRPM2 deficiency prevents the delayed increase in the [Zn 2+ ] c . Figure 4 TRPM2 deficiency prevents delayed increases in the [Zn 2+ ] c and CA1 pyramidal neuronal death after transient global ischemia. ( a and b ) Representative images of TSQ (blue) and Nissl staining of hippocampal brain slices from BCCAO- or sham-operated WT ( a ) and TRPM2-KO mice ( b ). SR, SP and SO represent the stratum radiatum, pyramidale and oriens layers. The areas in the red rectangle box are shown in enlarged images underneath, with the double-dotted lines highlighting the SP layer. ( c ) Summary of the TSQ staining presented as a percentage of that in the lateral ventricle as shown in ( a and b ). The number of mice examined for each case is shown within parentheses in ( c ). * P <0.05 and NS, no significant difference, for comparisons within WT or TRPM2-KO mice; † P <0.05 for comparison between BCCAO-operated WT and TRPM2-KO mice Full size image To further corroborate these findings, we also used oxygen/glucose deprivation–reperfusion (OGD-R), an in vitro model of transient ischemia, in acute hippocampal brain slices ( Figure 5a ). Low levels of Zn 2+ fluorescence and neuronal death, similar between WT and TRPM2-KO mice, were observed in the control slices ( Figures 5b and c ), likely resulting from slice preparation. OGD-R induced increases in both the [Zn 2+ ] c and pyramidal neuronal death in the WT slices, as reported previously, 8 , 13 and both responses were absent in the TRPM2-KO slices ( Figures 5b and c ). Figure 5 TRPM2 deficiency prevents increases in the [Zn 2+ ] c and CA1 pyramidal neuronal death after oxygen glucose deprivation–reperfusion. ( a ) Representative images showing Newport Green and PI staining of WT and TRPM2-KO hippocampal brain slices after OGD-R or control operation. ( b and c ) Quantitative analysis of Zn 2+ fluorescence intensity ( b ) and dead neurons ( c ) as shown in ( a ). The number of slices examined in each case is shown within parentheses in ( c ). *** P <0.005 and NS, no significant difference for comparisons within WT or TRPM2-KO mice; †† P <0.01 for comparisons between WT and TRPM2-KO mice Full size image Therefore, our studies using in vivo and in vitro models of transient ischemia provide consistent evidence to indicate that TRPM2 has an important role in the delayed increase in the [Zn 2+ ] c and cell death in CA1 pyramidal neurons. 
Recent studies have reported that the zinc transporters (ZnT1, ZnT2, ZnT3 and ZnT6) and the TRPM7 channel are involved in the regulation of Zn 2+ homeostasis in the brain. 11 , 39 , 40 We performed real-time RT-PCR to test whether TRPM2 deficiency altered their expression. There was no significant difference at the mRNA level in the hippocampus of WT and TRPM2-KO mice ( Supplementary Figure 6 ), suggesting that these Zn 2+ -regulating mechanisms play no major role in the attenuated delayed increase in the [Zn 2+ ] c or in the protection against pyramidal neuronal death resulting from TRPM2 deficiency. TRPM2 deficiency abolishes reperfusion- and ROS-induced increase in the [Zn 2+ ] c To understand mechanistically how TRPM2 is engaged in the delayed increase in the [Zn 2+ ] c , we used time-lapse confocal imaging to monitor the temporal changes in the [Zn 2+ ] c during OGD-R in cultured hippocampal neurons from the WT and TRPM2-KO mice. OGD induced a robust increase in the [Zn 2+ ] c in the WT neurons ( Figures 6a and b ). Such an increase was completely inhibited by CaEDTA, a membrane-impermeable and specific Zn 2+ chelator, 14 , 15 or by Naspm, a GluR2-lacking AMPAR-selective antagonist 16 ( Figures 6a–c ). The TRPM2-KO neurons exhibited a basal [Zn 2+ ] c and an OGD-induced increase in the [Zn 2+ ] c similar to those of the WT neurons ( Figures 6a and b ). These results confirm a critical role for AMPARs, 13 , 14 , 16 and also exclude a role for TRPM2 in elevating the [Zn 2+ ] c during ischemia. In stark contrast to the sustained [Zn 2+ ] c in the WT neurons upon reperfusion, the [Zn 2+ ] c in the TRPM2-KO neurons declined rapidly, returning almost to the basal level within 10 min ( Figures 6a and c ). Consistent with the well-established fact that reperfusion generates excessive ROS, 41 our results provide the first evidence that the ROS-sensitive TRPM2 channel is a molecular mechanism exclusively required for the delayed increase in the [Zn 2+ ] c , or cytosolic Zn 2+ accumulation, during reperfusion. Figure 6 TRPM2 deficiency abolishes the increase in the [Zn 2+ ] c during reperfusion without altering Zn 2+ influx during ischemia in cultured hippocampal neurons. ( a ) Representative images showing the [Zn 2+ ] c under basal conditions, 2 and 60 min after OGD and 10 min after reperfusion in cultured hippocampal neurons from WT mice without or with treatment by CaEDTA or Naspm, and from TRPM2-KO mice. ( b and c ) Mean changes in the [Zn 2+ ] c at 60 min of OGD ( b ) and 10 min of reperfusion ( c ) as shown in ( a ) and expressed as the percentage of the basal [Zn 2+ ] c denoted by the dotted lines. The number of neurons examined for each case is shown within parentheses in ( b ). *** P <0.005 for comparisons among different WT groups; ††† P <0.005; and NS, no significant difference for comparisons between WT and TRPM2-KO mice. Full size image To test directly whether TRPM2 activation underpins the ROS-induced increase in the [Zn 2+ ] c , we monitored the [Zn 2+ ] c in cultured hippocampal neurons during exposure to H 2 O 2 ( Figure 7 ). H 2 O 2 at 100–300 μ M evoked a marked increase in the [Zn 2+ ] c , leading to Zn 2+ accumulation in some punctate structures in the WT neurons, as reported previously. 42 This H 2 O 2 -induced increase in the [Zn 2+ ] c was strongly attenuated by TPEN, a selective Zn 2+ chelator ( Figures 7a and c ). 
Moreover, H 2 O 2 -induced Zn 2+ responses were completely absent in the TRPM2-KO neurons ( Figures 7b and c ), further supporting an essential role for TRPM2 in the ROS-induced increase in the [Zn 2+ ] c . Figure 7 TRPM2 deficiency abolishes the H 2 O 2 -induced increase in the [Zn 2+ ] c in cultured hippocampal neurons. ( a and b ) Representative images showing the [Zn 2+ ] c in cultured hippocampal neurons from WT and TRPM2-KO mice before (basal) and during exposure to 300 μ M H 2 O 2 . The WT neurons were further exposed to 10 μ M TPEN for 10 min. The arrow points to the neuron in ( b ). ( c ) Mean [Zn 2+ ] c induced by H 2 O 2 at 40 min and after TPEN treatment as shown in ( a ), and presented relative to the basal [Zn 2+ ] c denoted by the dotted line. The number of neurons examined for each case is shown within parentheses in ( c ). *** P <0.005 for comparison with the basal level or before and after TPEN treatment; ††† P <0.005 for comparison between WT and TRPM2-KO mice Full size image TRPM2 deficiency attenuates ROS production As mentioned above, Zn 2+ can activate diverse ROS-generating machineries to generate excessive ROS as oxygen becomes available during reperfusion. 18 , 19 We were therefore interested in whether the loss of the TRPM2-dependent increase in the [Zn 2+ ] c led to a reduction in ROS production after transient ischemia. Thus, we measured superoxide production in situ in the CA1 region at 3.5, 24, 48 and 72 h after BCCAO operation. As expected, in the WT mice, there was a strong increase in ROS production at 3.5 h that declined over the next 72 h ( Figure 8 and Supplementary Figure 7 ). The ROS production occurred noticeably earlier than the increases in the [Zn 2+ ] c (cf. Supplementary Figure 5 ). The ROS production in the BCCAO-operated TRPM2-KO mice was also increased, but the ROS level was significantly attenuated in comparison with that detected in the BCCAO-operated WT mice over the same examination period ( Figure 8 and Supplementary Figure 7 ). These results indicate that TRPM2 is critical for ROS production during reperfusion. Figure 8 TRPM2 deficiency attenuates ROS generation after transient global ischemia. ( a ) Representative images of dHEt staining of hippocampal slices from BCCAO- or sham-operated WT and TRPM2-KO mice. ( b ) Summary of dHEt staining intensity as shown in ( a ). The number of mice examined for each case is shown within parentheses. ** P <0.01 and *** P <0.005 for comparisons within WT or TRPM2-KO mice; †† P <0.01 for comparisons between WT and TRPM2-KO mice Full size image Discussion The present study, using TRPM2-deficient mice in conjunction with in vivo and in vitro models of transient ischemia, has made several important findings that support a critical role for TRPM2 activation during reperfusion in determining the delayed increases in the [Zn 2+ ] c , CA1 pyramidal neuronal death, and memory impairments. First, TRPM2 deficiency protected against CA1 pyramidal neuronal death induced by transient ischemia ( Figures 2 , 4 and 5 ), which is in agreement with recent reports. 27 , 28 , 29 , 30 Second, preventing transient ischemia-induced CA1 pyramidal neuronal death by TRPM2 deficiency offers strong protection against postischemic memory impairment, as assessed by the novelty environment habituation and water maze tests ( Figure 3 ), supporting a critical role for TRPM2-dependent CA1 pyramidal neuronal death in the development of learning and memory deficits. 
Third, TRPM2 was required for the increase in the [Zn 2+ ] c ( Figure 4 ) and ROS generation ( Figure 8 ) during reperfusion. Finally, TRPM2 was essential for the ROS-induced delayed increase in the [Zn 2+ ] c in hippocampal neurons ( Figure 7 ) as well as the ROS-induced increase in the [Zn 2+ ] c in HEK293 cells ( Supplementary Figure 4 ). Taken together, these findings provide a novel mechanistic insight into the molecular processes underlying transient ischemia-induced brain injury. It is well established that ischemia–reperfusion leads to the generation of excessive ROS and increases in the [Zn 2+ ] c , which in turn cause delayed CA1 pyramidal neuronal death and postischemic brain injury. 2 , 3 , 19 , 41 The underlying Zn 2+ signaling mechanisms are not fully understood. Presynaptic Zn 2+ release and subsequent cytosolic accumulation via Zn 2+ -permeable channels, particularly GluR2-lacking AMPARs, represent a well-recognized mechanism. 13 , 14 , 15 , 16 , 17 , 33 There is accumulating evidence to support the synaptic presence of GluR2-lacking AMPARs in CA1 pyramidal neurons, 43 , 44 , 45 , 46 , 47 , 48 including a recent study showing that transient ischemia induces fast endocytosis of the GluA2 subunit and insertion of the GluA1 subunit in cultured hippocampal pyramidal neurons. 49 The finding from the present study that the OGD-induced increase in the [Zn 2+ ] c was prevented by removing extracellular Zn 2+ with CaEDTA or by the GluA2-lacking AMPAR-specific antagonist Naspm ( Figures 6a and b ) is highly consistent with the notion that GluA2-lacking AMPARs in hippocampal CA1 pyramidal neurons serve as the Zn 2+ influx pathway during ischemia. 13 , 14 , 15 , 16 TRPM2 deficiency altered neither the basal [Zn 2+ ] c nor the ischemia-induced increase in the [Zn 2+ ] c , strongly excluding a role for TRPM2 in mediating the ischemia-induced increase in the [Zn 2+ ] c ( Figure 6b ). In striking contrast, TRPM2 deficiency caused rapid loss of the cytosolic Zn 2+ upon reperfusion ( Figure 6c ), revealing an exclusive role for TRPM2 activation during reperfusion in the delayed increases in the [Zn 2+ ] c ( Figure 4 and Supplementary Figure 5 ). This finding reconciles well with the facts that reperfusion provides the oxygen that the ROS-generating machineries need to generate ROS during reperfusion 37 ( Figure 8 and Supplementary Figure 7 ), that TRPM2 is an important ROS sensor 22 , 23 , 24 , 25 and that activation of TRPM2 channels leads to increases in the [Zn 2+ ] c . 30 Furthermore, consistent with the notion that Zn 2+ can induce further ROS production via activating various ROS-generating mechanisms, 8 , 13 , 19 , 20 , 21 , 39 , 47 TRPM2 deficiency strongly but incompletely suppressed ROS generation ( Figure 8 ). The selective loss of cytosolic Zn 2+ accumulation resulting from TRPM2 deficiency also attenuated ROS production during reperfusion ( Supplementary Figure 7 ) and conferred potent neuronal protection ( Figures 2 , 3 , 4 , 5 ), providing compelling evidence that TRPM2 activation during reperfusion is a key step initiating the vicious positive feedback mechanism driving the delayed increase in the [Zn 2+ ] c , cell death in CA1 pyramidal neurons and memory impairments after transient ischemia. Such an exclusive role of TRPM2 during reperfusion predicts that TRPM2 deficiency can only protect against brain damage induced by transient (followed by reperfusion) but not permanent (no reperfusion) ischemia, which has been shown in a recent study. 
29 The postischemic TRPM2-dependent Zn 2+ mechanism revealed in the present study can also satisfactorily explain why removal of Zn 2+ by CaEGTA 15 or pharmacological inhibition of TRPM2 27 after transient ischemia was also effective in preventing the delayed CA1 pyramidal neuronal death. 15 , 27 The present study shows that TRPM2 is present on hippocampal pyramidal neurons ( Figure 1 ), but whether it also has an intracellular localization, as found in other cell types, 50 remains to be investigated. Further studies are required to determine the sources responsible for the increase in the [Zn 2+ ] c during reperfusion and the mechanisms responsible for further ROS production. The H 2 O 2 -induced increase in the [Zn 2+ ] c in cultured hippocampal neurons was observed in extracellular Zn 2+ -free solutions ( Figure 7 ) and, as suggested previously, 51 may result at least in part from intracellular Zn 2+ release. It should be mentioned that TRPM2 is expressed in glial cells as well as in neurons in the brain; 23 non-neuronal TRPM2 may also be involved in neuronal death induced by transient ischemia, and further studies are required to determine its contribution. In summary, the present study shows TRPM2 activation during reperfusion as a crucial mechanism responsible for the delayed increases in the [Zn 2+ ] c that drive transient ischemia-induced CA1 pyramidal neuronal death and memory impairments, and suggests that TRPM2 inhibition is a promising strategy for developing novel therapeutic treatments to mitigate the cognitive sequelae following transient ischemia or stroke. Materials and Methods Mice TRPM2-KO mice were generated in the C57BL/6 background as detailed in our recent study. 52 WT and TRPM2-KO mice were housed under standard conditions with a 12/12 h light/dark cycle and free access to food and water. All animal use procedures were approved by the Committees at Zhejiang University and Leeds University for the Care and Use of Laboratory Animals. All the experiments were performed at room temperature unless otherwise indicated. The behavioral studies and the analysis of images were performed in a double-blind manner, without knowledge of the treatments given to the animals. BCCAO and reperfusion The bilateral common carotid arteries in 8- to 12-week-old male mice were exposed by incision and occluded using microaneurysm clips for 15 min under 2% isoflurane in a gas mixture of 70% N 2 O/30% O 2 delivered via a face mask. Body temperature was maintained at 37±0.5 °C with a heating pad during surgery, with rectal temperature monitored using a digital thermometer. After blood flow was restored and the incision sutured, the mice were kept at 37±2 °C for ≥4 h and recovered under normal housing conditions for the indicated time. Cerebral blood flow was monitored by laser Doppler flowmetry (Moor, Devon, UK) to confirm the cessation and restoration of blood flow. The survival rate was calculated at 12, 24, 48 and 72 h after BCCAO operation. Hippocampal neuron culture Hippocampal tissues from postnatal day 1 mice were chopped and digested in 0.5% trypsin in Hank's balanced salt solution (HBSS) at 37 °C for 13 min. Dissociated cells were plated at 1 × 10 5 /cm 2 in poly-L-lysine-coated 35-mm dishes and incubated in the Neurobasal medium containing 0.5 mM glutamax, 50 U/ml penicillin, 50 μ g/ml streptomycin and 2% B27 (Invitrogen, Carlsbad, CA, USA) for 3 days. Cells were then maintained in the Neurobasal medium containing 2.5 μ M cytosine arabinofuranoside, which was replaced every 4 days. 
Immunostaining, Nissl and TSQ staining Mice were anesthetized with sodium pentobarbital and killed 24, 48 or 72 h after BCCAO. For immunostaining, brains were fixed with 4% paraformaldehyde and cut into 30-μm-thick coronal sections (1.58–2.18 mm posterior to Bregma) using a Leica CM3050S cryostat (Leica Biosystems, Mannheim, Germany). After blocking with goat serum in phosphate-buffered saline (PBS) containing 0.2% Triton X-100 for 1 h, slices were incubated with rabbit anti-TRPM2 (Abcam, Cambridge, UK; 1 : 200), mouse anti-CaMKII (Abcam; 1 : 500), mouse anti-GFAP (Invitrogen; 1 : 200) or guinea-pig anti-NeuN (Millipore, Billerica, MA, USA; 1 : 1000) at 4 °C overnight, and then with Alexa Fluor 546 anti-rabbit IgG, Alexa Fluor 546 anti-guinea-pig IgG or Alexa Fluor 488 anti-mouse IgG antibody (Invitrogen; 1 : 1000) for 1 h, followed by DAPI staining (Invitrogen; 1 : 10 000) for 15 min in some experiments. Nissl staining was performed according to the manufacturer's instructions (Beyotime, Suzhou, China). For TSQ and Nissl staining, brains were frozen and cut into 20-μm-thick coronal sections. Air-dried slices were stained with 100 μM TSQ (Molecular Probes, Invitrogen) for 90 s and, after the fluorescent images were captured, stained with Nissl. Images were captured using an Olympus FV1000 confocal microscope (Olympus, Tokyo, Japan) and Image-Pro Plus (Media Cybernetics, Inc., Bethesda, MD, USA). For immunostaining of single cultured hippocampal neurons, cells were seeded at a density of ~65 cells/mm 2 on 13-mm poly-L-lysine-coated coverslips placed in 24-well plates and cultured for 21 days in vitro (DIV). Cells were incubated in fixing solution (PBS containing 4% paraformaldehyde and 4% sucrose) for 10 min. After washing in PBS, cells were incubated with blocking serum solution (PBS containing 5% BSA and 0.4% Triton X-100) for 1 h at room temperature. Cells were incubated with rabbit anti-TRPM2 antibody (Bethyl, Montgomery, TX, USA; 1 : 1000) and mouse anti-Neuro-ChromTM Pan Neuronal Marker antibody (Millipore; 1 : 100) in blocking serum solution at 4 °C overnight and, after washing in PBS, incubated with fluorescein isothiocyanate-conjugated anti-rabbit IgG secondary antibody (Sigma, St. Louis, MO, USA; 1 : 600) for 1 h. Cells were washed with PBS and incubated with tetramethylrhodamine isothiocyanate-conjugated anti-mouse IgG secondary antibody (Sigma; 1 : 500) for 1 h. After cells were washed with PBS and rinsed in water, coverslips were mounted with SlowFade Gold antifade reagent with DAPI (Invitrogen) and kept at 4 °C overnight. Fluorescent images were captured using an inverted LSM700 microscope (Carl Zeiss, Jena, Germany). Acute brain slice preparation Four-week-old male mice were anesthetized with diethyl ether and brains were cut into 300-μm-thick transverse slices using a Leica 1200S vibratome (Mannheim, Germany) in ice-cold solution (in mM: 2.5 KCl, 7 MgSO 4 , 0.5 CaCl 2 , 1.25 NaH 2 PO 4 , 25 NaHCO 3 , 10 glucose and 210 sucrose, pH 7.3). Slices were first incubated at 34 °C to recover for 30 min and then at room temperature for ≥45 min in artificial cerebral spinal fluid (ACSF) (in mM: 119 NaCl, 2.5 KCl, 1 NaH 2 PO 4 , 26.2 NaHCO 3 , 2.5 CaCl 2 , 1.3 MgSO 4 and 11 glucose, pH 7.3) bubbled with 95% O 2 /5% CO 2 .
Patch-clamp recording Whole-cell currents were recorded from cultured hippocampal neurons or acute brain slices, using an Axopatch 200B or Multiclamp 700B amplifier (Molecular Devices, Sunnyvale, CA, USA) and protocols as described previously. 32 The intracellular solution for recording cultured neurons contained (in mM): 147 Cs-gluconate, 10 HEPES, 2 MgCl 2 and 1 ADPR (pH 7.3); the extracellular solution contained (in mM) 147 NaCl, 2 KCl, 10 HEPES, 13 glucose and 2 CaCl 2 , plus 0.2 μM TTX (pH 7.4). The intracellular solution for recording brain slices contained (in mM): 135 CsMeSO 4 , 8 NaCl, 10 HEPES, 4 MgATP, 0.3 EGTA, 0.3 ADPR and 5 QX314 (pH 7.3), and Mg 2+ -free ACSF was used as the extracellular solution. Clotrimazole at 20 μM was diluted in the extracellular solution. Signals were filtered at 2 kHz and digitized at 10 kHz. Open-field test The open-field test was conducted in an open plastic chamber (45 cm × 45 cm × 45 cm). Mice were individually placed in the corner of the chamber and allowed to explore freely for 15 min. The locomotor activity of the animals in the field was measured using an automated video-tracking system (ViewPoint, Lyon, France). Measurement began immediately after placement in the chamber. The total distance mice traveled was calculated, and the cumulative time mice spent in the central square of the open plastic chamber was used as an indicator of exploratory ability. Novelty environment habituation test Similar to the open-field test, the mice were placed in an open plastic chamber (45 cm × 45 cm × 45 cm) and allowed to move freely for 5 min, and their locomotor activities were monitored using a ViewPoint video-tracking system. This test was repeated a further two times to examine their memory of the pre-exposed chamber environment, and the mice were subjected to sham or BCCAO operation after the second test. The novelty environment habituation score was calculated as the locomotor activity during the second and third tests expressed as a percentage of that during the first test (a short sketch of this calculation follows the water maze description below). Water maze test This spatial learning and memory test was conducted as described in a previous study. 53 In brief, a circular water maze tank, 100 cm in diameter and 60 cm deep, was filled with water maintained at 22±1 °C. The water was made opaque with white, non-toxic paint. A 10-cm diameter escape platform was submerged 0.5 cm below the water surface in a fixed position. Distinct cues were painted on the walls. The whole procedure was performed over a period of 6 days, comprising a training session during the first 5 days and a probe test on the last day. Each mouse, 7 days after sham or BCCAO operation, was trained in four trials per day, with a 30-s intertrial interval, during the training session. The mouse was placed in one of four random start locations facing the wall. A trial was completed when the mouse touched the platform or 60 s elapsed. If the mouse failed to find the platform within the allotted time during a given trial, it was moved onto the platform by the experimenter. The probe test lasted for 60 s and was conducted 24 h after the last training trial using the same water tank with the escape platform removed. The latency for mice to reach the escape platform during training and the time mice spent searching for the missing escape platform in the target quadrant during the probe test were recorded and analyzed using an automated tracking system (Coulbourn Instrument, Whitehall, PA, USA; ACTIMETRICS software, Wilmette, IL, USA).
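As referenced in the habituation test above, the score is simple arithmetic. The following Python sketch is illustrative only (invented numbers and function name, not the authors' analysis code):

```python
# Minimal sketch (illustrative, not the authors' code) of the novelty
# environment habituation score: locomotor activity during the second and
# third tests expressed as a percentage of the first test.

def habituation_scores(activity):
    """activity: locomotor readouts [test1, test2, test3] for one mouse."""
    baseline = activity[0]
    return [100.0 * a / baseline for a in activity[1:]]

# Example: a mouse travelling 12.0 m, then 7.8 m and 6.1 m across the tests
print(habituation_scores([12.0, 7.8, 6.1]))  # -> [65.0, 50.8...] % of test 1
```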
In situ detection of ROS production The production of superoxide in situ was detected as follows. Dihydroethidium (dHEt) (Invitrogen) at 1 mg/kg body weight was injected intraperitoneally at the beginning of BCCAO. Mice were killed 3.5, 24, 48 or 72 h after termination of BCCAO and reperfusion, and fixed with 4% PFA in PBS. Brains were fixed further in 4% PFA overnight. Three hippocampal sections at 150-μm intervals were prepared (Vibratome 1000 Plus, Royston, Herts, UK). Images were captured with a confocal fluorescence microscope with excitation at 510–550 nm and emission at 580 nm. Ethidium signal intensity was expressed as the mean fluorescence in neuronal cell bodies in the hippocampal CA1 region. Data were obtained from three independent experiments. OGD and reperfusion Glucose-free ACSF (glucose replaced with NaCl) and normal ACSF were bubbled, 30 min before and during use, with 95% N 2 /5% CO 2 or 95% O 2 /5% CO 2 , respectively. Acute brain slices were perfused in normal ACSF for 30 min, glucose-free ACSF for 1 h and then washed with normal ACSF for 30 min, whereas control slices remained in normal ACSF for 2 h. The slices were then incubated for 10 min in normal ACSF containing 5 μg/ml propidium iodide (PI) and 20 μM Newport Green (Invitrogen). Fluorescence images were captured using an Olympus FV1000 confocal microscope; Zn 2+ fluorescence intensity was measured using ImageJ software (NIH, Bethesda, MD, USA), and PI-stained cells were counted in the same stratum pyramidale of the CA1 region. Data were from at least three independent experiments. Cell imaging Hippocampal neurons of 12–14 DIV were incubated with 1 μM FluoZin3-AM and 0.02% F127 (Invitrogen) at 37 °C for 1 h and maintained for another hour in normal solution (in mM: 129 NaCl, 5 KCl, 3 CaCl 2 , 1 MgCl 2 , 10 glucose and 10 HEPES, pH 7.4). Neurons were perfused for 60 min with glucose-free solution (in mM: 140 NaCl, 5 KCl, 3 CaCl 2 , 0.05 glycine and 10 HEPES, pH 7.4) saturated with 95% N 2 /5% CO 2 and then normal solution for 10 min. In some experiments, 100 μM CaEDTA or 25 μM Naspm was included. DIV14 hippocampal neurons used to study H 2 O 2 -induced changes in the [Zn 2+ ] c were loaded with 1 μM FluoZin3-AM in a similar manner in HBSS. Neurons were exposed to H 2 O 2 at 100 or 300 μM for 40 min, followed by treatment with 10 μM TPEN for 10 min in some experiments. Fluorescence images were captured using an Olympus FV1000 confocal microscope or a Zeiss M510 confocal microscope (Carl Zeiss) and fluorescence intensity was measured using ImageJ software. Data were from at least three independent experiments. Single-cell imaging of Zn 2+ influx in Tet-inducible HEK293 cells conditionally expressing TRPM2 was carried out as described above for cultured hippocampal neurons using extracellular solutions (in mM: 147 NaCl, 2 KCl, 2 CaCl 2 , 1 MgCl 2 , 13 glucose and 10 HEPES, pH 7.4) containing either no ZnCl 2 or 10 μM ZnCl 2 . The cells were maintained in DMEM/F-12 (Invitrogen) supplemented with 10% fetal bovine serum and GlutaMaxTM-1 (Gibco, Life Technologies, Paisley, UK), 5 μg/ml blasticidin (Invitrogen) and 400 μg/ml zeocin (Invitrogen). Expression of TRPM2 was induced by removing blasticidin and zeocin and adding 1 μg/ml Tet to the culture medium. Cells were seeded onto 35-mm glass-bottomed dishes (FluoroDish; World Precision Instruments, Sarasota, FL, USA) and grown to ~60% confluency. Total fluorescence intensity was measured with ImageJ software. Data were from at least three independent experiments.
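The [Zn 2+ ] c readouts above are later reported relative to the basal level. As a hedged illustration only (assumed workflow, not the authors' pipeline; frame counts and intensity values are invented), the normalization can be sketched in Python:

```python
import numpy as np

# Sketch (assumptions, not the published code) of normalizing a FluoZin3
# intensity trace to its basal level, as used for the [Zn2+]c data.
# `trace` holds mean ROI intensities per frame exported from ImageJ.

def normalize_to_basal(trace, n_basal_frames=10):
    trace = np.asarray(trace, dtype=float)
    f0 = trace[:n_basal_frames].mean()  # basal fluorescence before treatment
    return trace / f0                   # relative signal; values > 1 = a rise

example = [100, 102, 98, 101, 99, 100, 100, 101, 99, 100, 150, 200, 210]
print(normalize_to_basal(example)[-1])  # ~2.1-fold of the basal level
```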
Real-time RT-PCR Total RNA was extracted from 20–30 mg of hippocampus from each mouse using the RNeasy Mini-prep Kit according to the manufacturer's instructions (Takara, Dalian, China). In all, 500–2000 ng of mRNA was reverse-transcribed using the OmniScript Kit (Takara). Real-time RT-PCR analysis was performed in quadruplicate for each sample using the QPCR SYBR Green Fluorescein Mix (Takara) and a CFX-96 Thermal Cycler (Bio-Rad, Hercules, CA, USA). Each PCR reaction contained 2 μl of diluted (1 : 10) cDNA, primers (0.4 μM each) and dNTPs (0.35 μM each). The PCR protocol comprised an initial step of 95 °C for 2 min; 40 cycles of 95 °C for 30 s, 60 °C for 30 s and 72 °C for 30 s; and a final step of 72 °C for 10 min. The forward and reverse primer sequences used were as follows: 5′-TACTGGGCACAGTGAATGG-3′ and 5′-GCAAGGCTAAGGAGAAGACC-3′ for ZnT1; 5′-ATGCTCATTAGCCTCTTCGC-3′ and 5′-CTGTCGTCACGGCTGTTCC-3′ for ZnT2; 5′-TTCCACCACTGCCACAAG-3′ and 5′-TGCTAAATACCCACCAACCA-3′ for ZnT3; 5′-TCCTCCAGACAACACCACC-3′ and 5′-AGCCAATGAGCCAAATCC-3′ for ZnT6; 5′-CAGGCTATGCTTGATGCTCT-3′ and 5′-GGTTGGACCTTGTTTAGTGTTAT-3′ for TRPM7; and 5′-AGAGTGTTTCCTCGTCCCG-3′ and 5′-CCGTTGAATTTGCCGTGA-3′ for glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Expression relative to GAPDH was estimated by the ΔΔC T method (a worked sketch of this calculation follows the Data analysis section below). The size of PCR products was confirmed by agarose gel analysis. The data from the four replicates were averaged, and the average value from the TRPM2-KO mice was normalized to that in the matched WT mice in each independent experiment before the mean value was obtained from all the mice examined. Data analysis All data are presented as mean±S.E.M. Damaged neurons identified by condensed cell bodies and nuclei in Nissl staining, NeuN-positive neurons and GFAP-positive glial cells were counted within 0.2-mm 2 areas in three randomly chosen adjacent fields. The TSQ intensity in CA1 pyramidal neurons was normalized to that in the lateral ventricles. The changes in the [Zn 2+ ] c in individual neurons and HEK293 cells were expressed relative to the basal level. Statistical analysis was performed using Student's t -test or one-way ANOVA followed by post hoc Tukey's test, with P <0.05 considered statistically significant.
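The ΔΔC T calculation referenced above can be written out explicitly. The C T values below are hypothetical and serve only to show the arithmetic (fold change = 2^−ΔΔCT):

```python
# Worked sketch of the Delta-Delta-CT method described above: the target gene
# is first normalized to GAPDH within each genotype, then TRPM2-KO expression
# is taken relative to the matched WT. CT values here are invented.

def relative_expression(ct_target_ko, ct_gapdh_ko, ct_target_wt, ct_gapdh_wt):
    d_ct_ko = ct_target_ko - ct_gapdh_ko  # KO target normalized to GAPDH
    d_ct_wt = ct_target_wt - ct_gapdh_wt  # WT target normalized to GAPDH
    dd_ct = d_ct_ko - d_ct_wt             # KO relative to matched WT
    return 2 ** (-dd_ct)                  # fold change

# Example: near-identical CTs in WT and KO give a fold change of ~1 (no change)
print(relative_expression(24.1, 18.0, 24.0, 17.9))  # -> ~1.0
```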
Abbreviations TRPM2: transient receptor potential melastatin-related 2 [Zn 2+ ] c : cytosolic Zn 2+ level ROS: reactive oxygen species AMPA: α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate AMPAR: AMPA receptor ADPR: ADP-ribose NMDA: N -methyl- D -aspartate WT: wild type CaMKII: calmodulin-dependent kinase II KO: knockout GFAP: glial fibrillary acidic protein BCCAO: bilateral common carotid artery occlusion CLT: clotrimazole Tet: tetracycline HEK293: human embryonic kidney 293 TSQ: 6-methoxy-(8- p -toluenesulfonamido)quinoline OGD-R: oxygen/glucose deprivation reperfusion ZnT: zinc transporter TRPM7: transient receptor potential melastatin-related 7 CaEGTA: Ca 2+ -saturated ethylene glycol tetraacetic acid TPEN: N,N,N',N' -tetrakis(2-pyridylmethyl)ethylenediamine HBSS: Hank's balanced salt solution PBS: phosphate-buffered saline DIV: days in vitro DAPI: 4',6-diamidino-2-phenylindole ACSF: artificial cerebral spinal fluid dHEt: dihydroethidium DMEM: Dulbecco's modified Eagle's medium GAPDH: glyceraldehyde-3-phosphate dehydrogenase PCR: polymerase chain reaction | Researchers have discovered a mechanism linked to the brain damage often suffered by stroke victims—and are now searching for drugs to block it. Strokes happen when the blood supply to part of the brain is cut off, but much of the harm to survivors' memory and other cognitive function is often actually caused by "oxidative stress" in the hours and days after the blood supply resumes. A team from the University of Leeds and Zhejiang University in China studied this second phase of damage in laboratory mice and found a mechanism in neurons that, if removed, reduced the damage to brain function. Co-author Dr Lin-Hua Jiang, of the University of Leeds' School of Biomedical Sciences, said: "Until now, much of the drug research has been focussing on the direct damage caused by the loss of blood flow, but this phase can be hard to target. The patient may not even be in the ambulance when it is happening. We have found a mechanism that is linked to the next phase of damage that will often be underway after patients have been admitted to hospital." The study, published in the journal Cell Death and Disease and supported by a strategic partnership between the University of Leeds and Zhejiang University, looked at the damage caused by the excessive production of chemicals called "reactive oxygen species" in brain tissues immediately after blood supply is re-established. In a healthy brain, there are very low levels of reactive oxygen species, but the quantity dramatically increases after a stroke to levels that are harmful to neurons. Dr Jiang said: "We identified an 'ion channel' in the membranes of neurons, called TRPM2, which is switched on in the presence of the reactive oxygen species. Basically, an ion channel is a door in the membrane of a cell that allows it to communicate with the outside world—TRPM2 opens when the harmful levels of reactive oxygen species are present and we found that removing it significantly reduced neuronal cell damage." The researchers compared the effects of strokes on mice with TRPM2 with a transgenic strain without it. "In the mice in which the TRPM2 channel does not function, the reactive oxygen species are still produced but the neurons are very much protected. The neuronal death is significantly reduced. More importantly, we observed a significant difference in brain function, with the protected mice demonstrating significantly superior memory in lab tests," Dr Jiang said.
"This study has pinpointed a very promising drug target. We are now screening a large chemical library to find ways of effectively inhibiting this channel. Our ongoing research using animal models is testing whether blockage of this channel can offer protection again brain damage and cognitive dysfunction in stroke patients," Dr Jiang said. | 10.1038/CDDIS.2014.494 |
Biology | New method reveals hidden population of regulatory molecules in cells | Nature Methods, DOI: 10.1038/nmeth.3508 Journal information: Nature Methods | http://dx.doi.org/10.1038/nmeth.3508 | https://phys.org/news/2015-08-method-reveals-hidden-population-regulatory.html | Abstract High-throughput RNA sequencing has accelerated discovery of the complex regulatory roles of small RNAs, but RNAs containing modified nucleosides may escape detection when those modifications interfere with reverse transcription during RNA-seq library preparation. Here we describe AlkB-facilitated RNA methylation sequencing (ARM-seq), which uses pretreatment with Escherichia coli AlkB to demethylate N 1 -methyladenosine (m 1 A), N 3 -methylcytidine (m 3 C) and N 1 -methylguanosine (m 1 G), all commonly found in tRNAs. Comparative methylation analysis using ARM-seq provides the first detailed, transcriptome-scale map of these modifications and reveals an abundance of previously undetected, methylated small RNAs derived from tRNAs. ARM-seq demonstrates that tRNA fragments accurately recapitulate the m 1 A modification state for well-characterized yeast tRNAs and generates new predictions for a large number of human tRNAs, including tRNA precursors and mitochondrial tRNAs. Thus, ARM-seq provides broad utility for identifying previously overlooked methyl-modified RNAs, can efficiently monitor methylation state and may reveal new roles for tRNA fragments as biomarkers or signaling molecules. Main High-throughput RNA-sequencing has provided insight into the importance of small RNAs in a wide range of biological contexts. tRNAs are among the most abundant RNAs in all organisms, so it is perhaps unsurprising that tRNA fragments and half-molecules are often abundant constituents of small-RNA sequencing libraries 1 , 2 , 3 . There is increasing evidence that these tRNA-derived RNAs can have important functions distinct from those of mature tRNAs 4 , 5 , 6 , 7 , 8 , including potential roles in disease 4 , 5 , 9 . However, tRNA fragments are likely to escape sequencing-based detection when they contain nucleoside modifications similar to those in mature tRNAs. Many tRNA modifications cause pauses or stops during reverse transcription 10 , a critical step in most RNA-seq protocols. These so-called hard-stop modifications, including m 1 A, m 1 G, N 2 , N 2 -dimethylguanosine (m 2 2 G) and m 3 C, are more prevalent in tRNAs than in other classes of RNAs and likely play important roles in the biogenesis, stability and functional activities of tRNA fragments, much as they do for mature tRNAs 11 . For example, specific modifications can target tRNAs for cleavage into half-molecules 12 , protect tRNAs from cleavage 13 , 14 , or alter the interaction of tRNA fragments with proteins such as Dicer or Piwi 2 , 3 , 8 . We developed ARM-seq to provide sensitive and specific detection of methyl-modified RNAs using RNA-seq. In ARM-seq, RNA is treated with a dealkylating enzyme, Escherichia coli AlkB, before reverse transcription in library preparation. Differential abundance analysis comparing treated to untreated samples efficiently identifies RNAs sequenced more frequently after demethylation. The known substrates of E. coli AlkB in RNA are m 1 A, documented in approximately half of all well-characterized tRNAs, and m 3 C, a less common modification documented primarily in tRNAs 15 , 16 , 17 . There is also evidence that E. 
coli AlkB can demethylate m 1 G, which is nearly as prevalent as m 1 A in tRNAs, although by a different mechanism 18 . In our analyses of budding yeast ( Saccharomyces cerevisiae ) and human cell lines, ARM-seq greatly increased the abundance and diversity of reads for tRNA fragments. Furthermore, ARM-seq fragment analyses correctly predicted the identity and position of modified residues when compared to previous documentation 17 for corresponding mature tRNAs. This approach, corroborated by primer extension experiments, correctly predicted the m 1 A modification state for the complete set of known yeast tRNAs with 94% accuracy, including several for which modifications were verified to differ from previous studies. ARM-seq also provided compelling evidence for m 1 A modifications in a large proportion of human tRNAs for which modification patterns were unknown or not documented. Thus, ARM-seq facilitates sequencing of methyl-modified RNAs that otherwise escape detection in standard sequencing protocols, and it can be used to rapidly characterize methylation patterns across diverse transcriptomes. Results ARM-seq enables detection of methylated tRNA fragments We first tested the ARM-seq methodology ( Fig. 1 and Supplementary Software ) on S. cerevisiae , where tRNAs and their modifications 17 have been most extensively characterized. Initial experiments showed that demethylation conditions used for ARM-seq specifically removed m 1 A and m 3 C modifications from target RNAs ( Supplementary Fig. 1 ). ARM-seq more than doubled the proportion of small-RNA sequencing reads from tRNA genes from 6.9% to 15.1% ( Fig. 2a and Supplementary Table 1 ). These increases corresponded almost entirely to tRNA fragments rather than full-length mature tRNAs ( Supplementary Table 2 ), indicating that a large proportion of tRNA fragments in yeast contain AlkB-sensitive modifications. In contrast, the share of reads mapping to other major classes of small RNAs diminished slightly ( Supplementary Table 1 ). Figure 1: ARM-seq facilitates sequencing of m 1 A-, m 3 C- or m 1 G-modified RNAs. In ARM-seq, removal of m 1 A, m 3 C, or m 1 G modifications by AlkB treatment facilitates the production of full-length cDNAs from previously modified templates, producing a ratio of reads in treated versus untreated samples that can be used to identify methylated RNAs. Typical positions for m 1 A, m 3 C or m 1 G modifications are indicated in the schematic showing tRNA secondary structure in canonical cloverleaf form. Figure 2: ARM-seq reveals m 1 A-modified tRNA fragments in S. cerevisiae . ( a ) Combined ARM-seq read profiles show substantially increased detection of tRNA 3′ fragments and half-molecules, where m 1 A at position 58 (m 1 A58; dotted lines) is the most prevalent hard-stop modification (iMet, initiator methionine). ( b ) ARM-seq read profiles for individual tRNAs show increases in 3′-fragment reads relative to untreated samples (for example, Thr-AGT) that predict the presence of m 1 A58 in some tRNAs (indicated by *). By contrast, ARM-seq profiles for other tRNAs show comparable or diminished 3′ reads (for example Arg-CCG), predicting unmodified A58 in these tRNAs. x axes in a and b display canonical tRNA positions, excluding extended type II variable loops (ac, anticodon).
( c ) Primer extensions targeting six corresponding mature tRNAs demonstrate that these ARM-seq results reflect the modification patterns of mature tRNAs, confirming the A58 modification state documented in Modomics (for tRNAs indicated in brown in b and d ), and providing new information on the m 1 A58 modification state (for tRNAs in gray in b and d ). Table notation (+++, ++ and −) summarizes relative magnitude of AlkB effect in primer extension or ARM-seq. ( d ) Significant ARM-seq responses (for example Arg-ACG) or lack of response (for example His-GTG) are displayed in the context of all tRNAs in yeast. tRNAs for which ARM-seq predictions were verified by primer extension are indicated in orange. P -value < 0.01 indicated by *. ARM-seq predicts the m 1 A modification state of mature tRNAs Next, we showed that ARM-seq abundance ratios (RNA-seq read counts from AlkB-treated versus untreated RNA) and read profiles detected known m 1 A tRNA modifications as effectively as traditional primer extension experiments. Thr-AGT tRNA, which is known to contain m 1 A at position 58 (m 1 A58), showed a 16-fold increase in normalized read count corresponding to fragments that include A58 ( Fig. 2b ). Primer extensions targeting mature Thr-AGT tRNA revealed a hard-stop band corresponding to m 1 A58 in an untreated sample, versus much lower band intensity in the corresponding AlkB-treated sample, consistent with demethylation of the expected m 1 A58 modification ( Fig. 2c ). By contrast, ARM-seq produced no significant effect for His-GTG ( Fig. 2b ), a true negative where an expected unmodified A58 was also confirmed by primer extension ( Fig. 2c ). Similar comparisons confirmed ARM-seq predictions for three isodecoder groups with no previous modification data (Leu-GAG, Arg-CCG and Gly-CCC) and one isodecoder group (Gln-TTG) for which A58 was previously documented as unmodified 7 but shown to be methylated by both ARM-seq and primer extension ( Fig. 2b,c ). Because ARM-seq read profiles of tRNA fragments correctly predicted the m 1 A58 modification state for the mature tRNAs tested, we examined ARM-seq results for the complete set of yeast tRNAs. On the basis of our initial verified test data, we used a twofold increase in read abundance and a DESeq2 P value <0.01 (see Online Methods ) as our threshold for identifying all significant ARM-seq responses. ARM-seq correctly predicted the modification state for 22 of 26 yeast tRNAs with documented 17 m 1 A58 modifications ( Fig. 2d , Supplementary Figs. 2 and 3 and Supplementary Table 2 ). Among the other four tRNAs, ARM-seq predicted unmodified A58 in two (Leu-TAA-1 and Lys-CTT-1), which were confirmed by primer extension ( Fig. 2d and Supplementary Fig. 4 ). The last two tRNAs expected to contain m 1 A58 (Ile-TAT-1 and Val-CAC-1) showed visible increases in read count but were not quite significant by our cutoff criteria ( Fig. 2d and Supplementary Fig. 2b ). Conversely, ARM-seq produced profiles consistent with unmodified A58 for 15 of 19 tRNAs in isodecoder groups expected to lack m 1 A58 ( Supplementary Fig. 2 ) and correctly identified three others (Gln-TTG isodecoders) for which unexpected m 1 A58 modifications were confirmed by primer extension ( Fig. 2b,c ). ARM-seq profiles for the last tRNA in this group, Ser-CGA, showed evidence for demethylation of both an expected m 3 C32 modification and an unexpected m 1 A58.
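The significance threshold above (at least a twofold increase in read abundance with DESeq2 P < 0.01) corresponds to a simple filter over DESeq2-style output. The following pandas sketch is illustrative only: the column names mimic DESeq2 conventions and the rows are invented, not the published data.

```python
import pandas as pd

# Sketch of calling significant ARM-seq responders: a twofold increase in
# treated versus untreated abundance (log2 fold change >= 1) with an
# adjusted P value < 0.01. The table below is hypothetical.

results = pd.DataFrame({
    "tRNA":           ["Thr-AGT-1", "His-GTG-1", "Gln-TTG-1"],
    "log2FoldChange": [4.0,          0.1,         2.3],
    "padj":           [1e-8,         0.9,         3e-4],
})

responders = results[(results["log2FoldChange"] >= 1.0) & (results["padj"] < 0.01)]
print(responders["tRNA"].tolist())  # ['Thr-AGT-1', 'Gln-TTG-1']
```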
ARM-seq also predicted m 1 A58 modifications for five yeast tRNAs in isodecoder groups not represented in the Modomics database of RNA modifications and predicted unmodified A58 for three others, with primer extensions confirming m 1 A58 for Leu-GAG and unmodified A58 for Arg-CCG and Gly-CCC ( Fig. 2c,d and Supplementary Fig. 2d ). The final tRNA not represented in Modomics, Pro-AGG, showed evidence for partial AlkB sensitivity that was also confirmed by primer extension ( Supplementary Fig. 4 ). In summary, for all yeast tRNAs where m 1 A58 modification state was either corroborated by documentation in Modomics or verified by primer extensions, ARM-seq correctly predicted 26 of 28 that contain m 1 A58 (93% sensitivity) and 18 of 19 that contain unmodified A58 (95% specificity), demonstrating a combined accuracy of 94% overall. ARM-seq reveals methylated human tRNA fragments The tRNA repertoire in humans is substantially more complex. Of 414 unique human mature tRNA sequences identified by the program tRNAscan-SE 19 , 20 , just 43 match entries in Modomics. ARM-seq demethylation increased the proportion of RNA-seq reads mapping to tRNAs from 2.9% to 10.1% in a B cell–derived cell line (GM12878) and from 3.9% to 13.2% in a B-cell lymphoma–derived cell line (GM05372), about 3.5-fold in each case ( Supplementary Fig. 5 and Supplementary Table 1 ). These increases again corresponded to detection of modified tRNA fragments rather than of full-length mature tRNAs ( Supplementary Table 3 ). The tRNA 3′ fragments detectable only with ARM-seq all included A58, positively predicting 15 of the 17 (88%) human isodecoder groups expected to contain m 1 A58 modifications ( Fig. 3a,b ). ARM-seq also correctly identified the only isodecoder group expected to contain unmodified A58 (Glu-CTC; Supplementary Fig. 6d ). Examining all isotypes, ARM-seq produced an unprecedented set of methylation predictions encompassing the full spectrum of human isodecoder groups ( Supplementary Fig. 6a,b and Supplementary Data ). Figure 3: ARM-seq reveals methylated RNAs derived from human cytosolic tRNAs, tRNA precursors and mitochondrial tRNAs. ( a ) Significant ARM-seq responses (indicated by *) provide evidence for m 1 A58 modification in a majority of human tRNA isotypes in two B cell–derived human cell lines, GM05372 and GM12878. ( b ) Read profiles for a subset of abundant human cytosolic tRNA fragments revealed only by ARM-seq indicate high levels of m 1 A58 modification. ( c ) Read profiles showing increased detection of tRNA precursor fragments that contain 3′ trailer or 5′ leader sequences (demarcated by dashed lines) all include the T-loop region, consistent with m 1 A58 modification of human pre-tRNAs. ( d ) Fragments of human mitochondrial tRNAs revealed by ARM-seq indicate demethylation of tRNAs known to have m 1 A9 (see mito-Asp-GTC), m 1 G9 (mito-Ile-GAT), or m 1 G37 (mito-Leu-TAG). tRNAs for which ARM-seq predictions were verified by primer extension are indicated in orange. ARM-seq identifies methyl-modified pre-tRNAs and mitochondrial tRNAs A subset of transcripts revealed by ARM-seq in the human samples preferentially mapped to tRNA genes rather than mature tRNA transcripts because they included genomically encoded sequences found only in tRNA precursors ( Fig. 3c and Supplementary Figs. 7 , 8 , 9 , 10 , 11 ). Most tRNA base modifications are thought to occur after cleavage of 5′ leader and 3′ trailer sequences from tRNA-precursor transcripts 21 .
Evidence demonstrating m 1 A58 modification of initiator methionine pre-tRNAs in yeast and exogenous pre-tRNAs in Xenopus laevis oocytes established a limited precedent for this particular modification at an earlier stage in pre-tRNA processing 22 , 23 , but direct evidence for early m 1 A58 modification has been lacking for most organisms, including humans. Surprisingly, ARM-seq identified modified precursors for most human acceptor types ( Supplementary Fig. 12 and Supplementary Table 3 ), even though pre-tRNAs are less abundant and more challenging to detect than mature tRNAs. Overall, pre-tRNAs in 33 different isodecoder families from 86 different human tRNA gene loci showed significant ARM-seq responses in at least one of the two cell lines. A large subset of these, 38 loci, showed significant ARM-seq responses in both cell lines. Primer extensions confirmed an AlkB-sensitive block corresponding to m 1 A58 in a human Leu-CAA pre-tRNA ( Supplementary Fig. 8b ). Thus, ARM-seq provides the first evidence that many human pre-tRNAs are m 1 A58 modified before 5′ leader and 3′ trailer removal, suggesting this pattern occurs broadly among eukaryotes. ARM-seq also efficiently revealed modifications in human mitochondrial tRNAs. Of 22 human mitochondrial tRNAs, 8 are currently documented 17 , showing m 1 A9, m 1 G9, m 1 G37 and m 1 A58 as the most frequent hard-stop modifications. More extensively characterized bovine mitochondrial tRNAs show at least one difference in modification relative to humans for seven of these (all except initiator methionine), underscoring the need for specific characterization of human mitochondrial tRNAs 17 , 24 . ARM-seq produced significant increases in read count, identifying modified RNAs derived from 12 mitochondrial tRNAs in GM12878 cells, 8 of which also showed significant responses in the GM05372 samples ( Fig. 3d , Supplementary Fig. 7 and Supplementary Table 3 ). In contrast to human cytosolic tRNAs, for which ARM-seq responses were attributable exclusively to m 1 A58 modification state, ARM-seq profiles for human mitochondrial tRNAs provided evidence for m 1 A9 (in mito-Asp-GTC, mito-Lys-TTT and mito-Pro-TGG), m 1 G9 (in mito-Ile-GAT and mito-Tyr-GTA), m 1 G37 (in mito-Leu-TAG and mito-Pro-TGG) and m 1 A58 (in mito-Leu-TAA). Primer extensions confirmed AlkB-mediated demethylation of m 1 A9 for mito-Pro-TGG, m 1 G9 in mito-Ile-GAT and a previously undocumented m 1 G9 in mito-Tyr-GTA ( Supplementary Fig. 8b ). Discussion ARM-seq results show that a large fraction of small RNAs in both budding yeast and human cells contain base modifications that reflect their biogenesis from modified tRNAs. Recently developed protocols provide tools to profile N 6 -methyladenosine (m 6 A)-, pseudouridine- and N 5 -methylcytidine (m 5 C)-modified RNAs using high-throughput sequencing, revealing new and unexpected targets for these modifications 25 , 26 , 27 , 28 , 29 . ARM-seq adds the capacity to profile m 1 A-, m 3 C- or m 1 G-modified RNAs, which are otherwise recalcitrant to sequencing, revealing a complex landscape of modified tRNA fragments in two evolutionarily divergent organisms. Sequences of the most abundant of these are listed ( Supplementary Table 4 ), with all 1,634 read profiles available for individual examination ( Supplementary Data ). 
The power of ARM-seq as a screen for m 1 A-, m 3 C- and m 1 G-modified RNAs can be maximized by leveraging prior knowledge from databases such as Modomics and complementary experimental approaches such as primer extension and mass spectrometry to identify the specific nature and location of modified residues. ARM-seq demonstrates remarkable accuracy in predicting previously documented tRNA modification patterns and perfect agreement with corresponding primer extensions for unexpected modifications. Furthermore, results showing that many human pre-tRNAs are modified by m 1 A demonstrate that ARM-seq can dissect complex sequential steps of RNA processing and modification, with potential application for identifying modification-based regulatory checkpoints. ARM-seq profiles revealing m 1 A- and m 1 G-modified mitochondrial tRNAs also suggest uses in investigating mitochondrial genetic diseases, in which defects in mitochondrial tRNAs often play central roles 30 . Our results, including untreated samples, do not show the same evidence for nucleotide misincorporation at expected hard-stop modifications that has been reported in other studies 31 , 32 , 33 , 34 . Although signature mismatches in sequencing data can identify modified or edited residues, ARM-seq is almost certainly more sensitive and quantitative for detection of modified RNAs because it does not depend on low-frequency reverse-transcription errors that are poorly understood and possibly context dependent. ARM-seq should facilitate the study of tRNA processing and modification in a wide range of biological settings, including investigation of new model organisms as well as comparative analyses of different developmental stages, tissue types and disease states. Such studies may illuminate new facets of tRNA biology—for example, by revealing tissue-specific functions for distinct tRNA variants 35 —or important regulatory functions for new tRNA fragments 5 . These typically overlooked small RNAs outnumbered microRNAs by fourfold or more ( Supplementary Fig. 5 ), which underscores their potential involvement in cellular signaling and regulation as well as in disease states including neurodegeneration, cancer and viral infections 4 , 5 , 9 . Whether base modifications play central roles in these activities, and whether modifications have obscured detection of members of other classes of RNAs, such as mRNAs or long noncoding RNAs, is among the many potential lines of research now accessible with this methodology. Methods Purification of E. coli AlkB. AlkB was purified after growth of E. coli BL21(DE3)pLysS (12 l) bearing plasmid JEE1167-B in the AVA421 vector 36 , 37 and 2-h IPTG induction at 37 °C to express the His 6 -3C-AlkB fusion protein. Crude lysates were made by sonication, and protein was purified by batch treatment on TALON resin, tag cleavage with His 6 -3C protease and reapplication to TALON resin. Unbound protein was concentrated (Amicon Ultra-15 centrifugal filter unit), purified using a Hi-Load 16/60 Superdex 200 gel-filtration column and then stored as concentrated protein (15.4 mg/ml, 0.77 ml) in buffer containing 20 mM Tris-HCl pH 8.0, 50% glycerol, 0.2 M NaCl and 2 mM dithiothreitol at −20 °C or at −80 °C. Freezing the enzyme did not impair activity. Growth of yeast cells and RNA isolation. S. cerevisiae cells (strain BY4741) were grown in liquid YPD medium at 30 °C to OD 600 = 1–2, and 300 OD cells were harvested and quick-frozen at −80 °C.
Bulk RNA was prepared from cell pellets using hot phenol 36 , typically yielding 2 mg RNA. Bulk RNA from three independently inoculated cultures was processed separately in subsequent treatments. Growth of human cell lines and RNA isolation. Cell pellets of human B lymphocyte–derived cell lines GM05372 and GM12878 were purchased from Coriell Institute and shipped frozen after a PBS wash. Cell lines were authenticated using microsatellite analysis and verified as free of mycoplasma infection by Coriell Institute. Upon arrival, cells were immediately placed at −80 °C for storage before RNA extraction. Isolation of total RNA from 10 8 human cells was performed using Direct-Zol RNA MiniPrep Kit (Zymo Research) with TRI Reagent (Molecular Research Center, Inc.), typically yielding 400–450 μg of total RNA. Total RNA samples from each of the two human cell lines were then split into three technical replicates for subsequent treatments. Sample size ( n = 3) was selected to demonstrate the utility of the protocol at a level of replication achievable by most researchers. Treatment of RNA with AlkB. AlkB treatment of RNA was performed in 200-μl reaction mixtures containing 50 mM HEPES KOH, pH 8, 75 μM ferrous ammonium sulfate, pH 5, 1 mM α-ketoglutarate, 2 mM sodium ascorbate, 50 μg/ml BSA, 50 μg AlkB and 50 μg bulk RNA at 37 °C for 100 min. AlkB reaction buffer was prepared fresh before each use. Reactions were stopped by addition of 200 μl buffer containing 11 mM EDTA and 200 mM ammonium acetate, followed by phenol extraction, ethanol precipitation and resuspension of the washed pellet in water. Control reactions for untreated samples were performed similarly, using AlkB storage buffer instead of AlkB. Primer extension. For primer extension, ∼ 0.7 pmol 5′- 32 P-phosphorylated primer was annealed to 0.2 μg bulk RNA in 5 μl H 2 O by heating for 3 min at 95 °C, followed by cooling to 50 °C and incubation for 1 h. Annealed primer was extended using 64 U Superscript III (Invitrogen) in a 10-μl reaction containing first strand buffer (50 mM Tris-HCl (pH 8.3, 25 °C), 75 mM KCl, 3 mM MgCl 2 ) and 1 mM each dNTP for 1 h at 50 °C, stopped by addition of 10 μl formamide loading dye and freezing on dry ice. Primer extension products were resolved by electrophoresis on a 15% polyacrylamide gel containing 4 M urea, followed by visualization of the dried gel on a phosphorimager cassette. Sequences of oligonucleotides used for primer extension are listed in Supplementary Table 5 . Size selection and preparation of RNA sequencing libraries. 50 μg of control or AlkB-treated RNA were processed using the MirVana miRNA Isolation Kit (Life Technologies), according to the manufacturer's instructions, to select for RNA <200 nt. RNA was concentrated to 25 μg using RNA Clean and Concentrate-25 (Zymo Research), and 10 μg were treated with DNase I (New England BioLabs). Following column cleanup of the RNA, 1 μg was used as input for NEBNext Small RNA Library Prep Kit for Illumina (New England BioLabs). Libraries were size selected on 2% SizeSelect agarose E-Gels using the 50-bp E-gel ladder (Life Technologies Corporation) as a marker to select for bands corresponding to libraries of RNA between 18 and 120 nt. Dilutions from column cleaned and concentrated libraries were assessed by BioAnalyzer traces using Agilent High Sensitivity DNA kit (Agilent Technologies). 
Sequencing of the libraries was performed at the University of California, Davis, DNA Technologies and Expression Analysis Core using Illumina MiSeq paired-end sequencing. FASTQ files for all sequencing runs are deposited in the NCBI Sequence Read Archive under project number SRP056032 . Mapping of sequencing reads. Reads were trimmed, with barcoding indices and adaptor sequences removed, and paired-end reads were merged using a custom Python script (SeqPrep, J. St. John, ). Only merged reads corresponding to RNAs at least 15 nt long were analyzed further. Reads were mapped to reference genomes ( Homo sapiens 2009 assembly hg19, GRCh37, or S. cerevisiae April 2011 assembly sacCer3) plus the set of mature tRNA sequences from tRNAscan-SE tRNA gene predictions for each of these genomes 19 . Mature tRNA sequences were generated to account for post-transcriptional processing steps: predicted introns were removed, a CCA sequence was added to the 3′ ends of all tRNAs and a G nucleotide was added to the 5′ end of histidine tRNAs. Each of these mature tRNA sequences was padded on both ends with 20 “N” bases to allow mapping of reads with additional end sequences. Reads were mapped to the reference genomes plus the nonredundant set of predicted mature tRNA sequences using Bowtie 2 (ref. 38 ), returning up to 100 alignments per read with default parameters. For analyses summarizing the composition of RNA-seq reads by RNA class, multiple mapping was not allowed and only the Bowtie 2 primary alignment was used (selected arbitrarily by the program when multiple features produced equal mapping scores). Each sample produced approximately 1 million mappable reads using this procedure. The proportional composition of these reads by RNA class was relatively uniform across technical replicates for the human samples, and somewhat more variable between biological replicates of the yeast samples that were derived from independently expanded cultures ( Supplementary Table 1 ). For differential expression analysis of reads mapped to either individual gene loci or mature tRNA sequences using DESeq2 analyses (described below), all best matches according to the Bowtie 2 scoring function were used. Reads showing equal mapping scores to tRNA gene loci (which represent unprocessed pre-tRNA transcripts) and predicted mature tRNA sequences were mapped exclusively to mature tRNAs. Thus, reads with equivalent mapping scores to multiple gene loci (encoding tRNAs that are identical after maturation) were mapped instead to a single mature tRNA sequence. In addition, reads mapped by this procedure to tRNA gene loci all contain features of tRNA precursors that are not found in mature tRNAs (for example, intronic sequences, 3′ trailers or 5′ leaders). These pre-tRNA features often distinguish one tRNA gene locus from another even when the mature tRNA encoded is identical. Plots of read coverage profiles for tRNAs were produced using read counts that were normalized according to size factors calculated from DESeq2 analyses (see below). Differential expression analysis. Read counts were tabulated for all reads and assigned to mature tRNAs or genomic features where mapping produced at least 10 nt of sequence overlap. Nonoverlapping RNA sequences mapped to the same annotated genomic features were labeled and counted separately (for example, nonoverlapping RNAs mapped to a genomic feature annotated as HERVH-int were labeled HERVH-int.1, HERVH-int.2, ...). 
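Before the differential expression details continue below, the mature-tRNA reference construction described in the mapping section above reduces to a few deterministic string operations. This Python sketch is an illustration under stated assumptions (invented example sequence, simplified intron handling, hypothetical function name), not the published pipeline:

```python
# Sketch of building a padded mature-tRNA reference as described above:
# remove the predicted intron, add a 5' G to histidine tRNAs, append the
# 3' CCA, and pad both ends with 20 N bases so reads carrying extra end
# sequence can still be mapped.

def mature_trna_reference(gene_seq, intron=None, isotype="Ala", pad=20):
    seq = gene_seq.replace(intron, "", 1) if intron else gene_seq  # splice intron
    if isotype == "His":
        seq = "G" + seq        # G added post-transcriptionally to His tRNAs
    seq += "CCA"               # CCA added to the 3' end of all mature tRNAs
    return "N" * pad + seq + "N" * pad

print(mature_trna_reference("GGGCGTGTAGCTCAGT", isotype="His"))
```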
Read counts for all features that exceeded a minimum threshold of 20 reads were used as input to the DESeq2 R package with default parameters 39 . DESeq2 takes into account variability between replicates and normalizes read counts to account for differences in sequencing depth between samples, reporting ARM-seq fold changes relative to untreated samples along with associated P values that are adjusted for multiple-hypothesis testing. We used a twofold increase in read abundance with a DESeq2 P value of <0.01 as our threshold for identifying all significant ARM-seq responses. A doubling of read counts in ARM-seq versus untreated samples indicates the presence of AlkB-sensitive modifications in at least half of the detected RNA molecules derived from a given tRNA, whereas larger increases indicate an even greater proportion of modified molecules. With the exception of Supplementary Table 1 , which presents raw read counts and a proportional breakdown of read mappings by RNA class that is unaffected by normalization, all read counts reported in results and figures reflect normalization using DESeq2 size factors. New tRNA naming convention. tRNA transcripts and individual gene loci are labeled using a new systematic naming convention that is designed to be more stable and informative (T.M.L. and P. Chan, unpublished data). The new tRNA naming convention echoes the systematic naming adopted for microRNAs in miRBase 40 . In brief, each unique mature tRNA transcript is named by isotype and codon (i.e., isodecoder), numbered in ascending order (for example, tRNA-Ala-AGC-1, tRNA-Ala-AGC-2, etc.), from most canonical to least canonical ('canonical' is objectively defined by the bit score given to each tRNA by tRNAscan-SE using the default general tRNA model 19 ). As with microRNAs, there are often multiple genome loci encoding identical mature tRNAs, so a secondary index number is assigned to denote specific tRNA gene loci (i.e., tRNA-Ala-AGC-1-1, tRNA-Ala-AGC-1-2, tRNA-Ala-AGC-1-3 describe different gene loci but produce identical mature tRNA transcripts). Thus, labels for mature tRNA transcripts include only the first index number, which refers to the specific unique tRNA (for example, tRNA-Ala-AGC-2), whereas labels for tRNA genes also include a second index, which refers to the locus number (i.e., tRNA-Ala-AGC-2-1). The new naming convention has been applied to all tRNAs in the Genomic tRNA Database 20 and has been adopted by the HUGO Gene Nomenclature Committee and by RNAcentral 41 . For convenience in cross-referencing, Supplementary Tables 1 and 2 also include legacy labels from the genomic tRNA database, where tRNA genes were labeled by chromosome number and order of occurrence 20 . By this new naming convention, we count 414 possible unique mature tRNAs in the GRCh37/hg19 assembly of the human genome (not including the ten tRNA predictions with undetermined anticodons). Correspondence to modifications annotated in Modomics. Predicted mature tRNA sequences were compared to those from the Modomics database (downloaded January 2015) to annotate modifications. tRNAs were labeled with annotated modifications from Modomics when these contained matching anticodons and the sequence of originating (unmodified) bases in Modomics matched those of the genomically encoded tRNAs with three or fewer nucleotide mismatches. tRNAs that did not match Modomics tRNA sequences using these criteria were labeled as “not documented.” Code availability.
The software pipeline developed for this study includes components for trimming of raw sequencing reads, merging of paired-end reads, read mapping of small RNAs (including pre-tRNAs and mature tRNAs), abundance estimation and differential expression analysis (current version available at ). Accession codes. NCBI Sequence Read Archive: FASTQ files for all sequencing runs are deposited under accession number SRP056032 . | A recently discovered family of small RNA molecules, some of which have been implicated in cancer progression, has just gotten much larger thanks to a new RNA sequencing technique developed by researchers at UC Santa Cruz. The technique, described in a paper published August 3 in Nature Methods, provides sensitive detection of small RNAs that are chemically modified (methylated) after being transcribed from the genome. The researchers used the technique to reveal an abundance of modified fragments derived from transfer RNA molecules in both yeast cells and human cells. "Transfer RNAs are some of the most numerous small RNAs in all organisms, and it turns out that cells are just littered with little pieces of them. We discovered that many of those pieces are hidden from the standard analyses due to modifications of the RNA," said first author Aaron Cozen, a project scientist in biomolecular engineering at UC Santa Cruz. Senior author Todd Lowe, professor and chair of biomolecular engineering, said the method opens up a rapidly growing area of RNA research. "With our method, there is a more than three-fold increase in the overall detection of transfer RNA fragments," he said. Transfer RNA was characterized decades ago and plays a well-defined role, together with messenger RNA and ribosomal RNA, in translating the genetic instructions encoded in DNA into proteins. The discovery of RNA interference and genetic regulation by microRNA, however, revolutionized scientists' understanding of RNA's role in gene regulation and other cellular functions. Since then, a bewildering abundance and variety of small RNA molecules has been found in cells, and scientists are still struggling to sort out what they all do. "In the past five years, we're starting to see that transfer RNAs are not just translating genes into proteins, they are being chopped up into fragments that do other things in the cell," Lowe said. "Just recently, a subset of these fragments was found to suppress breast cancer progression." Transfer RNA fragments can be detected and analyzed using high-throughput sequencing techniques. But a critical step in the RNA sequencing protocol is blocked by certain RNA modifications involving added methyl groups, and these modifications are prevalent in transfer RNAs. The UC Santa Cruz researchers and their collaborators at the University of Rochester School of Medicine developed an enzymatic method that removes those modifications before sequencing. To make the method more powerful, the UCSC team developed a computational analysis of the sequencing data that they could use to identify and map specific modifications and see how common they are. They used this methodology to accurately predict previously documented transfer RNA modification patterns in yeast. Applying the technique to human cells, they were able to document a large number of previously unmapped modifications.
"In the human genome, this particular modification had only been mapped in 10 to 15 percent of transfer RNAs, and with our method were able to map pretty much all of them. It's greatly accelerating the pace of discovery," Lowe said. Lowe noted that this project capitalized on the expertise at UC Santa Cruz in two major areas, RNA biology and genomics. His lab is affiliated with both the UC Santa Cruz Genomics Institute and the Center for Molecular Biology of RNA. | 10.1038/nmeth.3508 |
Medicine | Brain circuitry for positive vs. negative memories discovered in mice | A circuit mechanism for differentiating positive and negative associations, Nature, DOI: 10.1038/nature14366 Journal information: Nature | http://dx.doi.org/10.1038/nature14366 | https://medicalxpress.com/news/2015-04-brain-circuitry-positive-negative-memories.html | Abstract The ability to differentiate stimuli predicting positive or negative outcomes is critical for survival, and perturbations of emotional processing underlie many psychiatric disease states. Synaptic plasticity in the basolateral amygdala complex (BLA) mediates the acquisition of associative memories, both positive 1 , 2 and negative 3 , 4 , 5 , 6 , 7 . Different populations of BLA neurons may encode fearful or rewarding associations 8 , 9 , 10 , but the identifying features of these populations and the synaptic mechanisms of differentiating positive and negative emotional valence have remained unknown. Here we show that BLA neurons projecting to the nucleus accumbens (NAc projectors) or the centromedial amygdala (CeM projectors) undergo opposing synaptic changes following fear or reward conditioning. We find that photostimulation of NAc projectors supports positive reinforcement while photostimulation of CeM projectors mediates negative reinforcement. Photoinhibition of CeM projectors impairs fear conditioning and enhances reward conditioning. We characterize these functionally distinct neuronal populations by comparing their electrophysiological, morphological and genetic features. Overall, we provide a mechanistic explanation for the representation of positive and negative associations within the amygdala. Main The BLA, including lateral and basal nuclei of the amygdala 11 , receives sensory information from multiple modalities 12 , 13 , 14 , and encodes motivationally relevant stimuli 15 , 16 , 17 . Partially non-overlapping populations of BLA neurons encode cues associated with appetitive or aversive outcomes 8 , 9 . The acquisition of the association between a neutral stimulus and an aversive outcome such as a foot shock has been shown to induce long term potentiation (LTP) of synapses onto lateral amygdala neurons 3 , 4 , mediated by postsynaptic increases in α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR)-mediated currents 5 , 18 in a N -methyl- d -aspartate receptor (NMDAR)-dependent manner 19 , 20 . Similarly, increases in glutamatergic synaptic strength of inputs to BLA neurons are necessary for the formation of a stimulus–reward association 1 . Yet the similarity in neural encoding and synaptic changes induced by learning a positive or negative association and the contrasting nature of the ensuing outputs (reward-seeking or fear-related behaviours) presents an ostensible paradox: how is it possible that potentiation of synapses onto neurons in the BLA can underlie learned associations that lead to such different behavioural responses? One hypothesis is that BLA neurons project to many downstream regions, including the canonical circuits for reward and fear 14 , 30 , and the neurons that project to different targets undergo distinct synaptic changes with positive or negative associative learning. For example, BLA projections to the NAc have been implicated in reward-related behaviours 16 , 21 , 22 , while BLA projections to the CeM have been linked to the expression of conditioned fear 23 , 24 , 25 . However, the unique synaptic changes onto projection-identified BLA neurons have never been explored. 
To investigate this, we selected the NAc and CeM as candidate target regions and examined the synaptic changes onto either NAc-projecting BLA neurons (NAc projectors) or CeM-projecting BLA neurons (CeM projectors) following fear conditioning or reward conditioning ( Fig. 1 ). To identify the projection target of BLA neurons, we injected retrogradely travelling fluorescent beads (retrobeads) into either the NAc or CeM to label BLA neurons projecting axon terminals to these regions ( Fig. 1a and Extended Data Fig. 1 ). After retrobead migration upstream to BLA cell bodies, we trained mice in fear or reward conditioning paradigms wherein a tone was paired with either a foot shock or sucrose delivery. Mice in reward conditioning groups were food restricted 1 day before the conditioning session to increase motivation to seek sucrose ( Extended Data Fig. 1 ). AMPAR/NMDAR ratio, a proxy for glutamatergic synaptic strength, increases after either fear or reward conditioning in the BLA 1 , 2 , 5 , 18 . We used matched experimental parameters across groups in an acute slice preparation stimulating axons arriving via the internal capsule and performing whole-cell patch-clamp recordings in retrobead-identified NAc projectors and CeM projectors, which we observed to be topographically intermingled ( Fig. 1b and Extended Data Fig. 2 ). Figure 1: Opposite changes in AMPAR/NMDAR following fear or reward conditioning in BLA neurons projecting to NAc or CeM. a , After injecting retrobeads into NAc or CeM, animals underwent either fear or reward conditioning. b , Confocal image of retrobead-labelled BLA neurons, with schematic of stimulation and recording sites (left); region in white square is enlarged (right). DAPI is shown in blue. c – f , One-way ANOVAs were performed on AMPAR/NMDAR ratios collected after conditioning. Open circles reflect individual data points, the number of neurons is shown in each bar and representative traces for each group are below the bar. Results show mean and s.e.m. FR, food restricted. c , AMPAR/NMDAR ratio was related to training condition ( F 2,33 = 5.844, P = 0.0070) and significantly lower in the paired fear group relative to the unpaired fear group ( t 31 = 2.21, P < 0.05). d , AMPAR/NMDAR ratio was related to training condition ( F 2,31 = 6.53, P = 0.0046) and reward learners showed a greater AMPAR/NMDAR ratio than mice in the unpaired reward group ( t 29 = 3.20, P < 0.01). e , In CeM projectors, AMPAR/NMDAR ratio was related to fear conditioning ( F 2,29 = 8.72, P = 0.0012) and was greater in the paired group relative to the unpaired group ( t 27 = 3.99, P < 0.001). f , In CeM projectors, AMPAR/NMDAR ratio was altered by reward learning ( F 2,32 = 3.63, P = 0.039), and was lower in learners relative to the unpaired group ( t 30 = 2.57, P < 0.05). g , Proposed model, arrow thickness represents relative synaptic strength. CeM pr., CeM projectors; NAc pr., NAc projectors. We found that in NAc projectors, fear conditioning decreased the AMPAR/NMDAR ratio relative to controls exposed to the same number of tones and shocks, but where the tones and shocks were unpaired ( Fig. 1c ). Conversely, following the acquisition of the association between a tone and sucrose delivery, synapses on NAc projectors showed an increase in AMPAR/NMDAR ratio relative to unpaired controls that were also food restricted and received the same number of tones and volume of sucrose ( Fig. 1d ). Importantly, we also included naive and food-restricted naive groups ( Fig.
1c–f ), as food restriction itself could alter AMPAR/NMDAR ratio ( Extended Data Fig. 3 ). In contrast, synapses on CeM projectors from the paired group showed an increase in AMPAR/NMDAR ratio following fear conditioning, relative to unpaired controls ( Fig. 1e ). Following reward conditioning, CeM projectors from mice that learned the tone–sucrose association showed a decrease in AMPAR/NMDAR ratio relative to unpaired controls ( Fig. 1f ). In addition to AMPAR/NMDAR ratios, we also examined paired-pulse ratios, and did not detect any differences between groups ( Extended Data Fig. 3 ), suggesting a postsynaptic mechanism of plasticity. These results support a model wherein NAc and CeM projectors undergo opposing changes in synaptic strength following fear and reward learning, such that relative synaptic strengths onto CeM projectors increase following fear conditioning and decrease following reward learning. Conversely, relative synaptic strengths onto NAc projectors decrease following fear conditioning and increase following reward learning ( Fig. 1g ). However, these findings raise new questions. First, is there a causal relationship between the activity of BLA neurons projecting to the NAc and reward-related behaviours, and between the activity of CeM projectors and fearful or aversive behaviours? Second, what defining features of NAc and CeM projectors might endow them with their opposing functions? To test whether there is a causal relationship between populations of projection-identified BLA neurons and behaviour, we first used a retrogradely infectious rabies viral (RV) vector 26 to express channelrhodopsin-2 (ChR2) fused to a fluorescent reporter (Venus), or a control virus carrying Venus alone (RV-Venus), in BLA neurons projecting to either the NAc or the CeM ( Fig. 2a ). Figure 2: Within the BLA, photostimulation of NAc or CeM projectors causes positive or negative reinforcement, respectively. a , After rabies virus injection into NAc or CeM, animals were tested using intracranial self-stimulation (ICSS) and real-time place avoidance (RTPA) assays. b , Representative traces of nose poke responses during ICSS. c , The relative number of active nose pokes was related to the experimental group (one-way ANOVA, F 2,18 = 10.50, P = 0.0012), and was significantly increased by photostimulation of NAc projectors (NAc pr.) in comparison to controls ( t 16 = 4.00, P < 0.01). CeM pr., CeM projectors. d , Representative locomotor trace from an animal receiving CeM projector photostimulation during RTPA. e , The percentage of time spent in the photostimulation-paired zone was related to the experimental group (one-way ANOVA, F 2,45 = 4.38, P = 0.019) and was significantly decreased by photostimulation of CeM projectors in comparison to controls ( t 43 = 2.25, P < 0.05). Results show mean and s.e.m. Following verification of functional ChR2 expression and retrograde transport from either the NAc or CeM back to the BLA ( Extended Data Fig. 7 ), we tested animals that had received injections of RV-ChR2–Venus or RV-Venus into either the NAc or CeM, and implantation of an optical fibre over the BLA, on an intracranial self-stimulation (ICSS) task ( Extended Data Fig. 4 ). Consistent with previous reports that photostimulation of BLA axons in the NAc produced ICSS 21 , we observed ICSS upon photostimulation of BLA cell bodies projecting to NAc ( Fig. 2b, c ).
Given that we could not elicit robust nose-poke responses for CeM projector photostimulation, we next tested CeM projectors in a closed-loop real-time place avoidance assay (RTPA), where an animal freely explored two chambers, in one of which the mouse received photostimulation of CeM projectors. Photostimulation of CeM projectors caused robust avoidance of the light-paired side ( Fig. 2d, e ). Consistent with our synaptic data and previous studies, these experiments demonstrate a causal relationship between NAc projectors and positive reinforcement, and between CeM projectors and negative reinforcement (avoidance). We went on to probe the necessity of NAc or CeM projectors in mediating reward or fear conditioning. The acquisition of fear 19 and reward 1 associations is mediated by an NMDAR-dependent LTP mechanism thought to require simultaneous glutamate release and postsynaptic depolarization. Thus, we tested whether projection-specific hyperpolarization during the presentation of the unconditioned stimulus could impair learning in a valence-specific manner. To this end, we bilaterally infused an adeno-associated viral vector carrying halorhodopsin fused to an enhanced yellow fluorescent protein, or a no-opsin control (eYFP only), expressed in a Cre-dependent manner (double-floxed inverted open reading frame) into the BLA ( Extended Data Fig. 5 ). We then bilaterally infused a retrogradely travelling canine adenovirus carrying Cre recombinase (CAV-Cre) into either the NAc or CeM ( Fig. 3a ). We illuminated the BLA with yellow light only during conditioned–unconditioned stimulus pairing, that is, during shock or sucrose consumption. Photoinhibition of CeM projectors during the conditioned–unconditioned stimulus pairing impaired conditioned freezing ( Fig. 3b and Extended Data Fig. 6 ) and enhanced conditioned reward seeking ( Fig. 3c ). Figure 3: Photoinhibition of CeM projectors impairs fear learning and enhances reward learning. a , Halorhodopsin (NpHR) was expressed bilaterally either in NAc- or CeM-projecting BLA neurons using a dual-virus recombination strategy. Mice underwent fear or reward conditioning and yellow light was delivered to the BLA during each unconditioned stimulus. b , Time course of percentage freezing and average freezing in trials 6–8 (inset). Average freezing was related to experimental condition (one-way ANOVA, F 2,40 = 6.68, P = 0.0033) and was significantly reduced by photoinhibition of CeM projectors, relative to controls ( t 38 = 3.46, P < 0.01; see inset). c , Time course of normalized number of port entries relative to cue presentation during reward conditioning and average number of normalized port entries (<8 s latency, inset). z -score of port entries was related to the experimental condition (one-way ANOVA, F 2,31 = 9.23, P = 0.0008) and was significantly increased by photoinhibition of CeM projectors, relative to controls ( t 29 = 4.11, P < 0.001). Results show mean and s.e.m. Next, because there was no topographical separation between NAc and CeM projectors ( Fig. 1b ), which are both glutamatergic 11 , 21 , 27 , 28 , we searched for distinguishing characteristics of these functionally distinct neuronal populations. As the BLA is known to have some heterogeneity in electrophysiological and morphological characteristics 11 , 29 , we compared these features between NAc and CeM projectors. While we did not observe differences in action potential half-width ( Fig. 4a, b ), threshold to spike ( Fig. 4c, d ), or intrinsic excitability ( Fig.
4e, f ), we did observe a significant difference in action potential accommodation ( Fig. 4g and Extended Data Fig. 7 ). To investigate the morphological features of these functionally distinct populations, we reconstructed projection-identified BLA neurons. We observed greater distal dendritic branching in CeM projectors ( Fig. 4h, i ), though NAc projectors and CeM projectors contained both pyramidal and stellate cells ( Fig. 4i inset and Extended Data Fig. 8 ). Figure 4: Electrophysiological, morphological and transcriptional profiles of NAc and CeM projectors. a , Population average of action potential traces. CeM pr., CeM projectors; NAc pr., NAc projectors. b , No detectable difference in action potential half-width (unpaired t -test, t 20 = 1.82, P = 0.085). Open circles represent individual data points. c , Representative trace from action potential threshold detection protocol. d , No detectable difference in action potential threshold between NAc and CeM projectors (unpaired t -test, t 20 = 1.05, P = 0.31). e – g , Representative trace ( e ) from current injection protocol to determine firing rate responses ( f ) and action potential probability ( g ) over time, which was different between NAc and CeM projectors (interaction, two-way ANOVA F 9,180 = 2.32, P = 0.017) in the first 100 ms of current injection ( t 200 = 4.55, P < 0.001). h , Representative reconstructions showing dendritic branching pattern in the coronal plane. Cells were classified into pyramidal or stellate based on the presence of an apical tuft (right). i , Sholl analysis of neuron reconstructions. j , Schematic of transcriptome profiling. k , Candidate genes identified as differentially expressed between NAc and CeM projectors at a 0.01 quantile fold-change threshold, corresponding to a false discovery rate (FDR) of 26.2% (see also Extended Data Fig. 9) across two independent repetitions of RNA-seq ( n = 8 mice for NAc pr., n = 9 mice for CeM pr. in total). Results show mean and s.e.m. Finally, we compared the transcriptomes of BLA neurons projecting to the NAc or CeM ( Fig. 4j and Extended Data Fig. 9 ). Following retrobead injections into the NAc or CeM, we dissociated labelled BLA neurons and performed RNA-seq ( Fig. 4j ). RNA-seq revealed relatively few candidate genes expressed differentially between NAc and CeM projectors, consistent with the idea that these two populations are closely related ( Fig. 4k and Extended Data Fig. 9 ). However, these differentially expressed candidate genes may underpin mechanisms that contribute to the wiring of these distinct populations during development and/or that rapidly bias gain modulation of synaptic transmission during valence-specific learning. Taken together, NAc and CeM projectors are populations of BLA neurons that undergo opposing synaptic changes following fear or reward conditioning, and optogenetic manipulation of NAc and CeM projectors reveals causal relationships with valence-specific behaviours. Further, we have identified distinguishing electrophysiological, morphological and gene expression characteristics that facilitate further investigation. Our study suggests that the indelible nature of valence encoding observed in amygdala neurons 10 is mediated by connectivity, and the topographical intermingling of these populations may serve to facilitate interaction 30 . In conclusion, the BLA is a site of divergence for circuits mediating positive and negative emotional or motivational valence.
Methods Animals and stereotaxic surgery Adult wild-type male C57BL/6 mice (248 mice), aged 6–12 weeks (8.3 ± 1.5 weeks; Jackson Laboratory or Charles River Laboratories for RNA-seq) were used for experiments. Following surgery, animals were maintained on a reverse 12 h light/dark cycle with ad libitum food and water. All procedures of handling animals were in accordance with the guidelines from the NIH, and with approval of the MIT or Harvard Medical School Institutional Animal Care and Use Committee. All surgeries were conducted under aseptic conditions using a digital small animal stereotaxic instrument (David Kopf Instruments). Mice were anaesthetized with isoflurane (5% for induction, 1.5–2.0% afterward) in the stereotaxic frame for the entire surgery and their body temperature was maintained with a heating pad. In order to label basolateral amygdala (BLA) neurons projecting to the nucleus accumbens (NAc), about 70 nl of red or green retrobeads (RetroBeads, Lumafluor Inc.) were injected into the NAc at stereotaxic coordinates from bregma: +1.4 mm anteroposterior, ±0.87 mm mediolateral and −4.7 mm dorsoventral. In order to label BLA neurons projecting to the medial part of the central amygdala (CeM), 50 nl of retrobeads (different colour from NAc injection) were injected in the contralateral CeM (−0.75 mm anteroposterior, ±2.35 mm mediolateral and −5.08 mm dorsoventral). To test the causal role of BLA neurons projecting to NAc or to CeM in reward and aversive behaviours, we injected a retrogradely travelling rabies virus carrying channelrhodopsin2–Venus fusion protein (RV-4ChopV(B19G)) or Venus 31 , 32 ( Fig. 2 and Extended Data Fig. 4 ) into NAc (250 nl) or CeM (180 nl). These virus constructs are referred to as RV-ChR2–Venus and RV-Venus in the manuscript. All animals receiving control virus injection were pooled into one control group. Injections were performed using glass micropipettes (1–5 µl; Drummond Scientific) pulled with a puller (Narishige PC-10) mounted on 10-µl microsyringes (Hamilton Microlitre 701; Hamilton Co.) to deliver the retrobeads at a rate of 2 nl s −1 or virus at a rate of 0.5 to 1 nl s −1 , using a microsyringe pump (UMP3; WPI) and controller (Micro4; WPI). After completion of the injection, the pipette was raised 100 µm and left for an additional 10 min to allow diffusion of the retrobeads or the virus at the injection site and then slowly withdrawn. In a separate group of experiments ( Fig. 3 ), adeno-associated virus serotype 5 (AAV 5 ) carrying halorhodopsin 3.0 fused to enhanced yellow fluorescent protein (eYFP) in a double-floxed inverted open reading frame (DIO) under the control of EF1α promoter (AAV 5 -EF1α-DIO-eNpHR3.0-eYFP) or a corresponding fluorophore control (AAV 5 -EF1α-DIO-eYFP) was injected bilaterally into the BLA (400 nl in each hemisphere) at stereotaxic coordinates from bregma: −1.60 mm anteroposterior, ±3.35 mm mediolateral and −4.90 mm dorsoventral. Concurrently, canine adenovirus 2 (CAV2) 33 , 34 carrying Cre-recombinase (or a mixture of CAV2-Cre and herpes simplex virus carrying Cre-recombinase and the fluorescent reporter mCherry under the control of EF1α promoter) was injected into the NAc (300 nl in each hemisphere) or CeM (150 nl in each hemisphere) at the same coordinates described above. We summarize this strategy with the designation ‘Cre-DIO’. All animals receiving fluorophore control injection were pooled into one control group. In this manuscript, CAV refers to CAV2, and NpHR refers to eNpHR3.0. 
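For readers keeping track of the many injection targets above, the stereotaxic coordinates can be summarized in a small lookup table. The following Python sketch is purely illustrative — the original study did not publish such code, and the table structure and names are ours — but every value is taken from the preceding paragraph.

```python
# Stereotaxic targets reported above (mm from bregma: anteroposterior,
# mediolateral, dorsoventral) and retrobead injection volumes (nl).
# This consolidation is illustrative, not part of the original analysis code.
STEREOTAXIC_TARGETS = {
    "NAc_retrobeads": {"ap": +1.40, "ml": 0.87, "dv": -4.70, "volume_nl": 70},
    "CeM_retrobeads": {"ap": -0.75, "ml": 2.35, "dv": -5.08, "volume_nl": 50},
    "BLA_AAV":        {"ap": -1.60, "ml": 3.35, "dv": -4.90, "volume_nl": 400},
}

def infusion_time_s(volume_nl: float, rate_nl_per_s: float) -> float:
    """Time needed to deliver a given volume at a constant pump rate."""
    return volume_nl / rate_nl_per_s

# Retrobeads were delivered at 2 nl/s, so the 70 nl NAc injection takes ~35 s;
# virus was delivered more slowly, at 0.5-1 nl/s.
print(infusion_time_s(STEREOTAXIC_TARGETS["NAc_retrobeads"]["volume_nl"], 2.0))  # 35.0
```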
In order to deliver light to the BLA, a 300-µm diameter optic fibre (0.37 numerical aperture (NA)) glued to a 1.25 mm ferrule was implanted above the BLA (−1.64 mm anteroposterior, ±3.35 mm mediolateral and −4.25 mm dorsoventral). One layer of adhesive cement (C&B metabond; Parkell) followed by cranioplastic cement (Dental cement; Stoelting) was used to secure the fibre ferrule to the skull and, 20 min later, the incision was sutured. Blue light ( ∼ 20 mW, ∼ 283 mW per mm 2 at the fibre tip) was delivered using a 473 nm laser and yellow light ( ∼ 10 mW, ∼ 141 mW per mm 2 at the fibre tip) was delivered using a 589/593 nm laser. After surgery, body temperature was maintained using a heat lamp until the animal fully recovered from anaesthesia. Behavioural experiments followed by ex vivo electrophysiological recordings were conducted approximately 2 weeks after surgery. Behavioural experiments requiring expression via rabies virus were conducted 4–5 days after surgery ( Fig. 2 ). Behavioural experiments requiring viral expression via the Cre-DIO strategy were conducted 3 months after surgery ( Fig. 3 ). Pavlovian conditioning Fear conditioning The mice were conditioned using behavioural hardware boxes (MedAssociate) placed in custom-made sound-attenuating chambers. Each box contained a modular test cage with an electrifiable grid floor and a speaker. On the day of conditioning, all animals were exposed to two 30 min conditioning sessions separated by 20 min in the home cage ( Extended Data Fig. 1e ). During the first session, mice in the unpaired group received six tone presentations, while mice in the paired group received no cues. During the second session, mice in the unpaired group received six foot shocks whereas mice in the paired group received six tones, each co-terminating with a shock (paired fear). During each session, a period of acclimation lasting 200 s preceded the presentation of the first tone or foot shock. The conditioned stimulus consisted of a 2 kHz, 80 dB pure tone lasting 20 s. The unconditioned stimulus consisted of scrambled 1.5 mA foot shock lasting 2 s. During paired conditioning, the conditioned and unconditioned stimuli were co-terminating. Cue presentations were separated by 70 to 130 s. Following conditioning, mice were returned to their home cages for ∼ 24 h until preparation of brain slices ( Extended Data Fig. 1e ). Videos of the mice were acquired during the second session for the paired group and both sessions for the unpaired group, to allow freezing quantification during the tone (an infrared LED was switched on during the period of the tone to synchronize CS with the video). Measurement of fear behaviour Percentage time freezing during conditioned stimulus presentation ( Extended Data Fig. 1f ) was quantified by manual scoring with the help of custom-written software in MATLAB. For each of the six trials, a segment of the video containing 20 s of the tone and the 20 s preceding the tone was extracted frame by frame and exported into MATLAB. Two additional segments of the same duration were extracted starting at randomly chosen points in the video. The sequence of these eight trials was then randomized and presented to a human scorer who was blind to the trial number. The scores were then re-assembled and the percentage of freezing during the conditioned stimulus was calculated. Frame numbers of the video containing the tone were confirmed by generating a heat map of the intensity of the pixels containing the infrared LED. 
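The freezing quantification just described hinges on presenting video segments to the scorer in a randomized order so that scoring remains blind. A minimal Python sketch of that shuffle-and-reassemble logic follows; the published analysis used custom MATLAB code (available from the authors on request), so the function names, data layout and generalization to arbitrary numbers of segments here are illustrative assumptions.

```python
import random

def blind_score_trials(trial_segments, baseline_segments, score_fn, seed=None):
    """Randomize presentation order of video segments for blind scoring,
    then restore the scores to their original trial order.

    trial_segments:    segments containing the 20 s tone plus the preceding 20 s
    baseline_segments: segments starting at randomly chosen points in the video
    score_fn:          callable mapping a segment to a freezing score
                       (standing in for the blinded human scorer)
    """
    rng = random.Random(seed)
    labelled = [("trial", i, seg) for i, seg in enumerate(trial_segments)]
    labelled += [("baseline", i, seg) for i, seg in enumerate(baseline_segments)]
    rng.shuffle(labelled)  # scorer sees segments in random order, blind to identity

    scores = [(kind, idx, score_fn(seg)) for kind, idx, seg in labelled]

    # Re-assemble only the tone-trial scores, back in their original order.
    trial_scores = [s for kind, idx, s in sorted(
        (t for t in scores if t[0] == "trial"), key=lambda t: t[1])]
    return trial_scores
```

In the paradigm above, each of the six trials contributes one tone segment plus two additional randomly chosen segments, so the scorer is presented with a shuffled sequence of eight segments per trial.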
Reward conditioning Twenty hours after the start of food restriction, the mice were conditioned in sound-attenuating boxes (MedAssociates). Each box contained a modular test cage assembled with a sucrose delivery port, a speaker and a house light placed under the sucrose port. The conditioned stimulus consisted of a compound light–tone cue, terminated by a beam break 400 ms after port entry detection. If the mouse did not enter the port, the tone lasted for 30 s. The tone was a 5 kHz, 80 dB pure tone. For the paired group, a small volume (15 µl) of a sucrose solution (30%) was delivered into the port 1 s after conditioned stimulus onset, only if the mouse had entered the port after the onset of the previous conditioned stimulus, to prevent sucrose accumulation. For the unpaired group, no sucrose was delivered to the port. The inter-trial interval (ITI) of the conditioned stimulus presentations was chosen randomly from a list at runtime and was 143 ± 40 s for the first 20 conditioned stimulus presentations and 108 ± 32 s for the 100 subsequent conditioned stimulus presentations. The conditioning session was terminated after 120 sucrose deliveries and lasted for about 4 h. Performance of the mouse was assessed during the second half of a conditioning session. After the training session, the unpaired mice received the same amount of sucrose as the paired mouse (in their home cage; 15 µl × 120 = 1.8 ml, Extended Data Fig. 1e ). In the paired group, if the mice did not consume all the sucrose, the remaining volume was made available to the mouse in its home cage after the conditioning session. After 20 min in their home cages, all the animals had ad libitum food until the electrophysiology experiment the following day. Learning criterion for reward learning A mouse was considered to have acquired the task (and classified as a learner) if the number of port entries from +1 s to +8 s relative to conditioned stimulus onset ( Extended Data Fig. 1g , black line) was significantly higher than the number of port entries from −1 s to −8 s relative to conditioned stimulus onset ( Extended Data Fig. 1g , grey line). Statistical significance was tested using a one-sided Wilcoxon rank-sum test (MATLAB) and the threshold for learning was set at P < 0.001. Conditioning for photoinhibition experiments In these experiments, the same animals experienced reward conditioning followed by fear conditioning after 1–4 weeks. Pure tones of 2 and 10 kHz were used as conditioning stimuli in these two paradigms; the use of these two tones was counterbalanced across animals. Animals were tethered using a dual commutator for light delivery. During fear conditioning, yellow light was delivered 1 s before shock onset until 1 s after shock termination (shock intensity = 0.5 mA). During reward learning, yellow light was delivered for 7.5 s immediately following a port entry during conditioned stimulus (tone) presentation. Ex vivo electrophysiology Brain tissue preparation About 2 weeks after retrobead injections in NAc and CeM, 114 mice were anaesthetized with 90 mg kg −1 pentobarbital sodium and perfused transcardially with 10 ml of modified artificial cerebrospinal fluid (ACSF) at ∼ 4 °C saturated with 95% O 2 and 5% CO 2 , containing: 75 mM sucrose, 87 mM NaCl, 2.5 mM KCl, 1.3 mM NaH 2 PO 4 , 7 mM MgCl 2 , 0.5 mM CaCl 2 , 25 mM NaHCO 3 and 5 mM ascorbic acid (pH 7.25–7.4, 327 ± 3 mOsm). The brain was then extracted and glued (Roti coll 1; Carl Roth GmbH) on the platform of a semiautomatic vibrating blade microtome (VT1200; Leica).
The platform was then placed in the slicing chamber containing modified ACSF at 4 °C. Coronal sections of 300 µm containing the NAc, CeM or BLA were collected in a holding chamber filled with ACSF saturated with 95% O 2 and 5% CO 2 , containing: 126 mM NaCl, 2.5 mM KCl, 1.25 mM NaH 2 PO 4 , 1.0 mM MgCl 2 , 2.4 mM CaCl 2 , 26.0 mM NaHCO 3 , 10 mM glucose (pH 7.25–7.4, 298 ± 2 mOsm). Recordings were started 1 h after slicing and the temperature was maintained at approximately 31 °C both in the holding chamber and during the recordings. All retrobead injection sites were checked and imaged with a camera (Hamamatsu) attached to the microscope (BX51; Olympus, Extended Data Fig. 1 ). The slice images were registered to the mouse brain atlas (Paxinos and Watson) and the centre of the injection was taken at the brightest point of the fluorescence. Some of the retrobead injections had dorsoventral leaks. In this case, the centre of the injection was taken as the brightest fluorescent point in the target structure ( Extended Data Fig. 1 ). If the injection site was outside NAc or CeM, projectors corresponding to this injection were not recorded in the BLA. In addition, all the CeM injections were overlaid on the mouse atlas. CeM projector recordings collected from animals with injection sites that had leaks into CeC and/or CeL were discarded. Whole-cell patch-clamp recording Recordings were made from visually identified neurons containing retrobeads. Patched cells were filled with Alexa Fluor 350 and biocytin, visualized and superimposed with retrobead fluorescence to confirm whether the patched cell was retrobead positive. For measuring AMPAR/NMDAR ratio, brain slices containing the BLA were then placed in the recording chamber perfused with ACSF containing 100 µM of the γ-aminobutyric A receptor (GABA A R) antagonist picrotoxin (R&D systems). Picrotoxin was not used to assay passive membrane properties from NAc- and CeM-projecting BLA neurons. A bipolar stimulating electrode ( ∼ 80 µm spacing between tips) was placed in the amygdala–striatal transition zone containing internal capsule fibres ( Extended Data Fig. 2 ). Electric stimulation intensity was between 0.01 and 0.2 mA. For electrophysiological characterization of NAc and CeM projectors ( Fig. 4a–g ) and confirmation of the health of rabies-virus-transduced cells ( Extended Data Fig. 7 ), picrotoxin was not added to the ACSF. Voltage-clamp recordings were made using glass microelectrodes (4–6 MΩ) pulled with a horizontal puller (P-1000) and filled with a solution containing: 120 mM caesium methanesulphonate, 20 mM HEPES, 0.4 mM EGTA, 2.8 mM NaCl, 5 mM tetraethylammonium chloride, 2.5 mM MgATP, 0.25 mM NaGTP, 8 mM biocytin and 2 mM Alexa Fluor 350 (pH 7.3, 283 mOsm). The cells were first clamped at −70 mV to determine optimal intensity for the electric stimulation of the internal capsule. Current-clamp recordings to characterize electrophysiological properties of NAc and CeM projectors were made using similar glass microelectrodes (4–6 MΩ) filled with a solution containing: 125 mM potassium gluconate, 20 mM HEPES, 10 mM NaCl, 3 mM MgATP, 8 mM biocytin and 2 mM Alexa Fluor 350 (pH 7.3; 283 mOsm). Recorded signals were amplified using a Multiclamp 700B amplifier (Molecular Devices). Analogue signals were digitized at 10 kHz using a Digidata 1440 and pClamp9 software (Molecular Devices). ACSF and drugs were applied to the slice via a peristaltic pump (Minipuls3; Gilson) at 3 ml per min.
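As a concrete illustration of the AMPAR/NMDAR ratio measurement used throughout (and detailed in the next paragraphs), here is a minimal numpy sketch of the subtraction approach: the NMDAR component is obtained by subtracting the average AMPAR-only EPSC, recorded after AP5 application, from the compound EPSC at +40 mV. The peak-detection convention and array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ampar_nmdar_ratio(compound_epsc, ampar_epsc):
    """Estimate the AMPAR/NMDAR ratio from EPSCs recorded at +40 mV.

    compound_epsc: mean EPSC trace before AP5 (AMPAR + NMDAR current), 1-D array
    ampar_epsc:    mean EPSC trace after bath application of 50 uM AP5, 1-D array
    Both traces are assumed baseline-subtracted and identically sampled (10 kHz).
    """
    compound = np.asarray(compound_epsc, dtype=float)
    ampar = np.asarray(ampar_epsc, dtype=float)
    # NMDAR current = compound current - average AMPAR current
    nmdar = compound - ampar
    # Peak amplitudes; at +40 mV the evoked currents are outward, so take maxima.
    return ampar.max() / nmdar.max()
```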
All recordings were performed blind to the performance of the animal, and a subset of the data was obtained blind to the behavioural conditioning group of the animal. We recorded the position of the cells within the BLA ( Extended Data Fig. 2 ) and the placement of the stimulating electrode relative to the BLA. There was no observable difference in either of these parameters across slices obtained from animals in different behavioural conditioning groups. In order to obtain the AMPAR/NMDAR ratio, the cell was first voltage clamped at +40 mV. Once we obtained a stable baseline excitatory post-synaptic current (EPSC) amplitude in response to internal capsule fibre stimulation (compound AMPAR + NMDAR current), we bath applied the NMDAR antagonist AP5 ( d -(-)-2amino-5-phosphonopentanoate; R&D systems) at a concentration of 50 µM. AMPAR EPSCs were recorded starting from 5 min after the action of AP5. NMDAR current was obtained by subtracting the average EPSC trace of the AMPAR current from the compound current. Each group in Fig. 1 included 9–13 neurons recorded from 6–9 mice. Histology The location of all recorded neurons was checked after the recording. Co-localization of Alexa Fluor 350 and retrobeads was confirmed at the end of the recording, and double-checked with confocal microscopy for the cells that were recovered with streptavidin staining. For each experiment, the slices containing a retrobead injection site (NAc and CeM) or a recorded neuron (BLA) were fixed overnight at 4 °C in 4% paraformaldehyde (PFA), and then kept in phosphate buffered saline (PBS). Slices containing patched neurons were incubated for 2 h in streptavidin-CF405 (2 mg ml −1 , dilution 1:500; Biotium), mounted on microscope slides with PVA-DABCO and imaged under the confocal microscope ( Extended Data Fig. 3 ). Data analysis Offline analysis of AMPAR/NMDAR ratios, paired-pulse ratios and electric properties was performed using Clampfit (Molecular Devices) and MATLAB software written by P.N. Membrane properties including access resistance of the cell were computed by a MATLAB implementation of the Q-method 35 . All custom-written software is available upon request. In vivo optogenetic behaviour Light delivery For optical stimulation during behavioural assays ( Fig. 2 ), a 473 nm (blue light) or 589/593 nm laser (yellow light) (OEM Laser Systems) was connected to a patch cord with a pair of FC/PC connectors in each end (Doric). This patch cord was connected through a fibre optic rotary joint (which allows free rotation of the fibre; Doric) with another patch cord with an FC/PC connector on one side and a ferrule connection on the other side (matching the size of the ferrule glued to the optic fibre implanted in the mouse). The optic fibre implanted in the mouse (300 µm diameter, 0.37 NA) was connected to the optic patch cord using ceramic mating sleeves (PFP). Blue light was delivered at 20 mW in 20 Hz, 5 ms light pulses. Yellow light was delivered at 10 mW, for 4 s during fear conditioning and 7.5 s during reward conditioning. Laser output was modulated with a Master 8 pulse stimulator (A.M.P.I.). Onset of laser pulses was determined by behavioural hardware (MedPC Associates). Intracranial optical self-stimulation (ICSS) 4.5 days after surgery, mice were food-restricted overnight. 
Immediately before the start of each session, fibre optic implants were connected to a patch cord and the mouse was placed in a conditioning chamber equipped with active and inactive nose-poke ports directly below two cue lights, as well as auditory stimulus generators and video cameras. Mice were given two self-stimulation sessions on two consecutive days (5.5 and 6.5 days after surgery) in which they could respond freely at either nose-poke port. On day 1 both nose-poke ports were baited with a crushed cereal treat to facilitate initial investigation. The start of each 2 h session was indicated by the illumination of both nose-poke ports and the onset of low-volume white noise to mask unrelated sounds. Each nose poke in the active port resulted in light stimulation of BLA projectors (60 pulses, 20 Hz, 5 ms pulse duration). Concurrently, the cue-light above the respective port was illuminated and a distinct 1-s tone was played for each nose poke (1 kHz and 1.5 kHz counterbalanced), providing a visual and auditory cue whenever a nose poke occurred. Active and inactive nose-poke time stamps were recorded using Med-PC software and day 2 data were analysed using MATLAB and Microsoft Excel. Real-time place avoidance (RTPA) The RTPA chamber was constructed from transparent plastic (50 × 53 cm) and divided into two equal compartments. One of these was assigned as the photo-stimulated zone (counterbalanced between animals). At the start of the 1 h session, individual mice were placed in the non-stimulated side of the chamber. Every time the mouse crossed to the side of the chamber paired with photostimulation, 20 Hz ( ∼ 20 mW, 5 ms pulse duration) laser stimulation was delivered until the mouse crossed back into the non-stimulated side. Ethovision XT video tracking software (Noldus Information Technologies) was used to track the animal and control the onset and offset of light pulse trains. Data were subsequently analysed using MATLAB and Microsoft Excel software. Histology After optogenetic experiments, all mice were anaesthetized with pentobarbital sodium, and transcardially perfused with ice-cold Ringer’s solution followed by ice-cold 4% PFA in PBS (pH 7.3). Extracted brains were fixed in 4% PFA overnight and then equilibrated in 30% sucrose in PBS. 40-µm thick coronal sections were sliced using a sliding microtome (HM430; Thermo Fisher Scientific) and stored in PBS at 4 °C until they were processed for histology. Sections were then incubated with a DNA-specific fluorescent probe (DAPI: 4′,6-diamidino-2-phenylindole (1:50,000)) for 30 min, and finally washed with 1× PBS followed by mounting on microscope slides with PVA-DABCO. Imaging Confocal fluorescence images were acquired on an Olympus FV1000 confocal laser scanning microscope using a 10×/0.40 NA objective for imaging viral injections and fibre placements, or 40×/1.30 NA or 60×/1.42 NA oil-immersion objectives for imaging streptavidin-CF405-stained neurons. The centre of the viral injection was taken at the brightest fluorescent point along the anteroposterior, mediolateral and dorsoventral axes. The tip of the fibre was determined by the ∼ 50-µm thick gliosis generated by the fibre. Neurons recovered from the streptavidin staining ( ∼ 60% of the cells recorded in whole-cell configuration) were imaged covering the whole dendritic and axonal arborization contained in the slice. Neuron reconstructions Imaris software (Bitplane Inc.) was used to reconstruct neurons from z -stacks of confocal images and to perform Sholl analysis 36 .
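Sholl analysis counts dendritic crossings of concentric spheres of increasing radius centred on the soma. The study performed this in Imaris; the short self-contained sketch below only illustrates the idea on a reconstruction represented as straight line segments, a data format that is our assumption, and it uses the endpoint-distance approximation noted in the comments.

```python
import numpy as np

def sholl_intersections(segments, soma, radii):
    """Count dendrite crossings of concentric spheres centred on the soma.

    segments: array of shape (n, 2, 3) - start and end points of dendritic
              line segments from a reconstruction (illustrative format)
    soma:     (3,) coordinates of the soma centre
    radii:    1-D array of sphere radii, e.g. np.arange(10, 300, 10) in um
    """
    segs = np.asarray(segments, dtype=float)
    d_start = np.linalg.norm(segs[:, 0] - np.asarray(soma), axis=1)
    d_end = np.linalg.norm(segs[:, 1] - np.asarray(soma), axis=1)
    lo, hi = np.minimum(d_start, d_end), np.maximum(d_start, d_end)
    # A segment is counted as crossing a sphere of radius r when r lies
    # between its endpoint distances (an approximation that ignores segments
    # whose closest approach falls between the endpoints).
    return np.array([np.sum((lo <= r) & (hi > r)) for r in radii])
```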
Because we reconstructed neurons filled during whole-cell patch-clamp recordings, we chose to focus solely on dendritic branching patterns and did not examine parameters such as the volume of the soma, as this measure might be compromised by experimental procedures. Overlaying the atlas on images of brain slices revealed that the distances depicted in the atlas are about 90% of the physical distances in brain slices. For example, if the length of a brain region is 900 µm in the atlas, then the real distance measured in our brain sections was about 1000 µm. Therefore, to depict the reconstructed neurons ( Fig. 4h, i and Extended Data Fig. 8 ), we multiplied physical distances by a factor of 0.9. RNA sequencing (RNA-seq) Manual cell sorting and RNA sequencing The RNA-seq experiment was performed twice to verify reproducibility, and we refer to these biological replicates as experiment 1 and experiment 2 (see Extended Data Fig. 9c : samples used in experiment 1 and 2 are indicated in black ( n = 9) and blue ( n = 8) respectively below the heatmap). Manual sorting of fluorescent cells was carried out as described in ref. 37 . In brief, adult C57BL/6 male mice ( ∼ 6–8 weeks old at surgery, C57BL/6NCrl for experiment 1, C57BL/6J for experiment 2) were injected with red fluorescent retrograde beads into either NAc or CeM. Surgeries for experiment 1 were conducted at Harvard Medical School by P.N. and surgeries for experiment 2 were conducted at MIT by G.G.C. About 2 weeks after surgery, animals were decapitated under isoflurane anaesthesia and their brains were quickly removed and transferred into ice-cold oxygenated ACSF, containing 126 mM NaCl, 20 mM NaHCO 3 , 20 mM dextrose, 3 mM KCl, 1.25 mM NaH 2 PO 4 , 2 mM CaCl 2 , 2 mM MgCl 2 , 50 μM AP5, 20 μM DNQX and 100 nM TTX. Acute 330-μm coronal brain slices were prepared and incubated in protease-containing oxygenated ACSF (1.2 mg ml −1 protease E; Sigma-Aldrich) for 50 min. After 15 min of washes in the ACSF, BLA tissue was microdissected using a pair of fine scissors under a fluorescent dissecting microscope (Leica M165FC stereomicroscope). The dissected BLA tissue was then triturated in ACSF using a series of three Pasteur pipettes of decreasing tip diameters and the dissociated cells were transferred into a small Petri dish. Under visual control with a fluorescent dissecting microscope, red-retrobead-positive neurons were aspirated into a micropipette with a 30–50 μm tip diameter and transferred into a clean Petri dish. A total of 35–60 retrobead-positive neurons were pooled for each sample, which were immediately lysed in 50 μl of extraction buffer (PicoPure RNA isolation kit, Arcturus, Life Technologies) and total mRNA was subsequently isolated. Complementary DNA was synthesized using the Ovation RNA-seq System V2 kit (NuGEN). We obtained approximately 6 μg of cDNA from 35–60 cells from each group. Then, the cDNA library was prepared using the Ovation Ultralow DR Multiplex System (NuGEN). Sequencing was conducted on an Illumina HiSeq2500 using single-end 50 base pairs at the Biopolymer facility, Harvard Medical School (for experiment 1), and an Illumina NextSeq 500 using single-end 75 base pairs with a high-output flow cell at the FAS Center for Systems Biology, Harvard University (for experiment 2). The total number of reads that we obtained for each sample was approximately 34 million (for experiment 1) and 50 million (for experiment 2).
Analysis of RNA-seq data Sequencing reads were mapped using TopHat version 2.0.10 against the Mus musculus UCSC mm10 genome. After alignment, the read counts for each gene were extracted using HTSeq, based on the mm10 RefSeq GFF file. Log2 fold differences were computed from each of two independent experiments using DESeq2. Candidate differentially expressed genes were required to be enriched in CeM or NAc projectors at a quantile fold-difference threshold of 0.01 ( Fig. 4 ) or 0.02 ( Extended Data Fig. 9 ) in each of two independent experiments ( n = 8, NAc; n = 9, CeM total). To estimate false discovery rate ( Extended Data Fig. 9 ), we used two types of chance estimates. One of the chance estimates, ‘flip-flopped’, is taken from genes that passed the quantile thresholds but were enriched in opposite populations in the two experiments. Another chance estimate, ‘permuted’, is determined based on permuting fold differences across genes within each independent experiment ( Extended Data Fig. 9 ). We estimated false discovery rate using the more conservative chance estimate, flip-flopped, with the following formula: (number of genes on the flip-flopped list)/(number of genes on the differentially expressed gene list); an illustrative sketch of this estimate appears at the end of this entry. RNA-seq data has been deposited in the Gene Expression Omnibus under accession code GSE66345 Statistical analysis Statistical analyses were performed using commercial software (GraphPad Prism; GraphPad Software, Inc.). Within-subject comparisons were made using paired tests. Group differences were detected using either one-way analysis of variance (ANOVA) or two-way ANOVA, both followed by Bonferroni post-hoc tests. Corrections for multiple comparisons were made when appropriate. The reported numbers of degrees of freedom (df) for each one-way ANOVA are the between-column degrees of freedom and the total degrees of freedom. Since normality tests have little power to detect non-gaussian distributions with small data sets, we did not explicitly test for the normality of our data sets. We used the Grubbs’ test to detect and remove outliers from our data. Single-variable differences were detected with two-tailed paired or unpaired (as noted) Student’s t -tests. For all results, the significance threshold was placed at α = 0.05 (* P < 0.05, ** P < 0.01, *** P < 0.001), and corrections for multiple comparisons were reflected in the P value rather than in α . All data are shown as mean and s.e.m. To assess learning during the reward task, we used a one-sided Wilcoxon rank-sum test (MATLAB) and set the threshold for learning at P < 0.001. Result sheets of statistical tests from GraphPad software detailing (wherever applicable) estimates of variance within each group, confidence intervals, effectiveness of pairing (in case of paired t -tests), comparison of variances across groups, etc. are available upon request. Sample size The target number of samples in each group was determined based on numbers reported in published studies. No statistical methods were used to predetermine sample size. In the photoinhibition experiment, since the viral incubation time was long, we factored in the skill of the surgeon to determine the number of surgeries to be performed. Our target number of animals in each group was 12. The experimenter performing surgeries was known to hit the targets used (NAc, CeM, BLA) with a probability of 0.9.
Since there were six targets in each brain (three in each hemisphere—two injections and one optical fibre), the probability of a successful surgery would be approximately (0.9)^6 ≈ 0.5. We therefore performed about 24 surgeries in each group. All sample sizes mentioned in figures represent biological replicates. All animals receiving control virus injections were pooled into one control group ( Figs 2 and 3 ). In the Venus group from Fig. 2c , there were six animals with injections in NAc and six animals with injections in CeM. In the Venus group from Fig. 2e , there were four animals with injections in NAc and two animals with injections in CeM. In the eYFP group from Fig. 3b , there were ten animals with injections in NAc and six animals with injections in CeM. In the eYFP group from Fig. 3c , there were nine animals with injections in NAc and five animals with injections in CeM. Replication Results from AMPAR/NMDAR ratio experiments were replicated once with a different experimenter and the final numbers reported in the paper are pooled across both repetitions of the experiment ( Fig. 1 ). Photostimulation and photoinhibition experiments ( Figs 2 and 3 ) were not replicated. The RNA-seq experiment was also replicated once ( Fig. 4 ). Randomization All surgical and behavioural manipulations performed on each animal were determined randomly. All randomization was performed by an experimenter, and no explicit randomization algorithm was used. For animals used in photostimulation and photoinhibition experiments, the virus used in each animal (ChR2/Venus or NpHR/eYFP) and injection site (NAc/CeM) were determined randomly and the stereotaxic apparatus used for surgery was counterbalanced across groups. For surgeries with unilateral injections and/or fibre placements, the hemisphere used for injections was determined randomly at the time of surgery. Surgeries were performed on animals caged in groups of 4 or 5. Animals from each cage were allocated to at least two behavioural groups. All animals used in ex vivo electrophysiology experiments were isolated at least 1 day before behavioural conditioning. Exclusion criteria Ex-vivo electrophysiology Data were excluded based on pre-determined histological and electrophysiological criteria, established during pilot experiments. The injection site was determined as the most ventral point where fluorescence was brightest, and data from cells where the corresponding retrobead injection was outside the target region (NAc or CeM) were excluded ( Extended Data Fig. 1a–d ). Each recorded cell was confirmed to be a projector by overlaying the fluorescence from retrobeads with the fluorescence from Alexa Fluor dye contained in the pipette. Each cell was also confirmed to be in the BLA by visualizing with differential interference contrast microscopy under a 4× objective. We did a secondary confirmation under the confocal microscope for cells that were recovered from streptavidin staining. Cells in which evoked responses were polysynaptic (multiple peaks in the evoked current) were discarded. Data from cells whose access resistance was greater than 40 MΩ or cells that died during recording were also excluded. Animals used in photostimulation and photoinhibition experiments Data from animals used in photostimulation and photoinhibition experiments ( Figs 2 and 3 ) were excluded based on histological and performance criteria established during pilot experiments.
Histological criteria included injection sites and optical fibre placement ( Extended Data Figs 4 and 5 ). Only animals with injection sites in the region of interest (NAc or CeM) were included. For animals with a rabies virus injection in CeM, atlas outlines were overlaid manually over a confocal image of the BLA containing the damage caused by the tip of the optic fibre. Light cones based on numerical aperture of the optic fibre (NA 0.37, ∼ 15° half angle) were then drawn below the optic fibre and animals in which light cones encompassed central amygdala were excluded from further analysis. For the optical self-stimulation experiment, data from animals that did not respond at least 40 times (sum of nose pokes in active and inactive ports) over the 2 hour period were excluded from further analysis. For photoinhibition experiments, the amount of expression in each hemisphere of the BLA was rated on a scale of 0–5 based on fluorescence intensity by an experimenter blind to the behavioural performance of the animal. These ratings were gathered in an excel sheet, read by a MATLAB script and only data from animals with fluorescence ratings greater than 4 in each hemisphere were included for further analysis. Accession codes Primary accessions Gene Expression Omnibus GSE66345 Data deposits RNA-seq data has been deposited in the NCBI Gene Expression Omnibus (GEO) under accession code GSE66345 . | Neuroscientists have discovered brain circuitry for encoding positive and negative learned associations in mice. After finding that two circuits showed opposite activity following fear and reward learning, the researchers proved that this divergent activity causes either avoidance or reward-driven behaviors. Funded by the National Institutes of Health, they used cutting-edge optical-genetic tools to pinpoint these mechanisms critical to survival, which are also implicated in mental illness. "This study exemplifies the power of new molecular tools that can push and pull on the same circuit to see what drives behavior," explained Thomas R. Insel, M.D., director of NIH's National Institute of Mental Health (NIMH). "Improved understanding of how such emotional memory works holds promise for solving mysteries of brain circuit disorders in which these mechanisms are disrupted." NIMH grantee Kay Tye, Ph.D., Praneeth Namburi and Anna Beyeler, Ph.D., of the Massachusetts Institute of Technology (MIT), Cambridge MA, and colleagues, report their findings April 29, 2015 in the journal Nature. Prior to the new study, scientists suspected involvement of the circuits ultimately implicated, but were stumped by a seeming paradox. A crossroads of convergent circuits in an emotion hub deep in the brain, the basolateral amygdala, seem to be involved in both fear and reward learning, but how one brain region could orchestrate such opposing behaviors - approach and avoidance - remained an enigma. How might signals find the appropriate path to follow at this fork in the road? Neuronal projections encoding negative (red) and positive (green) associations were often intertwined, perhaps hinting at mechanisms by which positive and negative emotional associations may influence each other. Credit: Praneeth Namburi, Anna Beyeler, Ph.D., Kay M. Tye, Ph.D., Massachusetts Institute of Technology To find out, Tye and colleagues explored whether two suspect circuit projections from the crossroads might hold clues. 
One projects to a reward center, the nucleus accumbens, and the other to a nearby fear center, the centromedial amygdala, the output station of the emotion hub. Each circuit projection is composed of separate populations of intertwined neurons. The researchers first used telltale fluorescent bead tracers to sort out which neurons belonged to each circuit. They then measured an indicator of connectivity - the strength of neural connections - in the projections, after mice underwent fear or reward learning. Animals were trained to either fear a tone paired with a shock or to associate the tone with a sugar reward. Strikingly, crossroads connectivity to reward center projections decreased after fear learning and increased with reward learning. By contrast, connectivity to fear center projections increased with fear learning and decreased after reward learning. These converging mechanisms in anatomically intertwined circuits could hold clues to teasing apart how positive and negative emotional associations may influence each other, Tye suggested. To prove a causal link between the projection-identified circuits and behavior, Tye's team turned to optogenetics, which enables light pulses to control brain circuitry in animals genetically engineered to be light-responsive. Optically stimulating the reward center projection enhanced positive reinforcement, while stimulating the fear center projection promoted negative reinforcement. Similarly, blocking the fear center projection impaired fear learning and enhanced reward learning. Finally, the researchers pinpointed defining electrophysiological, anatomic and genetic features of the two circuits that help to explain the opposite connectivity responses. "Given that many mental health problems, including anxiety, addiction, and depression, may arise from perturbations in emotional processing, these findings could help to pave the way to a circuit-based approach to treating mental illness," said Tye. | 10.1038/nature14366 |
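As a footnote to the RNA-seq analysis described in the Methods of this entry, the flip-flop false-discovery estimate can be made concrete in a few lines. The Python sketch below is an illustrative reconstruction, not the authors' pipeline: in particular, applying the quantile threshold to absolute log2 fold differences within each experiment is an assumption about how the 0.01 quantile cut-off was implemented.

```python
import numpy as np

def flip_flop_fdr(lfc_exp1, lfc_exp2, quantile=0.01):
    """Estimate FDR for candidate genes using the 'flip-flopped' chance list.

    lfc_exp1, lfc_exp2: 1-D arrays of per-gene log2 fold differences
                        (CeM vs NAc projectors) from two independent experiments.
    quantile: fold-difference quantile threshold (0.01, as in Fig. 4).
    """
    lfc1, lfc2 = np.asarray(lfc_exp1, float), np.asarray(lfc_exp2, float)
    # Genes must exceed the per-experiment quantile threshold in magnitude.
    thr1 = np.quantile(np.abs(lfc1), 1 - quantile)
    thr2 = np.quantile(np.abs(lfc2), 1 - quantile)
    passing = (np.abs(lfc1) >= thr1) & (np.abs(lfc2) >= thr2)
    same_sign = np.sign(lfc1) == np.sign(lfc2)
    candidates = np.sum(passing & same_sign)     # differentially expressed list
    flip_flopped = np.sum(passing & ~same_sign)  # conservative chance estimate
    # FDR = flip-flopped genes / differentially expressed genes
    return flip_flopped / candidates if candidates else float("nan")
```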
Medicine | Review: Changing views on atherosclerosis | The changing landscape of atherosclerosis, Nature (2021). DOI: 10.1038/s41586-021-03392-8 Journal information: Nature | http://dx.doi.org/10.1038/s41586-021-03392-8 | https://medicalxpress.com/news/2021-04-views-atherosclerosis.html | Abstract Emerging evidence has spurred a considerable evolution of concepts relating to atherosclerosis, and has called into question many previous notions. Here I review this evidence, and discuss its implications for understanding of atherosclerosis. The risk of developing atherosclerosis is no longer concentrated in Western countries, and it is instead involved in the majority of deaths worldwide. Atherosclerosis now affects younger people, and more women and individuals from a diverse range of ethnic backgrounds, than was formerly the case. The risk factor profile has shifted as levels of low-density lipoprotein (LDL) cholesterol, blood pressure and smoking have decreased. Recent research has challenged the protective effects of high-density lipoprotein, and now focuses on triglyceride-rich lipoproteins in addition to low-density lipoprotein as causal in atherosclerosis. Non-traditional drivers of atherosclerosis—such as disturbed sleep, physical inactivity, the microbiome, air pollution and environmental stress—have also gained attention. Inflammatory pathways and leukocytes link traditional and emerging risk factors alike to the altered behaviour of arterial wall cells. Probing the pathogenesis of atherosclerosis has highlighted the role of the bone marrow: somatic mutations in stem cells can cause clonal haematopoiesis, which represents a previously unrecognized but common and potent age-related contributor to the risk of developing cardiovascular disease. Characterizations of the mechanisms that underpin thrombotic complications of atherosclerosis have evolved beyond the ‘vulnerable plaque’ concept. These advances in our understanding of the biology of atherosclerosis have opened avenues to therapeutic interventions that promise to improve the prevention and treatment of now-ubiquitous atherosclerotic diseases. Main Although atherosclerotic cardiovascular disease was previously considered a problem that was concentrated in the industrialized world, it now spans the globe. We have witnessed an ‘epidemiological transition’ 1 , 2 . Increased sanitation, vaccination and the treatment of acute infections have diminished the prevalence of communicable diseases in developing countries, and more individuals now survive to experience chronic diseases such as atherosclerosis (see ref. 3 for an introduction to the fundamental concepts of atherosclerosis). The adoption of less healthy dietary patterns may also have contributed to this trend. Many people now live longer only to suffer the consequences of atherosclerosis, including myocardial infarction, ischaemic cardiomyopathy (which is the commonest cause of heart failure), strokes (which often rob people of their independence, mobility, cognition or ability to communicate) and peripheral arterial disease, which limits activity and jeopardizes limbs. These conditions contribute to ‘morbidity extension’ in the developing world such that, although many individuals escape early death, they must bear the burden not only of chronic cardiovascular diseases but also of arthritis, depression and other long-term impediments to a healthy life. 
Today, the major pool of risk for developing cardiovascular disease occurs not in Western countries but in the more populous developing world. Atherosclerotic cardiovascular disease now accounts for the majority of mortality worldwide. This global spread creates an urgent need to understand the genesis of this malady, advance its management and develop prospects for alleviating its burden. This Review addresses the evolving concepts of atherogenesis and the opportunities for preventing and treating atherosclerosis afforded by new insights into its pathogenesis. The changing face of atherosclerosis The classic candidate for heart attack was a middle-aged, white man with hypertension and hypercholesterolaemia, who smoked cigarettes. This picture has evolved considerably in recent decades. We now possess effective therapies for the treatment of high blood pressure and lipid disorders, and control of hypertension and hypercholesterolaemia has improved 4 . A reduction in smoking, accompanied by a decrease in second-hand smoke, has gained a foothold in many societies. We have further witnessed a vast change in the profile of people at cardiovascular risk and of those who suffer from acute coronary syndromes. Although individuals in mid-life do not evade risk, coronary artery disease now affects an increased number of younger women, and—with the ageing of the population in many countries—the very old now account for an increasing proportion of patients with cardiac conditions 4 , 5 , 6 . An epidemic of obesity has swept across the world 7 , 8 . Excess adiposity (especially that accumulated in the abdomen) and a fatty liver drive insulin resistance, which sets the stage for diabetes, and shows links with hypertension. From a metabolic perspective, individuals of some ethnicities often tolerate the accumulation of visceral adipose tissue particularly poorly 9 . Asian and South Asian individuals, as well as people of some ethnicities in Central and South America, can develop dysmetabolism manifested as glucose intolerance at lower abdominal girths than white individuals. Given the large populations in Asia and Central and South America, increased prosperity—with attendant shifts from traditional dietary habits—and continuing tobacco use, as well as the growing burden of obesity and diabetes (often accompanied by hypertension), present an enormous public health challenge and contribute to regional risks of atherosclerotic cardiovascular disease. Rather than elevated low-density lipoprotein (LDL) cholesterol, an elevation in triglyceride-rich lipoproteins (TGRL) and low high-density lipoprotein (HDL) now comprise the major pattern of lipid abnormality in many patients who are treated for atherosclerotic cardiovascular disease 10 . Highly effective and now-inexpensive therapies for lowering LDL have contributed to an overall drop in LDL, whereas obesity, its attendant insulin resistance and a high-carbohydrate diet favour a rise in the prevalence of the cluster of conditions referred to as the ‘metabolic syndrome’, which is characterized in part by an elevation in TGRL. The prevalence of this cluster—which also includes increased waist circumference, low HDL cholesterol, high blood pressure and raised fasting blood glucose—rose by 35% from 1988–1994 to 2007–2012 in the USA 11 . Women, members of minoritized groups and populations in developing countries thus bear an increasing burden of atherosclerotic cardiovascular disease.
Indeed, despite advances in controlling risk factors in many high-income countries, atherosclerotic cardiovascular disease now predominates in lower-income areas. The Global Burden of Disease study shows that world prevalence of ischaemic heart disease has increased from about 100 million cases in 1990 to over 180 million in 2019 8 . In some regions of the USA and UK, the decline in prevalence of ischaemic heart disease—attributed to control of risk factors—has slowed or halted in the period 2014 to 2019 8 . Atherosclerotic cardiovascular disease has become a global concern, and we may be losing ground in prevention even in higher-income countries. A reassessment of the lipid risk factors LDL, a particle encircled by its signature apolipoprotein B component, causes atherosclerosis 12 , 13 . If the entire population maintained LDL concentrations akin to those of a neonate (or to those of adults of most other animal species), atherosclerosis might well be an orphan disease 14 . The duration and extent of exposure to above-ideal concentrations of LDL associate with atherosclerotic disease 13 , 15 . However, the treatment of children and adolescents with cholesterol-lowering drugs presents many challenges, and lifelong elevated concentrations of LDL cholesterol have already sown the seeds of atherosclerosis in millions of people, increasing their risk for cardiovascular disease. Despite effective interventions for the control of LDL, blood pressure and other traditional risk factors, a considerable residual risk remains for atherosclerotic cardiovascular disease 16 . For example, recent clinical trials of novel cardiovascular agents performed in subjects optimally treated with standard-of-care background therapy found that about 1 in 20 will have a recurrent ischaemic event in the year after an acute coronary syndrome 17 , 18 . One in ten individuals who survive an acute myocardial infarction in the USA will require readmission within one month, at considerable personal and societal cost 19 . Moreover, in addition to obesity and its associated insulin resistance, rises in air pollution, transitions from traditional diets towards those that may aggravate cardiovascular risk and other exposures under intense investigation (ranging from environmental noise to impaired sleep) may offset some of the advances that have been made in prevention 20 , 21 , 22 . Chief among readily remedied shifts towards unhealthy diets, the consumption of sugar-sweetened beverages, often high in fructose content, may contribute to obesity and its adverse metabolic consequences 23 , 24 . Indeed, modifiable risk factors contribute enormously to the global burden of ischaemic heart disease 2 . Large-scale cohort investigations, such as the Framingham study, revealed risk factors for atherosclerosis that we now regard as ‘traditional’ 25 . However, long-term trends have modified risk factors such that these traditional factors no longer capture the contemporary reality of atherosclerosis. Genetic risk scores have undergone continuing refinement and incorporate ever larger numbers of inherited variants that influence atherosclerotic events. As these genetic panels can predict risk from birth, they may inform the early targeted allocation of preventive measures in younger individuals who have an augmented genetic predilection to develop atherosclerotic disease 26 . Indeed, lifestyle measures appear to mitigate risk of cardiovascular events across the spectrum of estimated genetic risk.
Yet, the ability of even the latest generation of genetic risk scores to improve prediction of atherosclerotic events over more traditional algorithms remains controversial 27 , 28 . Recent research has challenged and expanded on the traditional risk factors. With global trends towards a decrease in LDL and the introduction of highly effective therapies for lowering LDL, as well as inexpensive and efficacious antihypertensive therapies, these drivers of chronic risk contribute less today than in previous years. Most markedly, and despite decades of belief that HDL protected from atherosclerosis, recent human genetic studies—and the failure of several independent pharmacological measures to raise HDL to reduce atherosclerotic events—have called into question the protective effect of HDL 29 . Mendelian randomization studies that have corrected for pleiotropy have, however, provided some support for the protective effect of HDL 30 . Moreover, functional attributes of HDL fractions that are not captured by steady-state measurements of total HDL cholesterol concentrations (such as the capacity to mediate cholesterol efflux or anti-inflammatory actions) may yet exert anti-atherosclerotic effects 31 , 32 . The risk associated with plasma triglyceride concentration (a biomarker of a class of lipoproteins that includes the TGRL) was overlooked for many years, as the belief in the protective effect of HDL rendered it logical to adjust triglycerides for HDL—a precaution that attenuated the risk attributed to TGRL 33 . Triglycerides and HDL tend to vary inversely, and a recent ranking 34 , 35 of the relevant risk factors demotes HDL as a protective factor and points to TGRL as a potent predictor of cardiovascular risk. Moreover, in contrast to the situation with HDL, contemporary and concordant human genetic studies strongly support a causal role for TGRL in atherosclerosis and its complications 36 . A variety of inherited sequence variations that affect lipoprotein lipase, or factors that modulate the activity of this enzyme, alter the rate of atherosclerotic events, and these findings furnish strong human genetic evidence for the causal role of TGRL in their pathogenesis. Apolipoprotein CIII, ANGPTL3 and ANGPTL4 inhibit the ability of lipoprotein lipase to hydrolyse triglycerides in TGRL, and thus cause accumulation of these particles. By contrast, apolipoprotein A-V augments the activity of lipoprotein lipase and enhances TGRL clearance 37 , 38 . The activity of lipoprotein lipase thus regulates plasma triglyceride concentrations. Gain- or loss-of-function variants in this pathway that raise TGRL track with increased numbers of atherosclerotic events, and those that lower TGRL correlate with improved outcomes. The triglyceride component of TGRL does not appear to account for their atherogenicity 10 . TGRL, as with LDL, bear apolipoprotein B; they also contain cholesterol, and can deliver it effectively to macrophages in the atheroma. TGRL provoke inflammation, in part owing to their apolipoprotein CIII content. TGRL concentrations correlate better with inflammatory status than does LDL itself 39 , 40 . This refocusing on TGRL as a causal risk factor, and the lack of actionability of altering HDL thus far, has notable therapeutic implications. Observational epidemiological studies have long associated a special form of LDL, lipoprotein(a), with atherothrombotic risk 41 . Lipoprotein(a) consists of an LDL particle, the signature apolipoprotein (apolipoprotein B) of which has bound covalently to apolipoprotein(a).
Lipoprotein(a) carries oxidized lipids and may inhibit fibrinolysis owing to a structural similarity with plasminogen. Concordant human genetic studies provide persuasive evidence for the causality of elevated lipoprotein(a) not only in atherosclerosis but also in calcific aortic valve disease 42 , 43 , 44 , 45 . Inflammation drives atherosclerosis Beyond dyslipidaemia, a convincing body of experimental and clinical data now indicates that inflammation participates fundamentally in atherogenesis and in the pathophysiology of ischaemic events 46 . Inflammation does not supplant or demote lipid risk; rather, inflammatory responses provide a series of pathways that link lipids and other traditional risk factors to atherosclerosis. For example, concentrations of remnant lipoprotein show links with levels of C-reactive protein, a biomarker of inflammation 40 . A large body of evidence implicates inflammation in hypertension 47 . Experimental investigations have pinpointed the participation of innate and adaptive immunity in atherosclerosis (Figs. 1 , 2 ). Human biomarker studies have shown that indicators of inflammation predict risk of cardiovascular disease in a broad swath of individuals with or without manifest cardiovascular disease, and independently of all traditional risk factors 48 . The acute phase reactant C-reactive protein, which can be measured with a highly sensitive assay (known as hsCRP), is a validated and clinically useful gauge of the overall innate immune status of an individual in relation to atherosclerotic risk 49 . Fig. 1: Initiation of atherosclerosis. The normal artery comprises three layers: the innermost intima (in close contact with the bloodstream), the tunica media, and the outer coat, the adventitia. Under homeostatic conditions, the endothelial monolayer that lines the intima does not gather blood leukocytes. When activated by proinflammatory cytokines or other irritative stimuli related to cardiovascular risk factors, endothelial cells can express a leukocyte adhesion molecule (such as VCAM-1) that interacts with its cognate ligand (VLA4) to promote the rolling, and eventually adherence, of blood monocytes and lymphocytes to the endothelial layer. Chemoattractant cytokines can direct the migration of these bound leukocytes into the intima. Within the intima, foam cells form by uptake of lipids. Some of these lipid-laden foam cells arise from blood monocytes that have matured into macrophages. Recent evidence 146 in mice indicates that smooth muscle cells can undergo metaplasia, and give rise to foam cells that bear markers in common with those of macrophages. T lymphocytes—although fewer in number than the foam cells—produce mediators that orchestrate many functions of these innate immune cells. In humans (but not in many of the small animals that are often used experimentally), the intima contains resident smooth muscle cells. Other smooth muscle cells (that are usually located in the media) can penetrate into the intima, where they join resident smooth muscle cells to promote the accumulation of the extracellular matrix that these cells synthesize within the expanding intima. Full size image Fig. 2: The progression of atherosclerosis reflects an interplay between factors that promote or mitigate atherogenesis. This diagram summarizes results from experimental studies in mice, and observations on human atherosclerotic plaques.
Pathways thought to promote lesion formation (factors in red) are shown on the left, and mechanisms that may moderate atherogenesis (factors in blue) are on the right. Smooth muscle cells and macrophages can proliferate as the intimal lesion grows. PDGF promotes the migration and replication of smooth muscle cells, and then the production of extracellular matrix. All cells in the atheromatous plaque can secrete cytokines, examples of which include IL-1, TNF and M-CSF (also known as CSF1). Activated T-helper 1 (T H 1) lymphocytes produce IFNγ, which can stimulate mononuclear phagocytes and aggravate atherosclerosis. Other types of cell produce countervailing mediators. B1 lymphocytes can secrete IgM natural antibody; T-helper 2 (T H 2) lymphocytes produce the anti-inflammatory cytokine IL-10; and regulatory T (T reg ) cells can secrete TGFβ. These mediators can antagonize cellular proliferation, promote extracellular matrix synthesis and quell inflammation. Mononuclear phagocytes can engulf dying or dead cells that arise through apoptosis, through a process known as efferocytosis. Inefficient efferocytosis favours the accumulation of debris from dead or dying cells, and promotes formation of the central lipid core of the atherosclerotic plaque. B2 lymphocytes secrete mediators (such as BAFF, a member of the TNF family) that can aggravate atherogenesis. This diagram shows only a subset of the mediators that have been implicated in promoting or antagonizing aspects of atherogenesis. Current research suggests an ongoing struggle between proliferation and death, involving proinflammatory, anti-inflammatory and proresolving mediators—generally through a prolonged course of many years in the evolution of the human atherosclerotic plaque. Full size image In addition to innate immunity (which depends largely on cytokines and macrophages), the adaptive arm of the immune response operates during atherogenesis. T lymphocytes primarily aggravate atherogenesis, but T-helper 2 and regulatory T cells can mute this process, at least in mice 46 , 50 . Candidate antigens include forms of LDL, as discussed in ‘Oxidized LDL and the initiation of lesions’. B lymphocytes can also exert a dual role in atherogenesis. Natural IgM antibodies encoded in the germline and produced by B1 cells mitigate experimental atherosclerosis 51 , 52 . B2 cells can produce antibodies that may drive this process. One candidate antigen, the mitochondrial enzyme ALDH4A1, emerged from analysis of B cells isolated from mouse atheromata 53 . An increasing number of links between inflammation, immunity and intermediary metabolism have recently emerged. Inflammatory activation of mononuclear phagocytes and endothelial cells tends to shift their metabolism towards glycolytic pathways 54 , 55 , 56 . Altered tryptophan metabolism has also garnered considerable interest in atherosclerosis. Cytokines induce indoleamine dioxygenase (which catalyses the rate-limiting step in tryptophan catabolism), lowering intracellular tryptophan stores and augmenting the production of kynurenine and its metabolites; this latter pathway may have a counter-regulatory function by muting inflammation and the cellular immune response 57 , 58 , 59 . The applicability to humans of experimental evidence that implicated inflammation in atherosclerosis has engendered considerable controversy 60 , 61 .
However, recent clinical trials have shown that targeting inflammation can reduce cardiovascular events even in individuals who have already been treated with a full panel of effective standard therapies. The ‘Canakinumab Anti-inflammatory Thrombosis Outcomes Study’ (CANTOS) randomly allocated an antibody that neutralizes the proinflammatory cytokine IL-1β to patients with stable coronary artery disease at least one month after a qualifying myocardial infarction 62 . The enrolled population had signs of inflammation despite standard-of-care medical therapy, as mandated by prevailing guidelines (gauged by an hsCRP above 2 mg l −1 ). The participants had a baseline LDL of approximately 2 mM (81 mg dl −1 ). The anti-inflammatory therapy yielded a 15% relative reduction in risk for recurrent myocardial infarction, stroke or cardiac death. In an on-treatment analysis, individuals who responded to the IL-1β neutralization by achieving a greater-than-median reduction in hsCRP had a 26% reduction in the primary end point, and a decrease in all-cause mortality. As IL-1β participates in host defences, it was not surprising that CANTOS showed a small but statistically significant increase in infections (including fatal infections) in patients randomized to canakinumab. A highly significant reduction in incident and fatal lung cancer noted in exploratory analyses counterbalanced this risk of infection 63 . The natural product colchicine has served as an anti-inflammatory therapy for many years, and its use has become standard-of-care in the treatment of pericarditis. Two recent large-scale outcome trials have indicated efficacy in reducing recurrent cardiovascular events after the development of acute coronary syndromes. The ‘Colchicine Cardiovascular Outcomes Trial’ (COLCOT) showed a 23% reduction in the composite primary end point, driven primarily by fewer revascularizations in patients treated in the early phase after developing an acute coronary syndrome (4–30 days) 64 . The incidence of pneumonia more than doubled in the group who were treated with colchicine. The ‘Low Dose Colchicine 2’ (LoDoCo2) study administered colchicine (0.5 mg d −1 ) to individuals with stable coronary artery disease, and reported a reduction in recurrent events similar to that seen in COLCOT 65 . These recent large-scale clinical outcome trials have bolstered the clinical applicability of decades of fundamental research into inflammatory pathways in the pathogenesis of atherosclerosis. The increase in infections noted with canakinumab and colchicine indicates an opportunity for the refinement of anti-inflammatory therapy for atherosclerosis that retains efficacy in limiting adverse outcomes, while interfering less with host defence. But not all anti-inflammatory interventions have yielded clinical benefit 66 . For example, a trial with low-dose weekly methotrexate did not improve cardiovascular outcomes, nor did it exert an anti-inflammatory effect in the population studied 67 . Obesity and its attendant dysmetabolism, often manifested by insulin resistance and diabetes, now drive an increasing proportion of cardiovascular disease risk worldwide. Adipose tissue abounds with inflammatory cells and produces proinflammatory mediators, and inflammation contributes mechanistically to the link between obesity, insulin resistance and atherosclerotic risk 68 .
Moreover, environmental factors, such as air pollution, noise, disturbed sleep and other stressors, have gained increasing recognition as contributors to the risk of atherosclerotic events, in part through the activation of inflammatory pathways 20 , 21 . An additional aspect of risk allied with inflammation has recently emerged. With age, we accumulate somatic mutations in haematopoietic stem cells in the bone marrow in genes that drive the development of acute leukaemia 69 , 70 . Investigators seeking the origins of leukaemia found that individuals who are apparently well and do not have haematological malignancies can generate clones of leukocytes that circulate in peripheral blood and that bear mutations in a handful of the known driver genes for leukaemia. The prevalence of this condition in individuals aged 70 exceeds 10%, and this burden increases with further ageing. As the population ages, the number of individuals who bear these clones will increase. As expected, those who have this condition—known as clonal haematopoiesis of indeterminate potential (CHIP)—have a risk of developing acute leukaemia more than tenfold that of unaffected individuals. However, the increase in total mortality in individuals with CHIP far exceeds that attributable to transformation to acute leukaemia. Cardiovascular disease accounts for this gap in mortality 71 . Fully adjusted for all traditional risk factors, CHIP confers a risk for myocardial infarction and stroke equal to or greater than that of previously recognized contributors to cardiovascular risk, save for age itself. The qualification ‘indeterminate potential’ reflects the lack of symptoms or consistent laboratory findings in bearers of these mutant clones of peripheral leukocytes, and our current inability to predict which individuals with CHIP will develop leukaemia or cardiovascular disease. Indeed, many carriers of CHIP will never know that they have this condition 72 , 73 . The connection between CHIP and inflammation arises from experimental work that demonstrated that mice engineered to simulate CHIP with mutations in Dnmt3a or Tet2 have accelerated atherosclerosis, and demonstrate increased activity of the NLRP3 inflammasome–IL-1β–IL-6 proinflammatory pathway 71 , 74 . Mice with myeloid cells that bear Jak2 V617F (another mutation associated with CHIP) show activation of the AIM2 inflammasome 75 . Concentrations of the inflammatory marker hsCRP do not rise consistently in individuals with CHIP. This observation indicates the existence of aspects of inflammation that mark augmented cardiovascular risk but that are not captured by measurements of hsCRP. In sum, we have witnessed a considerable change in the ranking of risk factors for atherosclerosis: some factors have receded in importance and relevance given current therapies, and others have rapidly expanded, in part owing to socioeconomic and behavioural factors 2 (Table 1 ). Revised concepts of atherogenesis Oxidized LDL and the initiation of lesions Most reviews of the mechanisms of atherosclerosis posit a pivotal role for oxidized LDL as the prime mover of this disease (Fig. 1 ). Although LDL participates causally in atherogenesis, scant evidence—despite a large body of animal research—actually supports a causal role for oxidized LDL in humans. A variety of clinical intervention trials with antioxidant vitamins or a highly effective lipophilic antioxidant drug have not reduced atherosclerotic events.
Native, rather than oxidized, LDL appears to drive the adaptive immune response in mice 76 . LDL per se appears to be a relatively weak stimulus to innate immune activation. Recent work supports the participation of caveolin-1-dependent LDL transcytosis through the endothelium in experimental atherosclerosis 77 . ALK1 and SRB1 can also participate in LDL transcytosis 78 , 79 . Although these results emphasize the causality of LDL in atherogenesis, they do not invoke the participation of oxidized LDL in this process. How LDL causes atherosclerosis is not understood completely, and we should seek explanations beyond the oxidation hypothesis. When oxidized lipids bind to plasminogen, they can activate fibrinolysis 80 . Thus, oxidized lipids may promote atherogenesis but boost thrombolysis—an opposing effect that could contribute to the net lack of benefit in trials of anti-oxidant strategies 81 . LDL that aggregates in the intima in association with proteoglycan, or adaptive immune responses to native LDL, provide alternative mechanisms through which this lipoprotein promotes atherogenesis. Macrophages in plaques take up aggregated LDL 82 , and the LDL receptor-related protein can mediate the uptake of aggregated LDL by intimal smooth muscle cells 83 . Regardless of the initial trigger or triggers, experimental and human observations agree that the recruitment of blood leukocytes mediated by activation of endothelial cells that line the arterial lumen occurs early in lesion formation (Fig. 1 ). The resting endothelium resists attachment of blood leukocytes. In an atherogenic environment, endothelial cells can express leukocyte adhesion molecules that mediate the rolling and firm attachment of white blood cells to the intimal surface (Fig. 1 ). Chemokines direct the migration of the adherent leukocytes into the arterial intima. Mononuclear phagocytes can proliferate within the intimal layer (the site of lesion initiation) 84 . These cells engulf lipids and become foam cells, the hallmark of atherosclerotic lesions. T lymphocytes, which drive the adaptive immune response, interact with innate immune cells within the intima 85 , 86 . A proinflammatory subset of monocytes gives rise to lesional macrophages 87 , 88 . Recent lineage-tracking experiments support a smooth muscle origin of many foam cells in mouse atheromata 89 . The cooperation between these cellular constituents of innate and adaptive immunity stimulates the production of proinflammatory cytokines that sustain and amplify the local inflammatory response. Inexorability of atheroma progression Many have considered atherosclerosis an inevitable ‘degenerative’ process that progresses continuously over time (Fig. 2 ), but current evidence supports a much more dynamic and discontinuous evolution of atheromata 90 , 91 , 92 . Episodes of systemic inflammation or regional inflammation that is remote from the atherosclerotic plaque itself can provoke ‘crises’ in the evolution of the plaque, and stimulate a round of inflammatory activation that can promote cell migration, proliferation, lesion progression and complication (Fig. 2 ). The concept of ‘trained immunity’ raises the possibility that successive encounters with irritative stimuli elicit exaggerated responses 93 , 94 . Arterial smooth muscle cells—which normally reside quiescent in the middle layer of the artery (the tunica media)—enter the intimal layer, where they can proliferate and may undergo metaplasia to become macrophage-like cells 89 .
The application of advanced cell-sorting techniques and of single-cell RNA sequencing has disclosed a high degree of heterogeneity in the cellular contributors to atherosclerosis 95 , 96 , 97 . Sorting out the functional consequences of the newly identified cell types that participate in atherogenesis will require considerable work, and holds promise for advancing the identification of new therapeutic targets. Atherosclerosis may not proceed continuously, but rather in phases of relative quiescence punctuated by periods of rapid growth. Emerging evidence points to haematopoiesis as a key contributor to lesion evolution and as a link between regional inflammation, environmental stimuli and atherogenesis 98 . Mental stress, sleep disturbance and remote injury or infection can stimulate haematopoiesis in the bone marrow, furnishing leukocytes that can populate the plaque 98 , 99 . Extramedullary haematopoiesis, as well as the mobilization of preformed pools of leukocytes in the spleen, furnishes further leukocytes that can home in on atheromata under stress situations. Indeed, the work that identified CHIP as a risk factor for atherosclerosis underscores the link between atherosclerosis and haematopoiesis. These observations have opened a window onto the pathogenesis of atherosclerosis, and provide a link between oncogenesis and atherogenesis that was unsuspected only a few years ago. The death of mononuclear phagocytes in the lesion, and their ineffective clearance (defective efferocytosis), promote the formation of the lipid or necrotic core of the atherosclerotic lesion 100 . Lesion progression can occur silently over many decades. Indeed, many young or middle-aged individuals harbour asymptomatic, subclinical atherosclerotic lesions, as shown by autopsy and imaging studies 101 , 102 , 103 . ‘Vulnerable plaques’ The acute events, such as myocardial infarctions and ischaemic strokes, that complicate atherosclerosis arise from thrombosis, or the formation of blood clots (Fig. 3 ); a physical disruption of atherosclerotic plaques provokes most acute thromboses. The so-called ‘vulnerable plaque’ has received considerable attention 104 , 105 . A fracture of the fibrous cap of the plaque (which overlies the necrotic core) exposes blood and its coagulation proteins to thrombogenic substances (such as tissue factor) within the plaque, triggering acute thrombosis 106 (Fig. 3a ). The fibrous cap owes its tensile strength largely to interstitial collagen. The thinning of the fibrous cap arises from an inflammation-related decrease in collagen synthesis and augmented degradation owing to overexpression of collagenases by inflammatory cells 107 . Autopsy studies 104 have implicated rupture of the fibrous cap as the cause of the majority of fatal acute coronary syndromes, stimulating focus on the thin-capped fibroatheroma as a possible culprit. Yet, post-mortem studies such as these lack a denominator for how many lesions with the characteristics attributed to vulnerability do not cause acute thrombotic complications. Recent in vivo imaging studies in humans have furnished this missing information, and have shown that thin-capped plaques seldom cause clinical events 108 , 109 , 110 . Thus, current evidence shows that 'vulnerable plaque' is a misnomer 111 , 112 . Fig. 3: Thrombotic complications of atherosclerosis and evolution of the atherosclerotic plaque. a , Plaque rupture. This involves a fracture or fissure of the fibrous cap that overlies the lipid core of the plaque.
This physical disruption permits contact of blood coagulation factors with thrombogenic material (principally the potent procoagulant tissue factor) within the plaque. The ensuing thrombosis can obstruct blood flow and lead to cardiac ischaemia. This mechanism accounts for about two-thirds of acute myocardial infarctions, but appears to be waning; current preventive therapies lead to a reduction in the accumulation of lipid within plaques and to the reinforcement of the fibrous cap. b , Superficial erosion. This cause of coronary artery clot formation involves a sloughing or desquamation of the endothelial monolayer. Granulocytes trapped in the plaque or adherent to the intimal basement membrane can form neutrophil extracellular traps (NETs). NETs are strands of nuclear DNA that have unwound, present various neutrophil granular proteins and bear other proteins that they bind from the blood, forming a solid-state reactor on the intimal surface. NETs can propagate inflammation and thrombosis. c , d , Plaques can heal, which augments the bulk of the plaque and promotes the formation of flow-limiting stenoses in previously disrupted arteries. During thrombosis, platelets release PDGF and TGFβ, which promote the synthesis of extracellular matrix proteins that contribute to fibrosis and plaque growth. c , Ruptured plaques that have healed often show morphological evidence of the rupture underneath a layer of more recently deposited extracellular matrix (a ‘buried’ fibrous cap). d , Plaques can also grow through incorporation of a thrombus. Lesions can also calcify (not shown), in part owing to cell-derived microvesicles that can nucleate this process. Regions of spotty calcification imaged by computed tomography correlate with an increased risk of a thrombotic event. In contrast to smaller deposits of calcium, macroscopic plates of calcium may stabilize plaques against mechanical disruption (rather than create the inhomogeneity in stresses that promotes thrombotic complications due to plaque disruption). Full size image In an era of intense lipid lowering, plaques of the classical vulnerable morphology are on the wane 113 . Another mechanism of plaque disruption (known as superficial erosion) currently appears to be on the rise and probably has a distinct pathophysiology 114 , 115 , 116 (Fig. 3b ). This trigger to coronary thrombosis does not involve fissure or rupture of the fibrous cap of the plaque, but rather a discontinuity in the intimal endothelial lining. The application of an intravascular imaging modality known as optical coherence tomography enables identification of plaque rupture, and has led to the development of criteria for the diagnosis of probable or definite erosion in individuals with acute coronary syndromes 117 . The mechanisms of erosion involve endothelial injury, with polymorphonuclear leukocytes and neutrophil extracellular traps acting as local contributors to thrombus formation and propagation 114 , 118 , 119 . Clinical implications From a therapeutic perspective, we have reason for optimism in addressing the growing burden of atherosclerotic risk. First, we face an imperative to encourage healthy lifestyle choices and a public environment that supports them by encouraging physical activity, discouraging sugar-sweetened beverages that contribute to the obesity epidemic and continuing to abate tobacco use. Indeed, a healthy lifestyle can mitigate—in part—genetic risk for atherosclerotic events 120 .
Such public health measures include promoting pedestrian zones, bicycle paths and playgrounds, and providing healthy foods in schools. Second, drug therapy for atherosclerosis has advanced not only through the availability of agents that address an expanded array of causal targets, but also by harnessing biomarkers and genetic information to deploy therapy more precisely 62 , 121 . The application of evidence-based interventions that address the established risk factors provides a firm foundation for future advances. The introduction of statins (agents that are highly effective in lowering LDL and that also quell inflammation independently of effects on lipids) has revolutionized the prevention and treatment of atherosclerosis, a topic that has been well-reviewed in previous publications 122 . The availability of an inhibitor of intestinal absorption of cholesterol that targets the Niemann-Pick C1-like protein 1 yields further reductions in LDL and a further decrement in cardiovascular events 123 . The identification of mutations in PCSK9 as a cause of autosomal dominant hypercholesterolaemia led rapidly to the development of therapeutic agents that can further improve cardiovascular outcomes in patients already treated with statins 17 , 18 , 124 . PCSK9 conducts the LDL receptor to the lysosome, where it undergoes proteolytic degradation. Inhibition of PCSK9 favours the recycling of undegraded LDL receptors to the cell surface, where they can capture and internalize LDL, and thus lowers the plasma concentration of this highly atherogenic lipoprotein 124 . The introduction and recent approval in the USA of bempedoic acid (an inhibitor of ATP citrate lyase, which acts upstream of hydroxymethylglutaryl co-enzyme A reductase (the target of statins)) adds to the range of non-statin agents that lower LDL 125 , 126 . Inclisiran (a small interfering RNA that limits the production of PCSK9) provides a notably long duration of action, and could be administered twice a year or even annually 127 . Large-scale clinical trials in progress will evaluate the ability of these newer LDL-lowering treatments to improve cardiovascular outcomes. An antisense RNA agent that targets lipoprotein(a), the highly atherogenic cousin of LDL, has entered clinical investigation 128 . Treatment of elevated lipoprotein(a) (which is often familial) has proven an enduring problem in preventive cardiology 129 . Beyond LDL, a cardiovascular end-point study is evaluating a selective PPARα agonist in individuals with high triglycerides and low HDL 130 . The ‘Reduction of Cardiovascular Events with Icosapent Ethyl–Intervention Trial’ revealed that prescription-grade eicosapentaenoic acid can reduce events substantially in individuals with hypertriglyceridaemia 131 , 132 . Part of this benefit (but probably not all of it) results from a lowering of blood triglycerides; some of the benefit may accrue from an anti-inflammatory action 133 . Recent studies targeting IL-1β (CANTOS) and trials with colchicine (COLCOT and LoDoCo2) have demonstrated the ability of anti-inflammatory therapies that do not lower atherogenic lipids to reduce cardiovascular events in patients who are already receiving a full standard-of-care regimen, including high-dose statins 62 . Beyond affirming the inflammation hypothesis of atherosclerosis, these studies identify colchicine as a readily actionable anti-inflammatory therapeutic agent for atherosclerosis.
There is currently a revolution at the conjunction of diabetes and cardiovascular disease. A ‘glucocentric’ view of diabetic complications has long prevailed 134 . Although microvascular complications (such as retinopathy, nephropathy and neuropathy) do respond to glucose lowering, the heightened atherosclerotic risk in people with diabetes had—until recently—proven intractable to traditional hypoglycaemic interventions. Recent studies with SGLT2 inhibitors and GLP1 receptor agonists indicate that the macrovascular complications of diabetes involve mechanisms beyond glucose lowering 135 , 136 , 137 , 138 , 139 , 140 , 141 . These late-generation agents promise to make inroads in the prevention of cardiovascular complications of diabetes (including myocardial infarction, stroke, heart failure, renal disease and premature death). The success of these drugs in forestalling cardiovascular events emphasizes protective mechanisms beyond those of glucose lowering; the investigation of these mechanisms promises to open avenues in the understanding and treatment of atherosclerosis and heart failure, a common complication of coronary artery disease. Atherosclerosis is a moving target A conjunction of fundamental research and clinical investigations has markedly altered traditional concepts of atherosclerosis, and has informed improvements in our ability to manage atherosclerotic risk (Table 1 ). At the same time, the clinical profile of patients with atherosclerosis has evolved considerably from that of the classical cohort studies that have long furnished the basis of our thinking about this disease. Research advances have arisen from improvements in human genetics studies enabled by next-generation sequencing and other technological innovations (including bulk and single-cell RNA sequencing), and the ever-evolving toolkit for genetic manipulation of mice, including gene-editing and induced pluripotent stem cell methodology 142 , 143 . Beyond DNA and mRNA analyses, understanding of the functions of non-coding RNAs in atherosclerosis has improved. MicroRNAs and long non-coding RNAs alter the transcription of genes implicated in atherosclerosis 144 , 145 . These advances will doubtless lead to therapies to address the unacceptable burden of risk that persists in spite of current interventions. Table 1 Changing views on atherosclerosis Full size table Transforming these scientific advances into therapies has required large-scale clinical investigations, which—because of the success of standard-of-care therapies—have required increasing ingenuity and investment. Placebo-controlled, randomized clinical trials remain the most reliable tool for validating the application to humans of therapies derived from laboratory discoveries. However, we also need to embrace targeting segments of the larger patient population, enriching those enrolled in clinical trials for enhanced risk and responsiveness to specific interventions. This approach has revolutionized the management of cancer, but has barely begun in the cardiovascular arena. The application of polygenic risk scores may identify young people who may benefit particularly from early preventive treatments. On the one hand, we are witnessing a globalization of cardiovascular disease risk that has increased the overall burden of atherosclerotic disease. On the other hand, progress in laboratory and clinical investigation promises to provide us with tools to confront this global epidemic.
Ultimately, making inroads in the control of atherosclerosis will require a multidisciplinary partnership of public health measures, applied behavioural psychology, risk factor control, consistent application of existing therapies, and the development and validation of new therapeutic approaches. | Atherosclerosis—hardening of the arteries—is now involved in the majority of deaths worldwide, and advances in our understanding of the biology of the disease are changing traditional views and opening up new avenues for treatment. The picture of who may be at risk for a heart attack has evolved considerably in recent decades. At one time, a heart attack might have conjured up the image of a middle-aged white man with high cholesterol and high blood pressure who smoked cigarettes. Today, traditional concepts of what contributes to risk have changed. These updated views include new thinking around: Global disease burden: Atherosclerotic cardiovascular disease is now the leading cause of death worldwide.Clinical profile: Women, younger individuals, and people of diverse backgrounds bear an increasing burden of atherosclerotic cardiovascular disease.Role of "good cholesterol": The protective role of HDL cholesterol (so-called "good cholesterol") has been called into question and triglycerides have emerged as a promising target for reducing heart disease risk.Inflammation drives atherosclerosis: New data suggests that inflammation may be a critical link between traditional risk factors such as abnormal lipids, smoking, and diabetes and complications of atherosclerosis including heart attack and stroke. "Advances in our understanding of the biology of atherosclerosis have opened avenues to therapeutic interventions that promise to improve the prevention and treatment of now-ubiquitous atherosclerotic diseases," writes Libby. "From a therapeutic perspective, we have reason for optimism in addressing the growing burden of atherosclerotic risk." | 10.1038/s41586-021-03392-8 |
Physics | Scientists demonstrate time reflection of electromagnetic waves | Andrea Alù, Observation of temporal reflection and broadband frequency translation at photonic time interfaces, Nature Physics (2023). DOI: 10.1038/s41567-023-01975-y. www.nature.com/articles/s41567-023-01975-y Journal information: Nature Physics | https://dx.doi.org/10.1038/s41567-023-01975-y | https://phys.org/news/2023-03-scientists-electromagnetic.html | Abstract Time reflection is a uniform inversion of the temporal evolution of a signal, which arises when an abrupt change in the properties of the host material occurs uniformly in space. At such a time interface, a portion of the input signal is time reversed, and its frequency spectrum is homogeneously translated as its momentum is conserved, forming the temporal counterpart of a spatial interface. Combinations of time interfaces, forming time metamaterials and Floquet matter, exploit the interference of multiple time reflections for extreme wave manipulation, leveraging time as an additional degree of freedom. Here we report the observation of photonic time reflection and associated broadband frequency translation in a switched transmission-line metamaterial whose effective capacitance is homogeneously and abruptly changed via a synchronized array of switches. A pair of temporal interfaces are combined to demonstrate time-reflection-induced wave interference, realizing the temporal counterpart of a Fabry–Pérot cavity. Our results establish the foundational building blocks to realize time metamaterials and Floquet photonic crystals, with opportunities for extreme photon manipulation in space and time. Main Reflection is a universal phenomenon occurring when a travelling wave encounters an inhomogeneity. Spatial reflections arise at a sharp discontinuity in space: here momentum is exchanged between an incoming wave and the interface, which acts as a momentum bath, as frequency is conserved. As the basis of wave scattering, spatial reflections play a key role in wave control and routing, as well as in the formation of resonant modes, filtering, band engineering and metamaterial responses. Recently, advances across nonlinear wave sciences have stirred substantial interest in the use of time as an additional degree of freedom for wave scattering, leveraging time-varying media as reservoirs that mix and exchange energy with waves in the system. As examples of these opportunities, photonic time crystals and Floquet wave phenomena have raised interest across the broader physics community 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 . In this context, time reflection (TR) constitutes the temporal counterpart of spatial reflection, with dual features. This effect occurs at a time interface, that is, when the properties of the host medium are homogeneously switched in space over a timespan much faster than the wave dynamics. On TR, an input wave is partly time reversed: its energy and frequency content are generally transformed as momentum is conserved because of spatial translational symmetry 9 , 10 . Time reversal is a key functionality for a variety of applications, from channel estimation in communication systems to compensation of signal distortion and dispersion. The most common way of realizing time reversal is through the digitization and retransmission of a recorded signal through a computer 11 , but with notable requirements in terms of processing time and energy, as well as memory demands. 
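To make the memory and processing burden of the digital route concrete, the following minimal sketch illustrates the record–reverse–retransmit approach; the sampling rate and test waveform are arbitrary placeholders, not parameters from the experiment.

```python
import numpy as np

# Minimal sketch of digital time reversal: record a waveform, reverse its sample
# order and retransmit it. Parameters are arbitrary; a real system would add
# ADC/DAC stages, buffering and synchronization.
fs = 1e9                                   # sampling rate, 1 GS/s (placeholder)
t = np.arange(0, 200e-9, 1 / fs)           # 200 ns record window
pulse = np.exp(-((t - 60e-9) / 15e-9) ** 2) * np.cos(2 * np.pi * 60e6 * t)  # 60 MHz Gaussian pulse

recorded = pulse                           # "digitization" step
time_reversed = recorded[::-1]             # uniform inversion of the temporal order

# The reversed record is what would be retransmitted; note the memory cost:
print(f"samples buffered: {recorded.size}")
```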
In the analogue domain, time reversal can be achieved by periodically modulating the properties of the host medium at twice the frequency of the signal. This phenomenon has been observed in acoustics 12 and magnonics 13 , as well as for electromagnetic waves at both radio-frequencies 11 and, with lower efficiency, in optics 14 . However, parametric phenomena are inherently slow and narrowband, relying on extended exposure of the signal of interest to a periodic modulation driving the resonant coupling between the positive and negative frequencies 14 , and hence subject to instabilities leading to highly dispersive and nonlinear distortions. On the contrary, TR at a time interface enables ultrafast and ultrabroadband time reversal and—where desirable—efficient frequency translation of an arbitrary waveform. Although several exciting theoretical proposals have been put forward to exploit these features for a variety of exotic photonic functionalities, including subwavelength focusing 15 , imaging through random media 16 , temporal anti-reflection coatings and advanced frequency filtering 17 , inverse prisms 18 , temporal aiming 19 , analogue computing 20 and the ultrafast generation of squeezed states of quantum light 21 , time interfaces have so far been observed only for water waves 22 , remaining elusive to photonics and thus drastically limiting their impact. The key challenge in this quest consists of designing and realizing a setup capable of imparting sufficiently strong and fast variations to the electromagnetic properties of a material uniformly in space, hence requiring a metamaterial featuring a temporal response much faster than temporal wave dynamics. Energy and dispersion requirements may also become very demanding when imparting such strong and fast modulations of the material properties 23 , factors that have been hindering the experimental demonstration of TR in electromagnetics to date. Here we are able to tackle these challenges, and demonstrate time interfaces and TR in a microwave transmission-line metamaterial (TLM) periodically loaded by a deeply subwavelength array of lumped capacitors, synchronously added to, or removed from, a microstrip through voltage-controlled reflective switches. On switching, we can uniformly and strongly change the effective capacitance per unit length of the TLM much faster than the temporal variations of the broadband signals propagating through it. This realizes a time interface, with associated photonic TR, as well as broadband, efficient frequency translation (Fig. 1a ). As we discuss in the following, by switching the capacitive loads in and out, we are able to drastically and abruptly modify the electromagnetic properties of the TLM without affecting its linear dispersion and without large energy requirements. By implementing a pair of such time interfaces, we form a temporal slab in which the reflected and refracted signals at each time interface interfere, demonstrating the temporal analogue of a Fabry–Pérot filter 17 . Our results establish the fundamental building blocks to exploit time as an additional degree of freedom for extreme wave manipulation in metamaterials 24 , 25 , 26 . Fig. 1: Observation of photonic TR. a , Illustration of a time interface in a uniformly switched TLM. A step-like bias signal (green) is used to uniformly activate a set of switches distributed along the TLM, with spacing much smaller than the wavelengths of operation.
On closing (opening) of the switches, the effective TLM impedance is abruptly decreased (increased) by a factor of two, causing a broadband forward-propagating signal (blue) to be split into a time-refracted and a TR signal, both with redshifted frequencies (red). b , Photograph of the fabricated time-switched TLM. c , Reflection at a spatial interface causes the reflected signal to invert its profile in space. d , A temporal interface breaks time-translation symmetry in a spatially homogeneous medium, uniformly inverting the evolution of an input signal in time. e , Simulated (Sim.) and measured (Meas.) dispersion relations of the fabricated TLM before (blue) and after (red) activating the switches. The purple arrow indicates broadband transitions induced by spatial reflection, coupling positive and negative momenta, whereas the green arrow indicates TR, coupling positive and negative frequencies. f , Experimental observation of photonic TR at a time interface with an asymmetric pulse consisting of a smaller input signal (yellow marker in input port voltage V 1 ) followed by a larger one (purple marker). Within 3 ns after the switch logic (middle) turns on, a portion of the input signal undergoes TR, propagating back to the input port, where the two signals are measured in reverse order (purple marker and then yellow marker; zoomed-in view in the inset), with flipped polarity. After ~140 ns, the time-refracted signal, having undergone a spatial reflection at the end of the TLM, returns to the V 1 port in the original yellow–purple order. The output port voltage V 2 shows the time-refracted signal (bottom), broadened in time due to the broadband redshift in its frequency content induced by the time interface. Signal amplitudes are plotted accounting for the power lost in the splitter and impedance change (Supplementary Section 3 ). Source data Full size image A photograph of our fabricated TLM is shown in Fig. 1b . A broadband input signal is injected from one of the ports, and it travels along the meandered microstrip line, loaded by an array of 30 switches connected in series to an array of subwavelength-spaced capacitors (unit-cell length, ~20.1 cm; Supplementary Sections 1 and 2 provide details on the TLM design and implementation). The meandered microstrip emulates an unbounded medium with close-to-linear dispersion; once the signal is fully within the TLM, a control signal for the switches is sent via a pair of much shorter microstrips (with 80 times faster transit time across the TLM), synchronously triggering all the switches with a rise time of ~3 ns, much faster than the temporal dynamics of the incoming wave (Supplementary Section 8 shows the details on switch synchronization). This switching event is much faster than half of a wave period, and its amplitude is of the order of unity, as required for efficient TR 27 , resulting in an efficient time interface.
Hence, the ‘echo’ associated with the time-reversed ( t →− t ) signal is detected backwards, whereas the signal retains its original spatial profile due to momentum conservation. In addition, the broadband frequency content of the input signal is abruptly transformed, as predicted by the band diagrams of our TLM (Fig. 1e ); here the blue and red lines depict the TLM dispersion curve before and after the switching, comparing the simulated (lines) and measured (circles) results. Given the small spacing between neighbouring loads compared with the relevant wavelengths, the curves follow a linear dispersion, with different slopes corresponding to the different effective capacitance before and after switching. Wave scattering at a spatial discontinuity (Fig. 1c ) is equivalent to a horizontal transition in the dispersion diagram (Fig. 1e , purple arrow), preserving frequency and generating waves with new positive and negative momenta. Conversely, a time interface (Fig. 1d ) corresponds to a vertical transition (Fig. 1e , green arrow), which preserves the wavenumber and generates new positive and negative frequencies, efficiently translating the entire frequency spectrum of a broadband input wave, and conserving the entire spatial structure of the pulse. These features are clearly observed in our time-domain experimental measurements (Fig. 1f ): we excite the TLM with an input signal consisting of an asymmetric pair of Gaussian pulses, measured by the input port voltage V 1 ( t ) as a first smaller pulse (yellow marker) followed by a larger one (purple marker). Approximately 15 ns after the activation of the switches (Fig. 1f (middle); Methods provides the timing details), we record the TR signal at the input port, whose zoomed-in view reveals it to be the TR copy (purple→yellow) of the input. This TR signal has inverted polarity with respect to the input signal, indicating that the TR coefficient is negative, as expected from the scattering coefficients for a reduction in wave impedance achieved by connecting the lumped capacitors in our TLM (Supplementary Section 5 shows the derivation): $$R_{1 \to 2} = \frac{{Z_2\left( {Z_2 - Z_1} \right)}}{{2Z_1^2}}$$ (1) where in our system, Z 1 ≈ 50 Ω is the line impedance before switching and Z 2 ≈ 25 Ω is the line impedance after switching. Approximately 140 ns later, an attenuated signal is received at the input port, corresponding to the time-refracted signal that has been travelling to the end of the TLM and then spatially reflected backwards at the mismatched termination. As expected, this second signal has inverted symmetry compared with the TR signal (yellow→purple). The TR signal, as well as the time-refracted signal at the output port V 2 (Fig. 1f , bottom), retain the same spatial profile as the incident signal due to the preserved spatial symmetry, but they slow down as they travel in a line with increased effective permittivity. This phenomenon underpins the broadband and efficient frequency translation process abruptly occurring at the time interface, based on which each frequency component of the input signal is transformed according to ω 1 → ω 2 = ( Z 2 / Z 1 ) ω 1 (Supplementary Section 5 shows the derivation). In Fig. 2a , we retrieve the TR ( R ) and time-refraction ( T ) coefficients by performing a Fourier analysis on the measured output signals V 1 and V 2 (Fig. 
1f and Supplementary Section 4 ); the retrieved amplitudes (top) and phases (bottom) of the temporal scattering coefficients as a function of the input wavevector k (lower horizontal axis) and corresponding input frequency (upper horizontal axis) agree well with our theoretical predictions assuming an instantaneous switching event (dashed lines). As expected, the time-refracted signal is in phase with the input, whereas the TR signal flips sign for all the input momenta of our broadband pulse spectrum. The slight wavevector dependence of the scattering coefficients at higher frequencies can be attributed to the finite switching speed (Supplementary Sections 9 and 10 show an investigation of the switching rise time), the frequency dispersion associated with non-idealities of the circuit components, as well as the finite spacing between neighbouring switches. Fig. 2: Spectral analysis of photonic TR. Schematic of the negative (top row) and positive (bottom row) switching of the effective impedance. Here f 1 (blue) and f 2 (red) denote the wave frequencies when the switches are open and closed, respectively. a , b , Amplitude (top) and phase (bottom) of the spectra of the measured (shaded) and theoretical (dashed) time-refraction ( T ) and TR ( R ) coefficients on a decrease ( a ) and increase ( b ) in impedance, measured from the Fourier transforms of the respective scattered pulses. Note the π-phase shift in TR ( b ), contrasting with the TR coefficients derived in the literature when switching the medium permittivity. c , d , Carrier-wave measurements of incoming (blue), TR (purple) and time-refracted (yellow) signals in time (left column) and frequency (right column) domains, showing the broadband redshift ( c ) and blueshift ( d ) induced by the time interface. e , f , Theoretical (dashed) and measured broadband redshift ( e ) and blueshift ( f ) of the outgoing TR (dot–dashed purple line) and time-refracted (continuous yellow line) signals induced by the time interface, obtained by scanning the carrier frequency of a narrowband input signal. The purple circles and yellow squares denote the scenarios in c and d . Source data Full size image Our results not only demonstrate efficient TR at a time interface but also provide evidence of a new form of boundary conditions associated with time interfaces. Our platform enables fast and efficient impedance changes by adding reactance to, and removing it from, the TLM through switches, rather than modifying the reactance in time. Since the involved additional capacitors are static, this operation enables much faster transitions, without heavy requirements on energy, and hence addresses the challenges pointed out in another work 23 . In turn, our time-switched TLM does not necessarily conserve charge at the temporal boundary, different from the common assumption in the existing literature on time interfaces 9 , 10 , 17 , 28 , 29 . When closing the switches and connecting the capacitors, we do preserve the total charge in the TLM, which ensures continuity of the displacement field D . In contrast, when we open the switches, we abruptly cut off the charged capacitors from the TLM, creating a charge discontinuity as voltage is preserved. In other words, a new boundary condition needs to be introduced at this type of time interface, in which the electric field E and not D is conserved.
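To make the two boundary conditions concrete, the following compact derivation—our reconstruction from the circuit picture above, with the full treatment in the paper's Supplementary Section 5—shows how each conservation law fixes the temporal scattering coefficients; the second case yields the coefficient stated next.

```latex
% Our reconstruction of the temporal scattering coefficients implied by the two
% switching events, for a unit-amplitude forward wave and line impedances
% Z_1 (switches open) and Z_2 (switches closed), with Z = \sqrt{L/C}.
\begin{align*}
\text{Closing } (C_1 \to C_2;\ q = CV \text{ and } \varphi = LI \text{ conserved}):\quad
 & V \to \frac{C_1}{C_2}\,V = \Big(\frac{Z_2}{Z_1}\Big)^{2} V, \qquad I \to I,\\
 V^{\pm} = \tfrac{1}{2}\Big[\Big(\tfrac{Z_2}{Z_1}\Big)^{2} \pm \tfrac{Z_2}{Z_1}\Big]
 &= \frac{Z_2\,(Z_2 \pm Z_1)}{2Z_1^{2}}
 \;\Longrightarrow\; R_{1\to 2} = V^{-} = \frac{Z_2\,(Z_2 - Z_1)}{2Z_1^{2}} ,\\[4pt]
\text{Opening } (V \text{ and } I \text{ conserved; charge discontinuous}):\quad
 & V^{+} + V^{-} = 1, \qquad \frac{V^{+} - V^{-}}{Z_1} = \frac{1}{Z_2}\\
 &\Longrightarrow\; R_{2\to 1} = V^{-} = \frac{Z_2 - Z_1}{2Z_2} .
\end{align*}
```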
This modified temporal boundary condition leads to new scattering coefficients (Supplementary Section 5 ): $$R_{2 \to 1} = \frac{Z_2 - Z_1}{2Z_2}$$ (2) which is different from equation ( 1 ). Indeed, our experiments confirm a close agreement between the experimentally retrieved scattering spectra (Fig. 2b ) for the charge-discontinuous time interface and the predictions given by equation ( 2 ), and unveil the importance of considering the specific microscopic dynamics of a temporal interface to correctly predict its resulting temporal scattering. To quantify the broadband nature of frequency translation at our photonic time interfaces, we carried out temporal scattering experiments with relatively narrowband input signals (bandwidth, ~30 MHz) at time interfaces featuring increasing (decreasing) impedance (Fig. 2c,d ). We observe a clear redshift (blueshift) in the carrier frequency from f 1 = 60.0 MHz to f 2,r = 34.5 MHz and f 2,t = 33.6 MHz ( f 2 = 33.6 MHz to f 1,r = 49.5 MHz and f 1,t = 50.1 MHz), accompanied by a shrinking (broadening) of the pulse width from Δ f 1 = 21.0 MHz to Δ f 2,r = Δ f 2,t = 16.0 MHz (Δ f 2 = 29.1 MHz to Δ f 1,r = 48.3 MHz and Δ f 1,t = 42.0 MHz). In Fig. 2e,f , we sweep the input carrier wavenumbers k (bottom axes), or equivalently the input frequencies (top axes), for the two switching scenarios, and observe the output frequency (vertical axes). When activating the time interface by closing the switches and decreasing the wave impedance (Fig. 2e ), the centre frequency of both TR ( f 2,r ) and time-refracted ( f 2,t ) waves is redshifted almost uniformly by ~55%, over a range of input carrier frequencies spanning the interval of 30–70 MHz (Supplementary Section 6 shows the details on frequency translation measurements). As control experiments, we also measured the frequency upconversion by the reversed time interface (Fig. 2f ). When scanning the input frequency from 20 to 34 MHz, the observed blueshift exactly mirrors the process demonstrated in Fig. 2e . We stress that the bandwidth and linearity of the frequency conversion process at a time interface are only limited by the dispersion of our TLM, as opposed to conventional narrowband frequency conversion processes, opening exciting opportunities in a wide range of photonic applications. By combining multiple time interfaces, it is possible to leverage TR-induced interference to more markedly manipulate the input signals. For example, by combining two time interfaces, we form a temporal slab, realized by closing and reopening the switches after a delay. These sequential TR events induce temporal wave interference. Due to causality, scattering phenomena at a temporal slab are markedly different from those in a spatial slab: whereas multiple scattering events occur in a spatial slab (Fig. 3a ), generating a superposition of refracted and reflected waves, a temporal slab produces a total of four scattered waves after the second time interface (Fig. 3b ). To experimentally probe the properties of the temporal slab, we launch broadband signals into our TLM (Fig. 3c , black region), and observe the total time scattering (coloured portions). For each duration of the temporal slab ( τ = 15, 25 and 35 ns; Supplementary Section 7 shows the definition of τ ), we indeed observe the two expected reflected pulses, each having undergone one TR and one time-refraction process, in opposite order.
As opposed to conventional photonic time interfaces 9 , 10 , 29 and recent theoretical work on temporal slabs 17 , 28 , both TR signals here are out of phase with respect to the input signal, due to the different temporal boundary condition discussed above and shown in the phase spectra in Fig. 2a,b . Importantly, all the scattered pulses have approximately the same duration as the input signal, since the second time interface converts the frequency spectrum back to the original frequencies, corresponding to two opposite vertical transitions in the dispersion diagram. Fig. 3: Wave scattering from a temporal slab. a , b , Conceptual sketches of a spatial ( a ) and temporal ( b ) slab with stepped wave impedance. Inside a spatial slab, multiple partial reflections occur, gradually decaying with increasing scattering orders, whereas in a temporal slab, only four scattered waves interfere with each other. c , Experimentally measured voltages at the input ( V 1 ) and output ( V 2 ) ports after temporal slabs with varying durations. Here τ ∈ [15, 25, 35] ns is the ‘ON’ time of the control signal, and the corresponding logic states for the switches are indicated by the green blocks. In each plot, the double reflection induced by the slab (cyan, purple and magenta portions) is clearly visible. The elapsed time between the reflected pulses is proportional to the slab duration. d , Normalized amplitudes of the total TR signals as a function of wavenumber k for the different temporal slabs in c , exhibiting zero reflection at the selected frequencies, controlled by the slab duration. e , For fixed k (dashed vertical line in d ), we measure the normalized reflection amplitude as a function of switching duration τ , demonstrating large, continuous spectral tunability. The blue dots represent the average value obtained from five separate measurements, whereas the range (shaded region) shows the maximum and minimum values among the measurements. Source data Full size image As evident in Fig. 3c , the time delay between the consecutive scattered pulses is proportional to the slab duration. This suggests that the temporal slab can be tuned to control the wave interference, thereby realizing the temporal analogue of a Fabry–Pérot etalon that enables the accurate shaping of the output frequency content. To explicitly demonstrate this effect (Supplementary Section 7 ), we take the Fourier transform of the TR signal V r ( t ), and normalize it against the transform of the input signal V i ( t ). This reveals the total reflection spectra (Fig. 3d ) of the different temporal slabs as functions of input wavenumber k (bottom horizontal axis), or equivalently, the input frequency f 1 (top axis). For each slab duration, specific values of k feature zero reflection, due to the destructive temporal interference between two TR waves, in analogy with the reflection zeroes of a Fabry–Pérot cavity. In this case, however, the associated phase accumulation does not occur in space, but rather in time. To further highlight the versatile spectral control offered by our temporal slab through temporal interference (Fig. 3e ), we examine the total TR at a fixed input wavenumber of k = 1.8 rad m –1 (Fig. 3d , dashed vertical line), as we increase the slab duration, for six pulses of different half-maximum durations ranging from 5 to 10 ns, comparing the measured and theoretical results. 
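The tunable reflection zeros of the temporal slab can also be reproduced with elementary signal processing. In the toy sketch below, the two TR echoes are modelled as two copies of the same pulse, delayed by dt and with opposite sign; the equal-and-opposite amplitudes, the pulse parameters and the delay values are illustrative placeholders, not the measured transfer function of the slab.

```python
import numpy as np

# Toy "time Fabry-Perot": total TR = two delayed, sign-flipped pulse copies.
fs = 10e9                                    # sample rate, 10 GS/s
t = np.arange(0, 400e-9, 1 / fs)
pulse = np.exp(-((t - 50e-9) / 8e-9) ** 2) * np.cos(2 * np.pi * 45e6 * t)
f = np.fft.rfftfreq(t.size, 1 / fs)
band = (f > 20e6) & (f < 80e6)               # band where the input has energy
in_spec = np.abs(np.fft.rfft(pulse))[band]

for dt in (15e-9, 25e-9, 35e-9):             # delay between the two TR echoes
    echo = np.roll(pulse, int(round(dt * fs)))
    r_spec = np.abs(np.fft.rfft(pulse - echo))[band] / in_spec
    f_null = f[band][np.argmin(r_spec)]      # destructive-interference zero
    print(f"dt = {dt * 1e9:2.0f} ns -> total-TR null near {f_null / 1e6:5.1f} MHz")
```

As in the experiment, the null frequencies shift continuously with the delay between the two echoes, i.e. with the slab duration, because the interference phase accumulates in time rather than in space.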
We observe how the total amplitude of the time-reversed waves can be continuously tuned by varying τ, granting us dynamic control over wave interference at the two time interfaces without having to change the lumped capacitance values. In Supplementary Section 11, we also consider the case of an inverse slab, where the switches are first opened and then closed again. To conclude, in this work we have reported the observation of photonic TR for broadband and efficient phase conjugation and frequency translation at single and double time interfaces, as well as the demonstration of controlled time-reversal-induced interference phenomena from a temporal slab formed by a pair of time interfaces. These results establish the key building blocks towards the realization of time metamaterials and photonic time crystals, opening a wide range of opportunities in the rising field of time-varying photonic media 26, 29, 30, with applications in ultrafast wave routing 19 and focusing 15, 22, negative refraction 31, 32, efficient and broadband spectral filtering and frequency manipulation 33, novel forms of ultrafast energy mixing 34, 35, and photonic Floquet matter 4, 24. Our approach of realizing time interfaces using external reactive elements added and removed through switches is key to these demonstrations, and it can be straightforwardly configured to simultaneously introduce spatial and temporal interfaces, for instance by activating only a portion of the switches, thus blending spatial and temporal degrees of freedom and enabling even more flexibility in wave control and manipulation. Field-programmable gate arrays controlling the switches may realize real-time reconfigurability and even self-adaptation of the response. More broadly, our results open a pathway to employ time interfaces for broadband, efficient phase conjugation and frequency conversion arising over very short timescales, which is of great relevance for a variety of applications in electromagnetics and photonics. Efficient time reversal is important in the context of wireless communications and radar technologies, for instance in channel estimation, which is currently performed through complex digital computations that characterize the propagation channel and compensate for signal distortion and dispersion. Broadband, efficient frequency conversion is also key for applications spanning from night-vision systems to quantum photonics. Of particular interest would be to extend these concepts to higher frequencies, and we envision several available routes—from modern complementary metal–oxide–semiconductor technology (which may deliver switching speeds up to two orders of magnitude faster than those reported here, extending the frequency range to the terahertz regime) to all-optical approaches leveraging giant nonlinearities in graphene (which offer low switching power and times as short as 260 fs (ref. 36)) or flash ionization in plasmas with even shorter switching times 37. Methods Simulations All the circuit simulations are performed in Keysight Advanced Design System. Time-domain simulations are carried out using the Transient Solver, whereas frequency-domain analysis is done with the S-Parameter Solver. The main TLM carrying the signal is modelled with physical transmission-line sections with characteristic impedance Z0 = 50 Ω, length d = 0.208 m, effective dielectric constant εr,eff = 8.36, attenuation constant α = 0.5 (at f = 100 MHz) and dielectric loss tangent tan δ = 0.0019.
It should be noted that the effective dielectric constant of 8.36 is consistent with expectations from well-known design formulas for microstrip lines. To further model the non-idealities associated with our manufactured circuit, each unit cell is connected to its nearest neighbour with a 2 pF capacitor, and to its second-nearest neighbour with a 1 pF capacitor. This models the parasitic capacitive coupling between the unit cells, which limits our operating bandwidth by introducing dispersion and bandgaps. These values are found to produce consistently good matches between the experimental and simulation results. The switches are implemented as ideal voltage-controlled reflective switches with 1 Ω resistance in the ON state. Each switch connects a node on the transmission line to a series RLC circuit, with an 82 pF capacitor representing the load, a 4 Ω resistor representing the parasitic losses and an 8 nH inductor representing the parasitic inductance of the grounding vias. These values are extracted from measurements of a single unit cell: the parasitic component values are swept in frequency-domain simulations until a good match with the measured S-parameters is observed. Experiments The TLM sample is fabricated in-house using a 1.52-mm-thick Rogers TMM 13i substrate, which has a nominal dielectric constant of 12.85 ± 0.35 and a loss tangent of 0.0019. Supplementary Sections 1 and 2 give the details of the circuit layout and the choices of all the circuit components. During our measurements, the input signal consists of a train of repeated copies of an arbitrarily shaped pulse. Since the input pulses generally have durations on the order of tens of nanoseconds, which is much shorter than the period of the pulse train (>1 μs), we can effectively treat each pulse as a single isolated event. The signal entering the TLM through the input port is probed with a T-connector attached to an oscilloscope, whereas the exiting signals are probed directly with the scope. Supplementary Section 2 shows a simplified schematic of the experimental setup. To control the switches, we use a rectangular pulse train that is phase-locked to the input pulse train. By adjusting the relative phase between the two signals, we can control the timing of the interface. The switches that we used have internal grounding circuitry which, on opening, dissipates all the charge stored on the capacitors. This is crucial for accurately realizing our two distinct types of time interface. To induce temporal reflection, we activate the switches after the input signal is registered in Vo1(t), but before it is recorded in Vo2(t), corresponding to a time when the signal is completely contained within the TLM. The TR signal travels backwards towards port 1 of the transmission line, where it again enters the input T-connector. Identical copies of the TR signal (with 3.5 dB attenuation) are fed to the source and the oscilloscope. Hence, the actual amplitude of the temporal reflection generated by our time interface is larger than that recorded at port 1. The Supplementary Information details the effects of the T-connector and impedance mismatch. To measure the frequency translation of our time interface, we launched signals with varying carrier frequencies into the transmission line, and observed the TR as well as the refracted signals. Supplementary Sections 6 and 12 provide a typical time-domain measurement result and an accounting of video leakage.
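The requirement that the pulse be fully contained in the line when the switches fire can be checked with a quick estimate. In the sketch below, the total meander length of ~6 m is taken from the accompanying news description rather than from the Methods, and the pulse duration is an illustrative "tens of nanoseconds" value; only the effective dielectric constant comes from the simulation parameters above.

```python
# Back-of-envelope check of the usable switching window.
c = 299_792_458.0                 # speed of light (m/s)
eps_eff = 8.36                    # effective dielectric constant (Methods)
v = c / eps_eff ** 0.5            # phase velocity on the microstrip line
L_total = 6.0                     # total meander length (m), from news piece
t_transit = L_total / v           # time for the pulse front to cross the line
t_pulse = 30e-9                   # illustrative input pulse duration (s)
window = t_transit - t_pulse      # interval with the pulse fully inside
print(f"v = {v / 1e8:.2f}e8 m/s, transit = {t_transit * 1e9:.0f} ns, "
      f"usable switching window = {window * 1e9:.0f} ns")
```

The resulting transit time of roughly 60 ns leaves a window of a few tens of nanoseconds in which the phase-locked control train can trigger the interface.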
We performed a Fourier analysis on the time-gated incident, refracted and reflected signals to observe their individual frequency content. The frequency spectrum of each signal consists of a sharp peak aligned with its centre frequency. Data availability All data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability The codes that support the findings of this study are available from the corresponding author upon reasonable request. | When we look in a mirror, we are used to seeing our faces looking back at us. The reflected images are produced by electromagnetic light waves bouncing off the mirrored surface, creating the common phenomenon called spatial reflection. Similarly, spatial reflections of sound waves form echoes that carry our words back to us in the same order we spoke them. Scientists have hypothesized for more than six decades the possibility of observing a different form of wave reflections, known as temporal, or time, reflections. In contrast to spatial reflections, which arise when light or sound waves hit a boundary such as a mirror or a wall at a specific location in space, time reflections arise when the entire medium in which the wave is traveling suddenly and abruptly changes its properties across all of space. At such an event, a portion of the wave is time reversed, and its frequency is converted to a new frequency. Until now, this phenomenon had never been observed for electromagnetic waves. The fundamental reason for this lack of evidence is that the optical properties of a material cannot be easily changed at a speed and magnitude that induces time reflections. Now, however, in a newly published paper in Nature Physics, researchers at the Advanced Science Research Center at the CUNY Graduate Center (CUNY ASRC) detail a breakthrough experiment in which they were able to observe time reflections of electromagnetic signals in a tailored metamaterial. "This has been really exciting to see, because of how long ago this counterintuitive phenomenon was predicted, and how different time-reflected waves behave compared to space-reflected ones," said the paper's corresponding author Andrea Alù, Distinguished Professor of Physics at The City University of New York Graduate Center and founding director of the CUNY ASRC Photonics Initiative. "Using a sophisticated metamaterial design, we were able to realize the conditions to change the material's properties in time both abruptly and with a large contrast." This feat caused a significant portion of the broadband signals traveling in the metamaterial to be instantaneously time reversed and frequency converted. The effect forms a strange echo in which the last part of the signal is reflected first. As a result, if you were to look into a time mirror, your reflection would be flipped, and you would see your back instead of your face. In the acoustic version of this observation, you would hear sound similar to what is emitted during the rewinding of a tape. The researchers also demonstrated that the duration of the time-reflected signals was stretched in time due to broadband frequency conversion. As a result, if the light signals were visible to our eyes, all their colors would be abruptly transformed, such that red would become green, orange would turn to blue, and yellow would appear violet. To achieve their breakthrough, the researchers used engineered metamaterials.
They injected broadband signals into a meandered strip of metal that was about 6 meters long, printed on a board and loaded with a dense array of electronic switches connected to reservoir capacitors. All the switches were then triggered at the same time, suddenly and uniformly doubling the impedance along the line. This quick and large change in electromagnetic properties produced a temporal interface, and the measured signals faithfully carried a time-reversed copy of the incoming signals. The experiment demonstrated that it is possible to realize a time interface, producing efficient time reversal and frequency transformation of broadband electromagnetic waves. Both these operations offer new degrees of freedom for extreme wave control. The achievement can pave the way for exciting applications in wireless communications and for the development of small, low-energy, wave-based computers. "The key roadblock that prevented time reflections in previous studies was the belief that it would require large amounts of energy to create a temporal interface," said Gengyu Xu, the paper's co-first author and a postdoctoral researcher at CUNY ASRC. "It is very difficult to change the properties of a medium quick enough, uniformly, and with enough contrast to time reflect electromagnetic signals because they oscillate very fast. Our idea was to avoid changing the properties of the host material, and instead create a metamaterial in which additional elements can be abruptly added or subtracted through fast switches." "The exotic electromagnetic properties of metamaterials have so far been engineered by combining in smart ways many spatial interfaces," added co-first author Shixiong Yin, a graduate student at CUNY ASRC and at The City College of New York. "Our experiment shows that it is possible to add time interfaces into the mix, extending the degrees of freedom to manipulate waves. We also have been able to create a time version of a resonant cavity, which can be used to realize a new form of filtering technology for electromagnetic signals." The introduced metamaterial platform can powerfully combine multiple time interfaces, enabling electromagnetic time crystals and time metamaterials. Combined with tailored spatial interfaces, the discovery offers the potential to open new directions for photonic technologies, and new ways to enhance and manipulate wave-matter interactions. | 10.1038/s41567-023-01975-y |
Physics | Optical force-induced self-guiding light in human red blood cell suspensions | Rekha Gautam et al. Optical force-induced nonlinearity and self-guiding of light in human red blood cell suspensions, Light: Science & Applications (2019). DOI: 10.1038/s41377-019-0142-1 I. M. Vellekoop et al. Exploiting disorder for perfect focusing, Nature Photonics (2010). DOI: 10.1038/nphoton.2010.3 Roarke Horstmeyer et al. Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue, Nature Photonics (2015). DOI: 10.1038/nphoton.2015.140 Roadmap on structured light. Journal of Optics. iopscience.iop.org/article/10. … 978/19/1/013001/meta Journal information: Nature Photonics , Light: Science & Applications | http://dx.doi.org/10.1038/s41377-019-0142-1 | https://phys.org/news/2019-03-optical-force-induced-self-guiding-human-red.html | Abstract Osmotic conditions play an important role in the cell properties of human red blood cells (RBCs), which are crucial for the pathological analysis of some blood diseases such as malaria. Over the past decades, numerous efforts have mainly focused on the study of the RBC biomechanical properties that arise from the unique deformability of erythrocytes. Here, we demonstrate nonlinear optical effects from human RBCs suspended in different osmotic solutions. Specifically, we observe self-trapping and scattering-resistant nonlinear propagation of a laser beam through RBC suspensions under all three osmotic conditions, where the strength of the optical nonlinearity increases with osmotic pressure on the cells. This tunable nonlinearity is attributed to optical forces, particularly the forward-scattering and gradient forces. Interestingly, in aged blood samples (with lysed cells), a notably different nonlinear behavior is observed due to the presence of free hemoglobin. We use a theoretical model with an optical force-mediated nonlocal nonlinearity to explain the experimental observations. Our work on light self-guiding through scattering bio-soft-matter may introduce new photonic tools for noninvasive biomedical imaging and medical diagnosis. Introduction When a light beam enters a turbid medium such as blood or other biological fluids, it experiences multiple scattering and loses its power and original directionality. This issue has hampered many applications where transmission, focusing, or imaging through scattering media is desirable, which also motivated a great deal of interest in developing new methods and techniques to overcome these hurdles 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 . In particular, for medical and biological applications, low-loss transmission of light is a requisite for deep-tissue imaging, localized laser treatments, and internal optical control of particles or microrobots for medical diagnosis and treatment, to name a few 9 , 10 , 11 , 12 , 13 , 14 , 15 . Until now, significant efforts have been made to mainly enhance light transmission by understanding light-matter interactions in the linear optical domain 16 , 17 , but it remains largely unexplored in the nonlinear domain. In fact, it is commonly thought that light in biological environments only exhibits a negligible or weak optical nonlinearity, so the required power must be very high to change the refractive index of a biological medium with a laser beam, which can cause photodamage. 
In recent years, ultrafast laser pulses have been used to induce typical nonlinear effects, such as multiphoton excitation, second- and third-harmonic generation, and Kerr effects, to obtain better resolution and deeper penetration through scattering bio-media 18, 19, 20. Quite recently, we have achieved low-loss propagation of light through bacterial suspensions by exploiting the nonlinear optical properties of marine bacteria while demonstrating that the viability of the cells remains intact 21. This achievement clearly indicates that, even with a continuous-wave (CW) laser operating at relatively low intensities, biological media can exhibit appreciable optical nonlinearity. Human red blood cells (RBCs) in normal conditions are disc-shaped malleable cells, approximately 8 µm in diameter and 2 µm in thickness, which have a spatially uniform refractive index because they lack nuclei and most organelles 22, 23, 24. To enable passage through veins and narrow microcapillaries, RBCs exhibit distinctive deformability following the application of an external force. Deformation can also be elicited by modifying the liquid buffer osmolarity. As an exemplary application of such a unique feature, it has been demonstrated that RBCs can be used as tunable optofluidic microlenses 22. Important to both in vitro and in vivo disease diagnostics, the optical properties of RBCs depend on the shape and refractive index of the cells. The RBC refractive index is mainly determined by hemoglobin (Hb), which is the largest part of the erythrocyte dry content by weight 25. The refractive index increases if the cell volume decreases with varying osmotic conditions 24, 26, 27, 28, 29, 30. The physical properties of RBCs involving cell size and shape are often closely related to pathophysiological conditions such as sickle cell anemia, malaria, and sepsis 31, 32, 33, 34, 35. Beyond this intrinsic fundamental interest, the ability of RBCs to react to changes in different osmotic environments makes them ideal candidates for the study of light scattering by varying the refractive index and shape of the cells 29, 32, 33. In this work, we study the optical nonlinearity of RBCs in dynamic, monodisperse, colloidal suspensions under different osmotic conditions, and we demonstrate the nonlinear self-trapping of light over centimeter propagation distances through scattering RBC suspensions. If there were no nonlinearity in such suspensions, a passing laser beam would linearly diffract regardless of the optical power. However, on increasing the laser beam power to only a few hundred mW (as opposed to 3 W in colloidal suspensions of marine bacteria, which feature a much smaller cell size 21, 36), we observe that the beam dramatically self-focuses under all three osmotic conditions and forms a self-trapped channel, similar to an optical spatial soliton 37. Interestingly, the optical nonlinearity is also tunable via osmosis, and we find that in fresh blood samples, the nonlinearity increases with the osmotic pressure outside the cells. Intuitively, one can understand such nonlinear beam dynamics from the optical forces that act on the RBCs: the gradient force attracts the RBCs towards the beam center, while the scattering force pushes them forward. When there are sufficiently many cells along the beam path, an effective waveguide is formed, as RBCs have a higher index of refraction than the background buffer solution.
The theoretical analysis of this behavior is not trivial because the cells do not have a fixed shape, size, or refractive index. In addition, the situation can be further complicated by lysed RBCs in aged samples 38, where free Hb plays an active role in determining the value of the optical nonlinearity, along with strong thermal effects. Nevertheless, a theoretical model for the optical force-mediated nonlocal nonlinearity is proposed to analyze the beam dynamics, and its results are qualitatively consistent with our experimental observations. The enhanced transmission of light through scattering blood cells driven by optical forces may find applications in medical diagnostics. For example, the changes in optical forces with cell density and morphology can provide a powerful noninvasive tool to sort different cells according to the stage of a given disease 27. Results The typical experimental results for nonlinear beam propagation in RBC suspensions are presented in Fig. 1, where the plots in (a–c) illustrate the propagation of a laser beam through an RBC suspension under different osmotic conditions. The focused laser beam (λ = 532 nm) normally diffracts in the phosphate-buffered saline (PBS) background solution alone (i.e., without RBCs) and exhibits no nonlinear self-action at any tested laser power (10–700 mW). However, when the laser power increases, a nonlinear optical response is observed in the RBC suspensions for all three osmotic conditions (isotonic, hypotonic, and hypertonic), which causes the self-trapping of the light beam and the formation of an optically induced biological waveguide 21. The side-view propagation of such a self-trapped optical beam is shown in Fig. 1d, and an animation of the dynamical process is shown in the Supplementary Material. Careful measurements show that RBCs suspended in different osmotic solutions exhibit optical nonlinearities of different strengths: a laser beam with the same input size requires different laser powers to achieve self-trapping. As shown in Figs. 1e–h, the beam first normally diffracts at a low power of 10 mW and experiences strong scattering due to a random distribution of nonspherically shaped RBCs. In the isotonic solution (n ~1.42 for RBCs) 22, 39, the optimal self-trapping of light (i.e., focusing to the smallest possible spot size) is observed when the beam power reaches ~300 mW (Fig. 1i). Interestingly, nonlinear self-trapping occurs at a slightly higher power of 350 mW in the hypotonic solution (n ~1.38) 22, 39 (Fig. 1j), whereas it happens at a slightly lower power of 200 mW (Figs. 1k, l) in the hypertonic solution (n ~1.44) 22, 39. These results clearly show that RBCs have an appreciable nonlinear response, and that the optical nonlinearity is tunable, since it changes as the optical forces vary under different osmotic conditions. We emphasize that this is not simply the result of better laser transmission through scattering media at higher powers, since both the optical gradient and scattering forces play an active role in the self-trapping, as elaborated below. Fig. 1: Self-trapping of light through human RBC suspensions under different osmotic conditions. a – c Illustrations of the beam dynamics in ( a ) isotonic, ( b ) hypotonic, and ( c ) hypertonic suspensions. d Side-view image of a self-trapped beam. e – g Observed output intensity patterns at a low power, which show the linear diffraction and strong scattering of the laser beam.
i – k Corresponding patterns at a high power, which show the beam localization due to nonlinear self-trapping. h , l 3D plots of the intensity patterns corresponding to ( g , k ), respectively Full size image To show that the observed phenomenon arises from nonlinear self-focusing, we measure the normalized transmission (output/input power) as a function of the input beam power. These results are summarized in Fig. 2a, where for direct comparison we intentionally set the initial transmission to be 1.00 for all conditions; the linear propagation in the PBS-only solution serves as the control data. Clearly, different nonlinear upward trends of the normalized transmission are observed for the different osmotic conditions, which indicates dissimilar strengths of the optical nonlinearity and self-focusing of the optical beam. The initial transmission varies for the RBCs under the three different osmotic conditions because of differences in absorption and shape-dependent scattering 22, as detailed in the Supplementary Material. Figure 2b illustrates the changes in output beam size, as a function of the beam power, due to nonlinear self-action in the different osmotic solutions of fresh blood samples. The beam size remains fairly constant in the background medium without RBCs when the power increases, and exhibits no appreciable self-focusing. However, in all three RBC suspensions, when the power increases, the beam size first dramatically decreases due to nonlinear self-trapping and subsequently reaches a minimum at a few hundred mW before starting to increase again due to thermal effects 21, 40, 41. Specifically, in the hypotonic solution, the RBCs are in a "swollen" state, so their effective refractive index decreases as the water-to-Hb ratio increases. In this case, the power required to focus the beam to its smallest spot size (~350 mW) is higher than that for the other two cases, and the power transmission is approximately 28% after propagating approximately 3 cm through the RBC suspension. In contrast, in the hypertonic solution, where the RBCs are in a "shrunk" state, the cells become denser, and the effective refractive index increases due to a reduced water-to-Hb ratio. In this case, the beam has a lower threshold power for self-trapping (~200 mW), since both the optical gradient and scattering forces are higher than in the other two cases. The power transmission is only approximately 20%, since the suspension becomes comparatively more turbid, although the number of cells in the suspension is approximately the same. As expected, in the "normal" state of the isotonic condition, the RBCs show an intermediate behavior with respect to the self-trapping threshold power and normalized transmission. Surprisingly, the experiments performed using the same blood samples after the cells had been stored for 2 weeks exhibited notably different outcomes (Fig. 2c), particularly for the hypotonic suspensions, as we discuss below. It is important to note that for the fresh sample, the health of the RBCs was assessed after the laser illumination, and no significant photodamage was observed for the laser powers used in our experiments, as discussed in the Supplementary Material. Fig. 2: Normalized transmission and output beam size as a function of input power. a Measurement of the normalized transmission and b output beam size change in fresh RBC suspensions of different buffer solutions.
The cyan (triangle) curve depicts the results obtained from the PBS background solution without RBCs as a reference, which indicates no appreciable self-action of the beam in the buffer solution itself. The blue (circle), red (square), and green (diamond) curves show the data obtained from RBC suspensions in hypertonic, isotonic, and hypotonic solutions, respectively, where the error ranges in (b) are indicated by the shaded regions. c Corresponding results from the same blood sample but after the RBCs have been stored in a refrigerator for two weeks, where the nonlinear focusing is dramatically enhanced in the hypotonic solutions Full size image To further corroborate the above argument about the difference in optical force-mediated nonlinearity in RBC suspensions under varying osmotic conditions, we used optical tweezers to directly observe the cell motion under a microscope (NA = 1.3) and to perform direct force measurements on single cells in the trap. The typical experimental results are presented in Fig. 3 with a 960-nm trapping laser, where the top panels are snapshots from the videos recorded for directional cell movement (marked by red arrows) under the action of the laser beam (see Supplementary Movies), and the bottom panels depict the trap stiffness calculated using the standard optical tweezer tool 42, 43. Overall, since the RBCs in all osmotic conditions have a larger index of refraction than the background medium (n = 1.334) 22, they are pulled towards the beam center by the gradient force. For suspended particles or cells with positive polarizability (such as RBCs), a larger refractive index contrast corresponds to a greater polarizability and consequently a larger gradient force. Therefore, the gradient force exerted on the RBCs follows the trend hypertonic > isotonic > hypotonic, as directly measured with the optical tweezers (Figs. 3d–f). By examining the motion of the trapped cells in the videos (Figs. 3a–c), all of which were taken under identical trapping conditions, we found that the attraction due to the gradient force exerted on the RBCs clearly follows the same trend. Although the 532-nm laser beam (used for the nonlinear propagation experiments of Figs. 1 and 2) clearly exerts a gradient force on the RBCs (see supplementary videos), it cannot create a stable trap for a single cell in our optical tweezer setting, due to the strong scattering force at this wavelength. Thus, for a quantitative stiffness measurement, the infrared wavelength (960 nm) was used in the tweezer setup to directly characterize and compare the optical gradient forces that act on the RBCs under different conditions. To simplify our calculations, we modelled the RBCs as disk-shaped (prolate ellipsoid) objects in isotonic conditions and as spherical objects in the hypotonic and hypertonic conditions, with different average diameters (8.0 µm for isotonic, 9.6 µm for hypotonic, and 6.4 µm for hypertonic cells). Then, the trapping force can be estimated from the Langevin equation 44, 45, where the corner frequency \(f_{c,x} = \kappa_x/(2\pi\gamma)\) and the trapping stiffness \(\kappa_x\) can be evaluated (here, \(\gamma = 6\pi\eta a\) is the particle friction coefficient, \(\eta\) is the viscosity of the solution, and a is the radius of the particle). The results obtained with single trapped cells are plotted in Figs. 3d–f. Although the gradient forces can differ at the 960-nm and 532-nm wavelengths, the force measurement results in Fig.
3 are only used as a guide for comparison, since the videos taken at both wavelengths clearly show that they exhibit identical trends. It is nontrivial to quantitatively measure and compare the forward-scattering force on the RBCs. We can infer that this force also follows the trend hypertonic > isotonic > hypotonic from the measured power transmission and absorption spectrum under different osmotic conditions (see Supplementary Material). Interestingly, we found that with a low numerical aperture (NA = 0.65), where the gradient force is weaker (closer to the conditions of the nonlinear propagation experiment), cells attracted to the beam focus by the 532-nm laser were pushed out of the observation plane by the strong scattering force (see supplementary videos). This result differs from the high-NA setting, where multiple cells were quickly attracted by the gradient force to the focus to form clusters before they could be pushed away. In fact, as we show in the following theoretical analysis, the forward-scattering force plays an essential role in the nonlinear self-guiding of light in biological suspensions 21. Fig. 3: Optical gradient forces on RBCs under different osmotic conditions examined by optical tweezers. a – c Snapshots of RBC movement towards a 960-nm laser beam (position marked by a dashed green circle) in isotonic, hypotonic, and hypertonic solutions, respectively, as observed under a microscope. The red arrows illustrate the directional cell movement (see corresponding Media file in the Supplementary Material ). d – f Power spectrum analyses showing the trap stiffness \(\kappa_x\) of a single RBC from the three suspensions in accordance with ( a – c ), where the vertical dashed lines mark the corner frequency f c . The inset in ( f ) illustrates a single RBC that moves into the trap under the action of the gradient force Full size image To better understand the physics of the optical force-mediated nonlinearity, we have developed a model to simulate the nonlinear beam propagation in bio-soft-matter (see Supplementary Material). Instead of assuming a priori any particular form for the nonlinearity, we let the beam propagate in a dynamic waveguide, which forms due to the spatial variation of the particle concentration. The time evolution of the particle concentration distribution is modeled using a diffusion-advection equation, where the velocity field is determined by the intensity-dependent optical forces. Contrary to previous models, we consider that the particles are affected not only by an optical gradient force but also by a forward-scattering force 21, 46, which pushes the particles along the beam propagation direction 47, 48. Without the scattering force, the mathematical description reduces to the exponential nonlinearity model for nanosuspensions of particles in the steady-state limit 49, 50. However, the inclusion of a strong forward-scattering force causes a fundamentally different nonlinear response that is nonlocal in the propagation direction (where the particle concentration does not necessarily peak at the beam focus). To simulate the nonlinear self-focusing effects under different buffer conditions, we calculated the change in beam size for different gradient and scattering force parameters. The results are shown in Fig. 4, where Fig. 4a is a 3D plot of the beam size as a function of the forces, and Fig. 4b and c illustrate the decrease in beam size when the gradient or the scattering forces are varied.
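The concentration bookkeeping behind such a model can be illustrated with a minimal one-dimensional sketch of the diffusion–advection equation, using an illustrative Gaussian beam and made-up force and diffusion constants; a transverse-only toy of this kind necessarily omits the forward-scattering force along the propagation direction that is central to the full model.

```python
import numpy as np

# 1D sketch of dC/dt = d/dx (D dC/dx - v C), with drift velocity v set by the
# gradient force of a Gaussian beam through Stokes drag (v = F_grad/gamma).
# All constants are illustrative placeholders, not the paper's parameters.
nx, dx, dt = 401, 1e-6, 1e-3                 # grid spacing (m), time step (s)
x = (np.arange(nx) - nx // 2) * dx
D = 5e-11                                    # diffusion coefficient (m^2/s)
w = 15e-6                                    # beam waist (m)
I = np.exp(-2 * x ** 2 / w ** 2)             # normalized intensity profile
alpha = 1e-10                                # gradient-force/drag constant
v = alpha * np.gradient(I, dx)               # drift velocity toward the axis

C = np.ones(nx)                              # uniform initial concentration
for _ in range(20000):                       # ~20 s of evolution
    flux = -D * np.gradient(C, dx) + v * C   # Fickian + advective flux
    C -= dt * np.gradient(flux, dx)          # conservative explicit update

# Steady state approaches C ~ exp(alpha * I / D), i.e. an index "waveguide".
print(f"on-axis enrichment C(0)/C_bulk = {C[nx // 2] / C[0]:.1f}")
```

With the chosen constants the concentration, and hence the effective refractive index, is enhanced on-axis by roughly e², which is the exponential-nonlinearity limit recovered when the forward-scattering force is switched off.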
The dashed white line in Fig. 4a corresponds to the minimal beam diameters achievable in simulations before beam collapse occurs. (In these simulations, we did not include particle-particle interactions, thermal effects, or random scattering 49, 50, which could suppress the collapse.) In the bottom panels, the beam dynamics through isotonic RBC-like particle suspensions with added random scattering are shown for both linear (low power of 10 mW) and nonlinear (high power of 350 mW) propagation, based on parameters extracted from our experimental conditions. The changes in size, volume and refractive index of the RBCs under different osmotic conditions account for the variations in magnitude of the optical forces, which in turn modify the optical nonlinearity 51. These results are qualitatively consistent with the experimental observations. Fig. 4: Simulations of the optical force-induced nonlinear beam dynamics in RBC-like suspensions. a – c Beam size (FWHM) change as a function of the gradient and scattering forces obtained via numerical simulations using a 350-mW input power and neglecting random scattering effects, where one observes the change in beam size when either the gradient or the scattering force is "turned off". d , f Side-view of the beam propagation and e , g corresponding output transverse intensity patterns after propagating through an RBC-like random scattering medium at low ( d , e ) and high ( f , g ) beam power. The beam side-views and output intensity patterns are normalized with respect to their respective maximal input powers Full size image Discussion Finally, we discuss the reproducibility of our experimental results and the role played by free Hb in the optical nonlinearity. The results in Figs. 1 and 2a, b were readily reproduced in fresh blood samples, but the experiments using the same blood samples with identical concentrations, after they had been stored in a refrigerator for a certain time period, had different results. For example, Fig. 2c shows the measurements with the blood sample in Fig. 2b retaken after two weeks. (Initially, the fresh blood sample was centrifuged, and the supernatant was removed to ensure that only the RBCs would remain before the different osmotic solutions were prepared.) Clearly, the beam dynamics vs. input power is very similar in Figs. 2b, c for the isotonic and hypertonic solutions, but the difference is evident for the hypotonic condition: here, the beam self-traps to a significantly smaller size at a much lower input power (150 mW) than in the fresh-sample experiment in Fig. 2b (green curve). After counting the number of RBCs using a hemocytometer, we found that the number of RBCs in the hypotonic buffer is now only one-third of the value in the isotonic and hypertonic buffers, which indicates that most of the hypotonic RBCs were lysed in the old sample and released free Hb. (The increase in membrane stiffness and the hemolysis of RBCs with storage are known, and the cells lyse more easily in the hypotonic solution 38.) To prove that the enhanced nonlinear response from the aged hypotonic RBC suspension can be attributed to the released Hb, fresh RBCs were intentionally lysed in deionized water to prepare solutions at four RBC concentrations. The results are shown in Fig. 5a. Nonlinear self-trapping was clearly realized at a power of ~100 mW at all tested concentrations (2.4, 5.1, 8.6, and 15.0 × 10^6 cells per mL), but a higher concentration caused stronger self-focusing (smaller minimum focused size).
When the power was further increased, the thermal effects dominated, so the beam dramatically expanded. A higher concentration also leads to stronger thermal self-defocusing at 500 mW. Figures 5b–e show the typical output transverse intensity patterns for the self-trapped beam and the formation of thermal defocusing rings at two different concentrations. The measured power transmission is ~70% (50%) for the low (high) concentrations, which is much higher than that in healthy RBC suspensions due to the reduced scattering loss. These results clearly demonstrate that free Hb can exhibit a nonlinear optical response, which results in the enhanced nonlinearity in aged hypotonic RBC suspensions. The underlying mechanism of the Hb-enhanced nonlinear response certainly merits further study. Intuitively, we believe that it is similar to the optical force-induced nonlinearity in plasmonic nanosuspensions 52, 53, 54, since metallic ions are contained in the protein chains of hemoglobin. The measured response time of self-trapping in Hb solutions was ~200 ms at the laser powers used. Thus, we do not think that it is driven by the thermophoresis effect responsible for the "hot-particle" solitons in dielectric nanoparticle suspensions 44, since the latter typically require much higher laser powers and occur at much slower time scales. Fig. 5: Nonlinear optical response of lysed RBCs (free hemoglobin) in water. a Output beam size as a function of input power through the Hb solutions for four different concentrations. The RBC concentrations for the four curves (Hb1-Hb4) are 2.4, 5.1, 8.6, and 15.0 million cells per mL. Nonlinear self-focusing of the beam occurs at ~100 mW for high concentrations of Hb, but the beam subsequently expands into thermal defocusing rings at high powers. b – e Typical output transverse intensity patterns taken for the self-trapped beam ( b , d ) and thermally expanded beam ( c , e ) for low ( d , e ) and high ( b , c ) concentrations Full size image In conclusion, we have studied the nonlinear beam propagation in human RBCs suspended in isotonic, hypotonic, and hypertonic buffer solutions, and we have found that RBCs exhibit a strong self-focusing nonlinearity that can be controlled by the chemistry of the liquid buffer. In particular, the optical nonlinearity can be tuned via osmosis and increases with the osmotic pressure outside the cells in fresh blood samples. In aged blood samples, free hemoglobin from the lysed RBCs plays an active role in the optical nonlinearity and enhances the nonlinear response in hypotonic conditions. From direct video microscopy and optical tweezers measurements, we conclude that the trapping force that acts on the RBCs is strongest in the hypertonic condition and weakest in the hypotonic case. Building further on our prior work, we provide a theoretical model to explain the observed effects. Our demonstration of the controlled nonlinear self-trapping of light in RBC suspensions through their tunable optical nonlinearity can introduce new perspectives to the development of diagnostic tools, and it is very promising for future laser-treatment therapies of blood-related diseases 52, 53. Materials and methods Isolation and preparation of RBC samples The human blood samples in this study were obtained from anonymous donors through the Blood Centers of the Pacific, California. The blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and centrifuged at 3500 RPM for 5 min.
Then, the supernatant was removed, and the remaining RBCs were washed three times in PBS buffer before the RBC suspensions were prepared for our experiment. To study the nonlinear optical response of RBCs suspended in different osmotic conditions, we intentionally let the RBCs disperse in the hypotonic and hypertonic buffers in addition to their natural isotonic condition. For direct comparison, we diluted 50 µL of RBCs in a 4 mL volume of the isotonic, hypotonic, and hypertonic buffers to obtain a final concentration of ~10^5–10^6 cells/mL, as counted by a hemocytometer. As is well known in the literature 22, 28, 55, "normal" RBCs exhibit a biconcave disc shape in the isotonic PBS buffer, "swollen" RBCs take a balloon-like sphere shape in the hypotonic buffer, and "shrunk" RBCs take an irregular spiky shape in the hypertonic buffer (as illustrated in the top row of Fig. 1). The different buffers were prepared by changing the NaCl concentration from 8.2 mg/mL (isotonic) to 4.0 mg/mL (hypotonic) and 14.0 mg/mL (hypertonic) while keeping the concentrations of the three other salts unchanged (KCl: 0.201 mg/mL; Na2HPO4: 1.434 mg/mL; KH2PO4: 0.245 mg/mL). To ensure that there were few or no lysed RBCs that would release Hb into the solutions, the samples were centrifuged for 5 additional minutes at 3500 RPM immediately before the experiment. Nonlinear optical self-trapping experiment In the first set of experiments, a linearly polarized CW laser beam with a wavelength of 532 nm was collimated and subsequently focused (with a lens of 125-mm focal length) into a 3-cm-long glass cuvette filled with the RBC suspensions for the different osmotic conditions described above. In particular, the focused beam was ~28 μm in diameter at the focal point, which was located ~1 cm away from the input facet of the cuvette to avoid heating and surface effects 21. As a consequence, the beam propagated ~2 cm beyond the focal point through the suspension before exiting the output facet. Both the linear and nonlinear outputs from the sample were monitored with a CCD camera and a power detector. To maintain unsaturated image detection, the necessary optical attenuation elements were used to adjust the illumination power before it reached the camera. Furthermore, the beam diameters were measured using the Beamview program after careful calibration. The experimental setup is similar to that in our previous work on beam propagation in colloidal suspensions 21, 49. Force measurement in the optical tweezers experiment In the second set of experiments, a home-built optical tweezers system 42, 43 was used to measure the optical gradient force. In our tweezers setting, a CW laser beam (λ = 960 nm) was coupled into the optical pathway of the microscope and subsequently tightly focused into the sample using an oil immersion objective lens (numerical aperture NA = 1.3). Separate samples containing droplets of the different RBC solutions were sandwiched between a microscope slide and a cover slip. The forward-scattered light from the trapped cells was collected by a condenser lens (NA = 1.2) and subsequently focused onto a position-sensitive detector (PSD). The PSD positional signals were acquired using a custom-made LabVIEW program. The trap stiffness and gradient force were calculated from the corner frequency of the PSD signal using a standard optical tweezers tool 42, 43.
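For concreteness, the stiffness extraction reduces to two lines of arithmetic. The sketch below evaluates κx = 2πγfc with the Stokes drag γ = 6πηa used for the spherical approximation; the corner frequency is a hypothetical readout and η is a water-like buffer viscosity at room temperature, so only the formulas come from the text above.

```python
import math

# Trap stiffness from the corner frequency: kappa_x = 2*pi*gamma*f_c,
# with Stokes drag gamma = 6*pi*eta*a for a sphere of radius a.
eta = 1.0e-3                                   # buffer viscosity (Pa s), water-like
f_c = 50.0                                     # corner frequency (Hz), hypothetical
for label, diameter in (("hypotonic", 9.6e-6), ("hypertonic", 6.4e-6)):
    a = diameter / 2.0                         # cell radius (m)
    gamma = 6.0 * math.pi * eta * a            # friction coefficient (kg/s)
    kappa = 2.0 * math.pi * gamma * f_c        # trap stiffness (N/m)
    print(f"{label}: gamma = {gamma:.2e} kg/s, kappa_x = {kappa * 1e6:.1f} pN/um")
```

For the disk-shaped isotonic cells, the same relation applies with the shape-dependent friction coefficient mentioned next in place of the Stokes value.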
To simplify the calculations, the hypotonic (and hypertonic) RBCs were treated as spherical objects, whereas the isotonic RBCs were treated as disk-shaped (prolate ellipsoid) objects. In the latter case, a shape-dependent "particle" friction coefficient was used for the force calculation 56, 57. In addition, we used a CCD camera to record the cell movements in the different RBC solutions under a microscope with two objectives (NA = 0.65 and NA = 1.3), as driven by the 960-nm laser beam (used in the tweezers experiment for the force calibration) or the 532-nm laser beam (used for the beam propagation experiment). These video results are presented in the Supplementary Material. In particular, they are not meant to show stable single-cell trapping but to illustrate the cell movement against Brownian motion under the action of optical forces, which depend on the cell conditions (shape and size) and the trapping beam. | New photonic tools for medical imaging can be used to understand the nonlinear behavior of laser light in human blood for theranostic applications. When light enters biological fluids it is quickly scattered; however, some cell suspensions can induce nonlinear responses that allow laser beams to self-focus and enhance the penetration of light, offering a quantifiable marker of disease for biomedical applications. In a recent study now published in Light: Science and Applications, Rekha Gautam and her colleagues at San Francisco State University and an international team of co-workers showed that a laser beam shining through red blood cell suspensions could become "self-trapped." The process reduced light scattering to retain the power of the laser beam within the biological samples. The observed nonlinearity depended on osmotic conditions and the age of the samples. The scientists propose using the technique to diagnose sickle cell anemia or malaria, diseases which impact the size and shape of blood cells. Osmotic conditions play an important role in the properties of human red blood cells (RBCs), which are crucial for disease analysis. Numerous efforts in the past decade have focused on the study of the biomechanical properties of RBCs suspended in varying osmotic solutions. In the present work, Gautam et al. demonstrated the self-trapping and scattering-resistant nonlinear propagation of a laser beam through RBC suspensions under three different osmotic conditions. The results showed that the strength of the optical nonlinearity increased with osmotic pressure on the cells. Interestingly, in aged blood samples with lysed cells the nonlinear behavior was notably different due to the presence of free hemoglobin. To explain the experimental observations, Gautam et al. used a theoretical model with an optical force-mediated nonlocal nonlinearity. The present work on light self-guiding through scattering soft biological matter can introduce new photonic tools for noninvasive biomedical imaging and medical diagnosis. Self-trapping of light through human RBC suspensions under different osmotic conditions. a–c Illustrations of the beam dynamics in (a) isotonic, (b) hypotonic, and (c) hypertonic suspensions. d Side-view image of a self-trapped beam. e–g Observed output intensity patterns at a low power, which show the linear diffraction and strong scattering of the laser beam. i–k Corresponding patterns at a high power, which show the beam localization due to nonlinear self-trapping. h, l 3D plots of the intensity patterns corresponding to (g, k), respectively.
Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0142-1. Human RBCs are disc-shaped malleable cells that possess a spatially uniform refractive index, as they lack nuclei and most organelles, and show distinctive deformability for passage through veins and microcapillaries. The shape change can be prompted by modifying the osmolarity of the surrounding liquid buffer, allowing RBCs to be used as tunable optofluidic microlenses. The optical properties of RBCs are important for in vitro and in vivo disease diagnostics, and the refractive index of the RBC is determined by hemoglobin (Hb), the largest part of the erythrocyte dry content by weight. As a result, if the cell volume decreases under varying osmotic conditions, the refractive index increases. Pathophysiological conditions such as sickle cell anemia, malaria and sepsis are often closely related to the physical properties of RBCs, including their shape and size. These fundamental features of varying refractive indices and cell shapes allow RBCs to react to changes in different osmotic environments, making them ideal candidates for the study of light scattering. In the present work, Gautam et al. showed nonlinear self-trapping of light across centimeter-scale propagation distances through scattering RBC suspensions. When they increased the power of the laser beam, they showed that the beam dramatically self-focused under all three osmotic conditions – much like optical spatial solitons (nonlinear self-trapped wave packets). The optical forces that change with cell density and morphology can provide noninvasive tools to sort diverse cells according to a specific stage of a given disease. UPPER PANEL: Normalized transmission and output beam size as a function of input power. a Measurement of the normalized transmission and b output beam size change in fresh RBC suspensions of different buffer solutions. The cyan (triangle) curve depicts the results obtained from the PBS background solution without RBCs as a reference, which indicates no appreciable self-action of the beam in the buffer solution itself. The blue (circle), red (square), and green (diamond) curves show the data obtained from RBC suspensions in hypertonic, isotonic, and hypotonic solutions, respectively, where the error ranges in (b) are indicated by the shaded regions. c Corresponding results from the same blood sample but after the RBCs have been stored in a refrigerator for two weeks, where the nonlinear focusing is dramatically enhanced in the hypotonic solutions. LOWER: Optical gradient forces on RBCs under different osmotic conditions examined by optical tweezers. a–c Snapshots of RBC movement towards a 960-nm laser beam (position marked by a dashed green circle) in isotonic, hypotonic, and hypertonic solutions, respectively, as observed under a microscope. The red arrows illustrate the directional cell movement. d–f Power spectrum analyses showing the trap stiffness κx of a single RBC from the three suspensions in accordance with (a–c), where the vertical dashed lines mark the corner frequency fc. The inset in (f) illustrates a single RBC that moves into the trap under the action of the gradient force. Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0142-1. The scientists obtained blood samples from anonymous donors for the experiments. In the first set of experiments, they used a linearly polarized continuous wave (CW) laser beam with a wavelength of 532 nm.
They focused the light into a 3-cm-long glass cuvette filled with RBC suspensions under diverse osmotic conditions, as previously described. They monitored the linear and nonlinear outputs from the sample using a CCD camera and a power detector, and measured the beam diameters using the Beamview program. The beam first diffracted normally at a low power of 10 mW and experienced strong scattering thereafter due to the random distribution of non-spherically shaped RBCs. Gautam et al. then measured the normalized laser transmission (output/input power) as a function of the input beam power. In hypotonic solutions, they noted the RBCs were in a "swollen" state, where the effective refractive index of the cells decreased as the water-to-Hb ratio increased. In contrast, in the hypertonic solution, the scientists observed that the RBCs shrank and their effective index increased due to a reduced water-to-Hb ratio. In a third, isotonic solution, the cells exhibited a "normal" state, in which the RBCs showed intermediate behavior. When the experiments were performed using the same blood samples two weeks later, the scientists observed notably different outcomes, in which the nonlinear focusing was dramatically enhanced for the hypotonic solution. Simulations of the optical force-induced nonlinear beam dynamics in RBC-like suspensions. a–c Beam size (FWHM) change as a function of the gradient and scattering forces obtained via numerical simulations using a 350-mW input power and neglecting random scattering effects, where one observes the change in beam size when either the gradient or the scattering force is "turned off". d, f Side-view of the beam propagation and e, g corresponding output transverse intensity patterns after propagating through an RBC-like random scattering medium at low (d, e) and high (f, g) beam power. The beam side-views and output intensity patterns are normalized with respect to their respective maximal input powers. Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0142-1. In a second set of experiments, the scientists used a home-built optical tweezer system to measure the optical gradient force on RBCs. Gautam et al. collected the forward-scattered light from the trapped cells with a condenser lens and subsequently focused it onto a position-sensitive detector (PSD). They calculated the trap stiffness and gradient force in the three separate solutions. To simplify the measurements, Gautam et al. treated the hypotonic and hypertonic RBCs as spherical objects and the isotonic RBCs as disk-shaped objects. They used a CCD camera to record cell movements in the three different solutions under a microscope with two objectives, where the setup was driven using a 960 nm laser beam. The results illustrated the movement of cells against Brownian motion under the action of optical forces, which depend on the conditions of the cell (shape, size) and the trapping beam. Gautam et al. estimated the trapping force using the Langevin equation and reported that the force followed the trend of hypertonic > isotonic > hypotonic conditions. The scientists then developed a model to simulate nonlinear beam propagation in biological soft matter in order to understand the physics of optical force-mediated nonlinearity. They modelled the time evolution of the particle concentration distribution using a diffusion-advection equation and considered the presence of a forward-scattering force that pushes the particles along the direction of beam propagation, alongside the optical gradient force. Gautam et al.
calculated the change in beam size for the different gradient and scattering force parameters to simulate the nonlinear self-focusing effects under different buffer conditions. They recorded the changing size, volume and refractive indices of RBCs under diverse osmotic conditions, which accounted for the varying magnitude of the optical forces that modified the optical nonlinearity. The simulated results were qualitatively consistent with the experimental observations. Nonlinear optical response of lysed RBCs (free hemoglobin) in water. a Output beam size as a function of input power through the Hb solutions for four different concentrations. The RBC concentrations for the four curves (Hb1-Hb4) are 2.4, 5.1, 8.6, and 15.0 million cells per mL. Nonlinear self-focusing of the beam occurs at ~100 mW for high concentrations of Hb, but it subsequently expands into thermal defocusing rings at high powers. b–e Typical output transverse intensity patterns taken for the self-trapped beam (b, d) and thermally expanded beam (c, e) for low (d, e) and high (b, c) concentrations. Credit: Light: Science & Applications, doi: 10.1038/s41377-019-0142-1 In this way, Gautam et al. studied nonlinear beam propagation in human RBCs suspended in three diverse buffer solutions. They found that RBCs exhibited a strong self-focusing nonlinearity that could be chemically controlled based on the buffer solution. They therefore propose tuning the optical nonlinearity via osmosis, i.e., by increasing the osmotic pressure outside the cells in fresh blood samples. When the samples aged, free hemoglobin from the lysed RBCs played an active role in the observed optical nonlinearity and enhanced the nonlinear response in hypotonic conditions. Using direct video microscopy and optical tweezer measurements, the scientists showed that the beam-trapping force was greatest for RBCs in hypertonic conditions and weakest in hypotonic solutions. The scientists introduced a theoretical model to validate the observed experimental effects. The work introduces a new perspective for the development of diagnostic tools, and the results are promising for the development of laser treatment therapies for blood-related diseases. | 10.1038/s41377-019-0142-1
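As an illustrative aside on the optical-tweezer measurements summarized above, the trap stiffness κx is conventionally extracted from the corner frequency fc of the Lorentzian power spectrum of the trapped particle's position, which follows from the overdamped Langevin equation. The minimal Python sketch below assumes a sphere-equivalent Stokes drag in water; the function name and numerical inputs are hypothetical illustrations rather than values from the paper (and, as the authors note, RBCs are disk-shaped rather than spherical, so this is only an order-of-magnitude estimate).

```python
import numpy as np

def trap_stiffness_from_corner_freq(f_c, radius_m, viscosity=8.9e-4):
    """Estimate trap stiffness kappa (N/m) from the corner frequency f_c (Hz)
    of the Lorentzian position power spectrum of an optically trapped particle.

    Overdamped Langevin analysis gives S_x(f) ~ 1 / (f_c**2 + f**2) with
    f_c = kappa / (2*pi*gamma), where gamma = 6*pi*eta*r is the Stokes drag.
    """
    gamma = 6.0 * np.pi * viscosity * radius_m  # Stokes drag coefficient (kg/s)
    return 2.0 * np.pi * gamma * f_c            # kappa = 2*pi*gamma*f_c

# Hypothetical inputs: an RBC-sized scatterer (~4 um effective radius) with a
# 10 Hz corner frequency in water at room temperature.
kappa_x = trap_stiffness_from_corner_freq(f_c=10.0, radius_m=4e-6)
print(f"trap stiffness kappa_x ~ {kappa_x:.2e} N/m")
```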
Medicine | Novel drug shows promise for alleviating both heart failure and the sleep apnea associated with it | Renata M. Lataro et al, P2X3 receptor antagonism attenuates the progression of heart failure, Nature Communications (2023). DOI: 10.1038/s41467-023-37077-9 Journal information: Nature Communications | https://dx.doi.org/10.1038/s41467-023-37077-9 | https://medicalxpress.com/news/2023-03-drug-alleviating-heart-failure-apnea.html | Abstract Despite advances in the treatment of heart failure, prognosis is poor, mortality high and there remains no cure. Heart failure is associated with reduced cardiac pump function, autonomic dysregulation, systemic inflammation and sleep-disordered breathing; these morbidities are exacerbated by peripheral chemoreceptor dysfunction. We reveal that in heart failure the carotid body generates spontaneous, episodic burst discharges coincident with the onset of disordered breathing in male rats. Purinergic (P2X3) receptors were upregulated two-fold in peripheral chemosensory afferents in heart failure, and when antagonized abolished these episodic discharges, normalized both peripheral chemoreceptor sensitivity and the breathing pattern, reinstated autonomic balance, improved cardiac function, and reduced both inflammation and biomarkers of cardiac failure. Aberrant ATP transmission in the carotid body triggers episodic discharges that via P2X3 receptors play a crucial role in the progression of heart failure and as such offer a distinct therapeutic angle to reverse multiple components of its pathogenesis. Introduction Heart failure (HF) is a major public health problem and is one of the leading causes of death worldwide 1 . It is estimated that globally 38 million individuals have HF and its prevalence is expected to rise as longevity increases 1 , 2 . HF involves a complex interaction of several mechanisms, including neurohumoral activation and inflammation 3 , resulting in autonomic dysfunction 4 , 5 , 6 and cardiac and respiratory failure 7 , all of which contribute to the morbidity and mortality of HF. The degree to which sympathetic activity is elevated in HF is strongly associated with its worsening prognosis 4 , 5 , 6 . Thus, understanding the origins of elevated sympatho-excitation in HF is critical, as is how to temper it, which remains unresolved. Increased activity of peripheral chemoreceptors has been advanced as a mechanism for driving the excessive sympatho-excitation observed in HF, and their sensitivity is strongly associated with mortality 7 , 8 , 9 , 10 , 11 . Additionally, sleep-disordered breathing and central apneas commonly found in patients with HF have also been ascribed to hyperreflexia (i.e. increased sensitivity) of the peripheral chemoreceptor reflex evoked responses 9 , 10 , 11 . Hence, carotid bodies have been considered a potential new target to improve the quality of life and decrease mortality in HF patients. In rats with HF, bilateral ablation of the carotid bodies stabilized respiratory function, restored autonomic balance, including a significant reduction in sympathetic activity, reduced cardiac remodelling, and improved ejection fraction and survival 9 . In patients with HF, carotid body resection (uni-/bilateral) also showed promising results, with a decrease in both sympathetic nerve activity and peripheral chemosensitivity, and an improvement in exercise tolerance and quality of life 12 .
Together, data from both experimental animals and humans reveal an important role for peripheral chemoreceptors in the pathogenesis of HF, and support the hypothesis that inhibition of their hyperactivity represents a novel therapeutic strategy. Although surgical denervation of the carotid body in humans has provided important proof-of-concept data, such an approach may not be free from risks. In human HF, bilateral carotid body resection did not improve pump function and may have been associated with worsened sleep apnea, resulting in greater falls in blood oxygen saturation over more prolonged periods 12 . This raises the question of whether there is a way to normalize the excitability of the carotid bodies without the need to remove them, thereby maintaining their vital homeostatic functions. This would preserve the excitatory respiratory drive during periods of hypoxia/hypercapnia as experienced during sleep apnoea and provide autonomic modulation to support cardiac contractility, for example. The present study was designed to test such an approach. Multiple transmitters are involved in carotid body signaling, with adenosine triphosphate (ATP) being a major contributor 13 , 14 . In response to hypoxia, ATP is released from glomus cells within the carotid body, providing increases in chemosensory afferent drive by acting on P2X receptors. These receptors are members of a family of ATP-gated ion channels consisting of seven receptor subunits: P2X1-P2X7 15 . One of these - the P2X3 receptor - has been associated with dysfunctional afferent signaling 16 . Nurse and colleagues were the first to describe the presence of P2X3-receptors in the carotid body and petrosal ganglion neurons in healthy rats 17 . Recently, these receptors were found to be elevated in neurons of the petrosal ganglion innervating the carotid body of hypertensive rats and, following their inhibition, both sympathetic activity and blood pressure were reduced 18 . In the present study on rats with chronic heart failure, we tested the hypothesis that pharmacological antagonism of P2X3 receptors within the carotid body would normalize peripheral chemosensitivity, reduce sympathetic activity and reinstate respiratory stability, thereby attenuating the progression of HF pathology. Results P2X3 receptors are upregulated in petrosal neurons mediating the chemoreflex in HF rats Juvenile rats were studied ten days after ligating the left anterior descending coronary artery, which resulted in a 39 ± 4.8% (n = 17, Fig. S1a, b) infarcted area localized to the left ventricle, as revealed by post hoc histological analysis (Fig. S1a), and a reduced ejection fraction compared to sham animals (25 ± 9 vs 68 ± 10%, p = 0.001, n = 9). We analyzed the expression of mRNA of P2X3 and P2X2 receptors from physiologically identified petrosal 'chemoreceptive' neurons (identified by their response to potassium cyanide) from HF (n = 6 neurons) and sham rats (n = 10 neurons) using RT–qPCR. We found an approximately two-fold increase in expression of P2X3 receptors in chemoreceptive petrosal neurons of HF versus sham rats (Table S1; Fig. 1a, P < 0.0003); P2X2 receptor expression was similar between both groups (Fig. 1a). In contrast, in HF and sham rats, petrosal neurons not responding to carotid body stimulation had similar levels of P2X3 receptor mRNA expression (Fig. S1e).
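For readers unfamiliar with the ΔΔCt quantitation cited here (and detailed in the Methods below), a worked sketch may help. The Ct values in this example are placeholders chosen so that ΔΔCt = −1, which reproduces the roughly two-fold upregulation reported for HF chemoreceptive neurons; they are not measured data.

```python
def fold_change_ddct(ct_target_hf, ct_ref_hf, ct_target_sham, ct_ref_sham):
    """Relative expression by the 2^-DDCt method, normalized to a reference
    (housekeeping) gene such as beta-actin."""
    d_ct_hf = ct_target_hf - ct_ref_hf        # DCt = Ct(target) - Ct(reference)
    d_ct_sham = ct_target_sham - ct_ref_sham
    dd_ct = d_ct_hf - d_ct_sham               # DDCt = DCt(HF) - DCt(sham)
    return 2.0 ** (-dd_ct)

# Placeholder Ct values: DCt(HF) = 6, DCt(sham) = 7, so DDCt = -1 and the
# fold change is 2.0, mirroring the ~two-fold P2X3 increase reported above.
print(fold_change_ddct(24.0, 18.0, 25.0, 18.0))  # -> 2.0
```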
Resting membrane potentials of petrosal chemoreceptor neurons were more depolarized in HF (−46 ± 3.6 mV) than in both sham animals (−57 ± 3 mV) and non-chemoreceptor neurons (Table S1; Fig. S1f). Moreover, immunofluorescence to detect P2X3 receptor expression was abundant in the carotid bodies of HF rats (Fig. 1b & S11). The functional significance of the upregulated P2X3 receptor mRNA in HF rats was tested physiologically as detailed below. Fig. 1: Episodic carotid sinus nerve discharge in chronic heart failure (CHF) rats is mediated by P2X3 receptors and causes respiratory disturbance via activation of active expiration. a RT–qPCR indicated upregulation of P2X3 but not P2X2 receptors in the petrosal ganglion chemoreceptive neurons from HF rats. β-actin was used as a housekeeping control gene to normalize reactions. The relative quantitation was determined by the ΔΔCt method. Data shown as mean ± SD; n = 10 and 6 for sham and HF group, respectively. Two-way ANOVA Bonferroni post-test. ** P < 0.01, *** P < 0.001. b Immunofluorescence of P2X3 receptors (green) and glomus cells expressing tyrosine hydroxylase (TH; red) within the carotid body of a heart failure rat. The top panel is a conventional image and two orthogonal views taken at positions X and Y are shown beneath. The absence of P2X3 receptor immunofluorescence superimposition with TH staining supports the viewpoint that P2X3 receptors are on sensory afferent fibres juxtaposed to the glomus cells. This was repeated in three rats. Scale bar 5 μm. For additional P2X3/TH immunofluorescence images see Fig. S11. c Raw and integrated (∫) simultaneous recordings of carotid sinus nerve (CSN) activity, electromyographic (EMG) recordings from expiratory (AbdEMG) and inspiratory (DiaEMG) muscles in anaesthetized rats. Note the presence of episodic CSN discharge (blue arrows) coincident with both breathing irregularity and onset of active expiration in CHF rats. These changes were all prevented by AF-130 treatment (CHF + AF-130), as was the tonic CSN activity (arrowed). d and e show correlations between the level of CSN activity and respiratory rate (Resp. Rate, n = 7) and changes in activity of AbdEMG (n = 7), respectively, over 10 min from chronic HF rats. f CSN tonic discharge is enhanced in CHF rats and normalized by AF-130. Data shown are mean ± SD; n = 7 for sham vehicle; n = 11 for CHF vehicle and n = 7 for CHF AF-130 group. Data were tested for normality (Kolmogorov–Smirnov test) and compared using one-way ANOVA Bonferroni post-test. *** P < 0.001. Correlations were assessed using Pearson's correlation coefficients, two-sided. Source data are provided as a Source Data file. Full size image Carotid body afferent recordings reveal spontaneous, episodic discharges in chronic HF rats AF-130, a highly selective P2X3 receptor antagonist with minimal brain penetrance, was administered (30 mg/kg/day, s.c.) three days after myocardial infarction for 7 weeks in chronic HF rats; data were compared to chronic HF and sham-operated controls, both given vehicle. Chronic HF was induced by ligation of the left anterior descending coronary artery, and infarct size measured 8 weeks later was not different between vehicle- versus drug-treated groups (42 ± 8 versus 45 ± 8%, respectively; n.s., Fig. S1a, b). After eight days of administration, the pharmacokinetics (PK) of AF-130 were assessed.
The plasma concentration of AF-130 was analyzed at 1, 4, and 24 h post-administration (Fig. S1c, n = 6): one hour after administration, the plasma AF-130 concentration was 4,187 ± 569 ng/mL, which dropped by 50.4% four hours later (Fig. S1c). There was no difference in body weight between animal groups (drug-treated: 518 ± 67, vehicle: 548 ± 47 g, n.s.; Fig. S1d). Carotid sinus nerve activity, as an index of carotid body excitability, was recorded simultaneously with electromyographic activity from both the diaphragm (DiaEMG, inspiratory) and abdominal muscles (AbdEMG, active expiratory; Fig. 1b) in anaesthetized rats breathing spontaneously either with (n = 4) or without trachea cannulation (n = 7) eight weeks after myocardial infarction. Recordings of the carotid sinus nerve exhibited unexpected, episodic burst discharges that occurred spontaneously at a frequency of 17.9 ± 5.3 events/10 min and of variable duration (6–23 s; 13.9 ± 6.4 s) and persisted for the duration of the recording (>1.5 h; Fig. 1b). These episodic discharges correlated with the onset of respiratory de-stabilization/hypopnea, as revealed by the variable duration and reduced burst frequency of DiaEMG activity (Fig. 1b, c; r = −0.81; p = 0.02) and by the appearance of burst discharges in AbdEMG activity (r = 0.91; P = 0.004, Fig. 1b, d; n = 7), and caused increases in heart rate (398 ± 28 to 448 ± 24.8 bpm, P < 0.05), the latter indicative of cardiac sympathetic activation. Additionally, tonic levels of carotid sinus nerve activity were greater in rats with chronic HF (n = 11) relative to sham (n = 7) controls (13.4 ± 4.4 vs 1.64 ± 0.4 spikes/s, Fig. 1a, e; p < 0.0001). In all chronic HF rats, systemic P2X3 receptor antagonism (n = 7) abolished the episodic carotid sinus nerve bursting (Fig. 1b) and reduced its tonic activity to levels comparable to those in sham controls (2.18 ± 1.0 spikes/s, Fig. 1b, e). In the absence of episodic discharges, both the duration and frequency of DiaEMG stabilized completely and AbdEMG bursting was abolished. Given that cardiac arrhythmias were at least twenty times less frequent (2.33 ± 1.75 arrhythmias per 30 min, i.e. ~0.08 events/min, versus 17.9 ± 5.3 carotid sinus nerve events/10 min, i.e. ~1.8 events/min, a ~23-fold difference), these novel data suggest that in chronic HF both spontaneous episodic and tonic activity are generated by the carotid body and caused by ATP acting on P2X3 receptors, rather than being a consequence of reduced blood flow to the carotid body triggered by the arrhythmia. We surmised that these episodic discharges trigger inspiratory destabilization through activation of pathways promoting aberrant expiratory activity. Since this supposition was based on data from anesthetized rats, we tested its plausibility in conscious chronic heart failure animals. Systemic P2X3-receptor antagonism reinstates respiratory stability in conscious chronic HF rats Breathing was measured using plethysmography and was unstable in chronic HF rats treated with vehicle (Fig. 2a), as reflected in the Poincaré plots (Fig. 2b). These respiratory irregularities produced increases in the PaCO2 (42.4 ± 1.2 vs 38.5 ± 0.7 mmHg; p < 0.0001, Fig. S2a) and reductions in PaO2 (74.6 ± 5.9 vs 87.7 ± 2.1 mmHg; p < 0.0001, Fig. S2b) in chronic HF (n = 6) relative to sham controls (n = 5); no difference in blood gases was seen in HF animals during normal breathing relative to sham controls (Fig. S2a, b). Thus, the hypercapnia and hypoxia were a consequence, not a cause, of the breathing disturbance.
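The short- and long-term breathing-interval variability indices (SD1 and SD2) reported in the next paragraph are derived from the Poincaré plot of successive breath-to-breath intervals, as described in the Methods. A minimal sketch with synthetic intervals (illustrative numbers only, not recorded data) is shown below.

```python
import numpy as np

def poincare_sd1_sd2(bb_ms):
    """Short-term (SD1) and long-term (SD2) variability of breath-to-breath
    intervals, from the Poincare plot of BB_n versus BB_n+1."""
    x, y = bb_ms[:-1], bb_ms[1:]
    sd1 = np.std((y - x) / np.sqrt(2.0), ddof=1)  # spread across identity line
    sd2 = np.std((y + x) / np.sqrt(2.0), ddof=1)  # spread along identity line
    return sd1, sd2

# Synthetic breath-to-breath intervals (ms); a ~450 ms mean corresponds to
# roughly 133 breaths/min, similar in magnitude to the CHF rats described here.
rng = np.random.default_rng(0)
bb = 450.0 + rng.normal(0.0, 60.0, size=600)
sd1, sd2 = poincare_sd1_sd2(bb)
print(f"SD1 = {sd1:.0f} ms, SD2 = {sd2:.0f} ms")
```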
When compared to sham controls ( n = 8), chronic HF rats ( n = 8) showed higher: minute ventilation (621 ± 89 vs 351 ± 81 mL/kg/min, p < 0.001, Fig. 2c ), respiratory frequency (135 ± 13 vs 95 ± 14 cycles/min, p < 0.001, Fig. 2d ), short- and long-term breathing interval variability (SD1, 140 ± 86 vs 67 ± 19 ms, p < 0.05; SD2, 162 ± 90 vs 83 ± 20 ms, p < 0.05, Fig. 2e ) and incidence of apneas-hypopneas (27 ± 13 vs 14 ± 6 events/h, p < 0.05, Fig. 2f ); these characteristics are all consistent with hyperexcitability of the carotid body. After AF-130 treatment in chronic HF rats ( n = 9) there were marked reductions in: minute ventilation (470 ± 144 mL/kg/min, p < 0.05), respiratory frequency (104 ± 19 cycles/min, p < 0.01), SD1 (77 ± 23 ms, p < 0.05), and episodes of apneas-hypopneas (12 ± 8 events/h, p < 0.01, Fig. 2a–f ); these values approached those in the sham group (Fig. 2a–f ). Further examination indicated apnea rates were reduced by AF-130 (7.9 ± 5.8 vs vehicle CHF 19.6 ± 15.3 events/h, p < 0.05, Fig S 3a ) but not their duration (Fig S 3b ); also there was no change in hypopneas when analyzed separately (Fig S 3c ). AF-130 changed neither respiratory tidal volume (Fig S 3e ) nor sigh frequency (Fig S 3d ). All told, the data indicate blockade of P2X3 receptors rescues much of the pathological breathing in chronic HF. We next asked whether antagonizing these receptors within the carotid body would also normalize the autonomic and respiratory imbalance found in HF. Fig. 2: Chronic P2X3-receptor antagonism restored normal breathing pattern in chronic heart failure (CHF) rats. a Representative tracings displaying tidal volume (V t ) and respiratory frequency (RR) recorded in conscious rats using plethysmography and b Poincaré plots for breath-to-breath interval (BB N ) versus the subsequent interval (BB N+1 ). Breathing instability in CHF rats is demonstrated in V t and RR tracings, and higher breathing variability in CHF rats. P2X3-receptor antagonism restored normal breathing rhythm in HF rats ( a , b ) and reduced minute ventilation (V E ), respiratory frequency (RR), short- and long-term breathing interval variability (SD1 and SD2), and the incidence of apnoea and hypopnoea (AHI) in CHF rats ( c–f ). Data are shown as mean ± SD. One-way ANOVA Tukey post-test; n = 8 for sham vehicle, n = 8 for CHF vehicle, and n = 9 for CHF AF-130 group. * P < 0.05, ** P < 0.01, *** P < 0.001. Source data are provided as a Source Data file. Full size image Reversal of autonomic dysfunction after blockade of P2X3 receptors in the carotid body of HF rats in situ We first performed whole cell recordings of physiologically characterized chemoreceptive primary afferent petrosal ganglion neurons in arterially perfused in situ rat preparations ten days after ligating the left anterior descending coronary artery (HF) or the sham surgery. The chemoreceptive petrosal neurons from HF ( n = 6 cells) versus sham animals ( n = 10 cells) were more depolarized (−46.3 ± 3.6 vs −56.6 ± 6 mV; p < 0.001, Fig S 4a, b ) and exhibited tonic firing (2.2 ± 1.3 vs 0 Hz; p < 0.001, Fig S 4a, d ). Next, assessments were made on baseline autonomic and respiratory variables after focal delivery of a highly selective, non-competitive P2X3-receptor antagonist (AF-353; 15 nl, 20 mM) infused via a glass pipette placed into the carotid bodies bilaterally. This resulted in: (i) a reduction of thoracic sympathetic nerve activity (tSN; from 21 ± 2.2 to 12.3 ± 1.9%, Fig. 
3a, b , p < 0.001) in HF ( n = 16) to levels not different to those in sham ( n = 10) rats (before 11.2 ± 1.6 vs after drug 10.8 ± 1.7%, Fig. 3a, b ; n.s.); (ii) a reduction of perfusion pressure (PP; from 75.3 ± 3.3 to 64 ± 4 mmHg; p < 0.0001) to levels not different to those in sham rats (before 62.3 ± 2.9 vs after drug 60 ± 5.5 mmHg; n.s.; Fig. 3a ); (iii) augmentation of the amplitude of respiratory sinus arrhythmia (from 13 ± 2 to 40 ± 5 bpm, Fig. 3a, c ; p < 0.01), which approached the level of sham animals (54 ± 4 bpm, n.s.) and revealed an improvement in heart rate variability/increased vagal tone; (iv) a lowering of phrenic nerve (PN) frequency from 0.34 ± 0.05 to 0.23 ± 0.05 Hz (Fig. 3a, d ; n = 10, p < 0.001), -a level similar to that found in sham rats (0.25 ± 0.04 Hz; Fig. 3a, d ; n = 10, n.s.). In contrast to HF rats, no effect of the antagonist was observed on tSN, respiratory sinus arrhythmia or PN frequency in sham rats (Fig. 3a–d ). Vehicle infusions into the carotid body were without effect on both basal levels and chemoreflex evoked tSN and PN activities (Fig S 5a–e ). Thus, P2X3 receptors within the carotid body generate aberrant tonic afferent drive causing autonomic and respiratory imbalances in HF. Whether these receptors also contribute to carotid body hypersensitivity (autonomic and/or respiratory responses) was addressed next. Fig. 3: P2X3-receptor antagonism in heart failure (HF) rats is associated with restoring autonomic balance and respiratory activity in the in situ preparation. a Raw (∫) and integrated records of thoracic sympathetic (tSN), phrenic nerve (PN), and heart rate (HR, bpm) in sham and HF rats before and after P2X3 receptor inhibition using AF-353. Note the drug was infused via a micropipette into the carotid bodies directly. In HF rats, there was increased activity of tSN and PN, associated with a reduced magnitude of respiratory sinus arrhythmia (RSA) compared to shams. Average values of b tSN, c respiratory sinus arrhythmia (RSA), and d PN frequency from sham and HF rats before and after AF-353 delivery. Data are shown as mean ± SD; n = 10 and 16 for sham and HF group, respectively. One-way ANOVA Bonferroni post-test. ** P < 0.01, *** P < 0.001. Source data are provided as a Source Data file. Full size image Carotid body afferent hyperresponsiveness in rats with chronic HF is mediated by P2X3 receptors In anesthetized chronic HF ( n = 7) rats (8 weeks since myocardial infarction), the carotid bodies were stimulated with low dose potassium cyanide boluses (i.v.) and displayed greater increases in evoked carotid sinus nerve discharge than sham ( n = 7) controls (354 ± 9.4 vs 239 ± 14 spikes/s; Fig. 4a, b ; p < 0.0001). Following systemic P2X3 receptor antagonism over 7 weeks, the carotid body evoked afferent volley in chronic HF ( n = 7) rats was reduced to levels seen in sham controls (237 ± 8.5 spikes/s; Fig. 4a, b ). These data suggest that in chronic HF rats carotid body afferent hyperresponsiveness is, in large part, mediated by ATP acting on P2X3-receptors. We next determined whether antagonizing these receptors after focal delivery of P2X3-receptor antagonist into the carotid bodies reduced the chemoreflex evoked sympathetic and respiratory responses. Fig. 4: Carotid body P2X3-receptors mediate afferent hypersensitivity and chemoreflex hyperreflexia in heart failure (HF) rats. 
a Integrated (∫) recordings of carotid sinus nerve (CSN) activity during activation of peripheral chemoreceptors (potassium cyanide - KCN; 0.05 ml, i.v., 0.05%) in anaesthetized rats. Note presence of elevated tonicity (relative to sham and dotted line) and augmented evoked response in CSN b in the chronic HF (CHF) rats and that these were both normalized after P2X3 receptor blockade ( a , b ; drug was given systemically). Data are shown as mean ± SD and were tested for normality (Kolmogorov–Smirnov test) and compared using One-way ANOVA Bonferroni post-test. n = 7 per group; *** P < 0.001. The chemoreflex hyperreflexia of both the thoracic sympathetic (tSN) ( c , d ), and abdominal nerve (AbN) activity ( c . e ), was normalized by P2X3 receptor blockade in the carotid bodies performed by microperfusion of antagonist in the in situ preparation. ∫PN integrated phrenic nerve activity. Data are shown as mean ± SD; n = 10 and 16 for sham and HF group, respectively. One-way ANOVA Bonferroni post-test. ** P < 0.01, *** P < 0.001. Source data are provided as a Source Data file. Full size image Hyperreflexia of chemoreflex evoked responses is normalized by antagonism of P2X3 receptors in the carotid body Ten days post-myocardial infarction, we used the in situ preparation and assessed the carotid body evoked responses of chemoreceptive petrosal neurons in HF and sham rats, as well as the chemoreflex evoked sympathetic and respiratory motor responses before/after bilateral infusion of a P2X3 receptor antagonist infusion directly into the carotid bodies. The chemoreceptive petrosal neurons from HF rats displayed an enhanced firing response compared to sham (55.1 ± 8.8 vs 22.3 ± 4.7 spikes; p < 0.0001; Fig S 4a, c ). Chemoreflex evoked increases in tSN were 286.1 ± 14.3% and 200.7 ± 15.3% for HF ( n = 16) and sham ( n = 10) rats, respectively ( p < 0.0001), and after P2X3 receptor blockade were reduced to 109.9 ± 16.6% and 96.7 ± 10.9% in HF and sham rats, respectively (Fig. 4c, d , p < 0.0001); these new values were not different to one another. Most tSN was positively modulated in the expiratory phase (summed post-inspiration and late expiration) and this was reduced from 219 ± 20 to 79 ± 12% in HF rats (Fig. 4c , p < 0.01) and from 128 ± 6 to 54 ± 12% in sham controls (Fig. 4c , p < 0.05). Overall, the reduction in expiratory modulated tSN was greatest in the HF group ( p < 0.001). The chemoreflex evoked increases in post-inspiratory and late expiratory activities recorded from the AbN were summed and greater in the HF versus sham rats (198.5 ± 20 vs 99.3 ± 13%, Fig. 4c, e ; p < 0.001). P2X3 receptor blockade reduced the AbN chemoreflex evoked expiratory discharge to 70.1 ± 8.9% (Fig. 4c, e ; p < 0.001 compared to HF), a value not different to that observed in sham rats post drug (78.13 ± 11.9%, p > 0.05). On the other hand, the chemoreflex evoked PN discharge were not different between HF and sham animals before (0.24 ± 0.06 vs 0.24 ± 0.07 Hz; HF vs sham) or after P2X3 receptor blockade (0.24 ± 0.09 vs 0.25 ± 0.06 Hz; HF vs sham; Fig. 4c ). These data provide support that in HF carotid body P2X3 receptors are mostly responsible for the hyperreflexia of sympathetic and expiratory chemoreflex responses in HF rats. Given these positive outcomes from the in situ rat, it was important to determine whether such responses translated to conscious rats. 
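As an aside on how such chemoreflex-evoked changes are quantified, the Methods describe expressing the tSN and AbN responses as the area under the curve relative to baseline (Δ tSN, Δ AbN). The sketch below is one plausible implementation of that idea on a synthetic trace; the helper name, window choices and signal are hypothetical, not the authors' analysis code.

```python
import numpy as np

def evoked_response_percent(sig, t, stim_on, stim_off, baseline_end):
    """Evoked response as the area under the curve (AUC) above the
    pre-stimulus baseline, expressed as a percentage of the baseline area
    over the same window (one reading of 'AUC relative to baseline')."""
    base = sig[t < baseline_end].mean()          # mean pre-stimulus level
    win = (t >= stim_on) & (t <= stim_off)       # evoked-response window
    auc = np.trapz(sig[win] - base, t[win])      # area above baseline
    auc_base = base * (stim_off - stim_on)       # baseline area, same window
    return 100.0 * auc / auc_base

# Synthetic integrated neurogram: baseline 1.0 (a.u.) with a transient rise
# after a KCN-like stimulus at t = 10 s. Numbers are illustrative only.
t = np.arange(0.0, 30.0, 0.01)
sig = 1.0 + np.exp(-((t - 12.0) / 2.0) ** 2) * (t > 10.0)
print(f"evoked response ~ {evoked_response_percent(sig, t, 10.0, 20.0, 10.0):.0f}% of baseline")
```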
P2X3-receptor antagonism improves cardiac autonomic balance in conscious chronic HF rats Since P2X3 receptor blockade in the carotid body of in situ rats with HF improved respiratory sinus arrhythmia (see Fig. 3c), we assessed heart rate variability (HRV) using spectral analysis in conscious chronic HF rats. Our first analysis normalized the data using percentage change: in chronic HF rats (8 weeks post-myocardial infarction; n = 6), compared to sham animals (n = 6) there was: (i) an increased low-frequency power (23 ± 10 vs. 11 ± 5%, p < 0.05, Fig. 5a); (ii) a decreased high-frequency power (77 ± 10 vs. 89 ± 5%, p < 0.05, Fig. 5b) and (iii) a higher low-/high-frequency power ratio (0.35 ± 0.2 vs. 0.13 ± 0.07, p < 0.05, Fig. 5c) in heart rate. As expected, these data support dominance of cardiac sympathetic over parasympathetic tone in chronic HF rats. Following systemic P2X3 receptor blockade in chronic HF rats, all HRV values were normalized to comparable levels seen in sham rats (Fig. 5a–c). In contrast, although un-normalized raw data showed similar trends, no significant differences were seen following drug administration (Fig. S6). We next sought to determine whether there was an improvement in cardiac pump function. Fig. 5: Chronic treatment with AF-130 improved heart rate variability (HRV) and reduced heart failure (HF) severity in chronic HF (CHF) rats. HRV was examined in the frequency domain (Hz) by spectral analysis. AF-130 treatment improved the cardiac sympathovagal modulation in CHF rats. a Lf: low-frequency power; b Hf: high-frequency power and c Lf/Hf; nu: normalized units. P2X3-receptor antagonism attenuated the increase in d heart weight/body weight ratio, and prevented the rise in e lung/body weight ratio and f plasma N-Terminal Pro-B-Type natriuretic peptide (NT-proBNP), indicating that the treatment with AF-130 attenuated the HF progression in these animals. Data are shown as mean ± SD. One-way ANOVA Tukey post-test; (a–e) n = 6 for sham vehicle, n = 6 for CHF vehicle and n = 5 for CHF AF-130 group; (f) n = 5 for sham vehicle, n = 6 for CHF vehicle and n = 6 for CHF AF-130 group. * P < 0.05, ** P < 0.01, *** P < 0.001. Source data are provided as a Source Data file. Full size image Chronic systemic blockade of P2X3 receptors attenuates HF progression Compared to vehicle-treated rats, chronic (7 weeks, n = 8) administration of AF-130 to HF animals revealed increases in both ejection fraction (43 ± 13 vs. 25 ± 13%, Fig. 6a, b, p < 0.001) and stroke volume (843 ± 250 vs. 499 ± 267 μL/kg, Fig. 6c, p = 0.01), which were associated with a reduced end-systolic volume (1115 ± 304 vs. 1436 ± 300 μL/kg, Fig. 6d, p = 0.027). Additionally, drug-treated chronic HF rats had reduced cardiac hypertrophy, as indicated by their lower heart/body weight ratio compared to vehicle control rats (3.3 ± 0.36 vs. 4 ± 0.45 mg/g, respectively, Fig. 5d, p < 0.05) and lung/body weight ratio (2.2 ± 0.2 vs. 3.2 ± 0.37 mg/g, respectively, Fig. 5e, p < 0.0001); the latter is indicative of reduced pulmonary edema. Drug-treated chronic HF rats also showed a pronounced reduction in plasma levels of N-terminal pro-B-type natriuretic peptide (177 ± 88 vs. vehicle controls 609 ± 402 ng/g, Fig. 5f, p < 0.05). Thus, P2X3 receptor antagonism prevents the deleterious progression of cardiac dysfunction after myocardial infarction. Mechanistically, this may be associated with reduced systemic inflammation. Fig. 6: P2X3-receptor antagonism improves cardiac function in chronic heart failure (CHF) rats.
a Representative images of echocardiography in rats submitted to myocardial infarction (MI), before and after 7 weeks of treatment with vehicle or AF-130. Red arrows indicate diastolic ventricular diameter and yellow arrows indicate diastolic ventricular wall thickness. P2X3-receptor antagonism prevented the reduction of ejection fraction ( b ) and stroke volume ( c ) during HF development, and reduced left ventricular (LV) end-systolic volume ( d ). The parameters were analyzed before MI surgery, three days after MI and after 7 weeks of vehicle ( n = 13) or AF-130 ( n = 8) administration. IVS: interventricular septum. Data are mean ± SD. Repeated measures two-way ANOVA, with Student-Newman-Keuls post hoc comparison. * P < 0.05, ** P < 0.01, *** P < 0.001. Source data are provided as a Source Data file. Full size image Chronic blockade of P2X3 receptors attenuates systemic inflammation and cytokines in chronic HF AF-130 (2 mg/kg/h, iv, n = 4) or vehicle ( n = 5) administration started four weeks after myocardial infarction and continued for three weeks; blood samples were taken just before the drug treatment and again after three weeks on drug (Fig. 7a–c ). In vehicle-treated animals, there was an increase in natural killer cells (0.32 ± 0.2 vs. 1.14 ± 0.5%, n = 5, p < 0.01), B cells (1.7 ± 1.6 vs 3.1 ± 1%, n = 5, p < 0.05) but no change in T cells (1.22 ± 0.82 vs . 1.02 ± 0.54%, n = 4, p > 0.05; Fig. 7a–c ) with the progression of HF (i.e. from four to seven weeks). This increase was prevented by AF-130 (4 vs. 7 weeks post-myocardial infarction: natural killer cells: 0.42 ± 0.15 vs 0.2 ± 0.2%, n = 4; B cells: 1.7 ± 0.5 vs. 0.8 ± 0.6%, n = 4; T cells: 2.05 ± 0.06 vs. 0.85 ± 0.3%, n = 4; all p < 0.05, Fig. 7a–c ). Plasma cytokines were analyzed in separate sham and HF rats treated with either vehicle or AF-130 (30 mg/kg/day, s.c.) for 7 weeks. IL-1β was reduced with AF-130 treatment (149 ± 265 vs 3466 ± 1863 pg/mL, n = 4–6, p < 0.05; Fig. 7d ), however, TNF-α (81 ± 14 vs. vehicle 93 ± 30, n = 6–9) and IL-10 (132 ± 304 vs. vehicle 183 ± 377, n = 6–8) were not changed (Fig. 7e, f ). Additional analysis was made in HF rats receiving either vehicle or drug for seven weeks; the following immune cells were either depressed: CD3 + CD4 + CD8- (4.1 ± 1.6 vs 1.15 ± 1%, p = 0.01), CD4-CD8 + (1.34 ± 0.5 vs 0.55 ± 0.4%, p < 0.05) and CD8 + CD28 + (0.76 ± 0.3 vs. 0.05 ± 0.06%, p = 0.006; Fig S 7a–c ) or unaffected by drug treatment (e.g. CD4 + CD11a+; CD8 + CD11a+; CD4+CD28+; Fig S 7d–f ). These data indicate that specific immune cell types and IL-1β are suppressed by antagonizing P2X3 receptors in chronic HF rats. However, despite a reduced immune response, myocardial fibrosis was similar between AF-130 and vehicle treated chronic HF rats (0.42 ± 0.08 vs. 0.45 ± 0.08, n = 6 and 4, respectively, Fig S 8 ). Fig. 7: P2X3 receptor blockade reduces the immune cell response and plasma cytokine levels in chronic heart failure (CHF) rats. Quantitative data for natural killer cells (CD161, a ), B cells (CD45+, b ) and T cells (CD3 + CD4 + CD25+, c ). These specific immune cell types were suppressed by antagonizing P2X3 receptors systemically in CHF rats. The immune cells were analyzed after three weeks of vehicle ( n = 5) or AF-130 ( n = 4) administration. Two-way repeated measures ANOVA, with Tukey post hoc comparison. 
Plasma levels of IL-1β (d), TNF-α (e), and IL-10 (f) in sham vehicle (n = 9) and CHF rats treated with either vehicle (n = 4, 9, and 8, respectively) or AF-130 (n = 6) are shown; administrations lasted for seven weeks. One-way ANOVA, with Tukey post hoc comparison. Data are mean ± SD. * P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001. Source data are provided as a Source Data file. Full size image Discussion In 1959, Holton 19 first proposed that ATP was released by sensory nerves, and eleven years later Burnstock described it as a transmitter in autonomic nerves 20 . Thirty years on, ATP was reported as a major transmitter substance released by the carotid body to activate sensory endings of the sinus nerve afferent fibers 21 . Subsequently, P2X2 and P2X3 subunits were identified in carotid body afferent terminals and petrosal somas 17 . P2X receptors containing the P2X2 receptor subunit were found to play an essential role in the physiological ventilatory responses to hypoxia in healthy mice 22 , whereas P2X3-subunit-containing receptors were the basis for pathological sensitization of carotid body reflexes 18 , as has been found in many other sensory systems 16 . For the first time in HF rats, the present study finds increased expression of P2X3 receptors on petrosal neurons innervating the carotid body, which contribute to both the aberrant tonicity of the carotid body and the hypersensitivity of reflex responses following stimulation. A major co-morbidity of HF is respiratory instability with hypopneas and apneas 23 , 24 , which are associated with morbidity, mortality and reduced quality of life 25 , 26 , 27 . This breathing instability is cyclical, is associated with increased carotid body chemoreflex sensitivity, and occurs in at least 60% of HF patients 28 , 29 . This cyclical pattern of hypopnea/apnea is thought to be due to the hypersensitization of the peripheral chemoreceptors, such that a period of hyperventilation-induced hypocapnia is followed by a period of disturbed breathing resulting from the reduced chemical drive from carbon dioxide (e.g. Cheyne-Stokes breathing). In the present study, chronic HF rats presented hypopneas and apneas with increased expiratory motor activity causing substantial disruption to normal breathing (Figs. 1, 2). An unexpected and new insight was that the carotid bodies emit spontaneous, episodic discharges in HF rats. These are unlikely to be due to cyclical periods of hyperventilation/hypocapnia, as recordings from the carotid sinus nerve revealed that this aberrant episodic discharge always preceded the onset of hypopneas and apneas. Moreover, these occurred when the trachea was intubated, suggesting that the respiratory disturbance was not caused by upper airway collapse. Further, our blood gas analysis suggests that the hypoxia/hypercapnia resulted from respiratory disturbances, rather than triggering them. The episodic discharges are unlikely to be caused by cardiac arrhythmias triggering reduced carotid body blood flow, as the frequency of occurrence of arrhythmias was twenty times slower than that of the carotid sinus nerve bursts. As the episodic carotid sinus nerve discharge and respiratory instability were both blocked by focal P2X3 receptor antagonism within the carotid body (Fig. 1), we propose that the episodic discharge is caused by an intermittent release of ATP from glomus cells; this may be triggered by the sympathetic nervous system innervation of the carotid body.
Of further interest was the emergence of 'active' expiratory abdominal motor activity coincident with episodic carotid sinus nerve activity (Fig. 1). We suggest that the episodic discharge from the carotid body selectively drives reflex pathways that preferentially activate expiration (not inspiration), triggering hypopneas/apneas in HF. This respiratory disturbance was abolished by P2X3 receptor blockade (Fig. 4), but it remains unclear why ATP is released episodically from the carotid body in HF, unless the carotid bodies are underperfused and hypoxic in heart failure 30 , a condition known to release ATP 31 . Given that AF-130 abolished active expiration in HF rats (Fig. 4), we predict P2X3 receptor antagonism may be effective in human HF, where patients also exhibit abdominal breathing, which correlates with HF severity and dyspnea 32 ; yet we acknowledge that abdominal breathing may convey a hemodynamic advantage and reduce pre-load and after-load on the left ventricle 33 . Previously, bilateral carotid body resection improved respiratory stability in HF animals 9 , 28 , 34 . In contrast, it worsened blood gas desaturations during periods of sleep apnea in some HF patients 12 ; however, in these patients, sympathetic activity was reduced, and both exercise tolerance and quality of life improved 12 . We surmise that an absence of carotid bodies in HF may exacerbate clinical problems of sleep-disordered breathing or become threatening to those at high altitude or on long-haul flights, when ambient oxygen partial pressures are low. Hence, a major impetus for the present study was to assess whether reducing carotid body hyperexcitability while preserving its physiological function through systemic P2X3 receptor antagonism was a potential novel strategy to control pathological breathing in chronic HF rats. All told, our data support the carotid body chemoreceptors and ATP being major mediators of the breathing disorder in HF 8 , 9 , 12 , 28 , 35 . In addition to correcting the respiratory disturbances, focal P2X3 receptor antagonism within the carotid body also restored autonomic balance (i.e. reducing sympathetic and raising cardiac vagal tone) in the in situ preparation; this was also the case when the drug was given systemically in conscious rats. Clinically most relevant were the findings that chronic systemic P2X3 receptor blockade in HF rats: (i) prevented the deleterious progression of HF by improving cardiac output substantially; (ii) lowered classic HF biomarkers; (iii) reduced systemic inflammation; (iv) lowered heart weight, indicating that the treatment attenuated hypertrophy/compensatory myocardial pathological remodeling; (v) prevented pulmonary edema, as lung weight was normalized, presumably due to improved left heart pump function; (vi) increased cardiac vagal tone and respiratory sinus arrhythmia, reflecting improved central baroreceptor transmission in the face of attenuated carotid body afferent drive 36 . The present findings support the notion that P2X3 receptors, especially those within the carotid body, play a pivotal role in afferent-driven pathological mechanisms of cardiovascular and respiratory diseases. In our studies where the antagonist was given systemically, we cannot rule out contributions of P2X3 receptors located on other afferent systems (e.g. cardiac 37 , skeletal muscle sensors 38 ) to the aforementioned comorbidities associated with HF. Although fibrosis of the left ventricle remained unchanged, ejection fraction improved when P2X3 receptors were antagonized systemically.
This may reflect: (i) the observed improvement in wall movement, perhaps due to reinstatement of 'dormant' tissue lying outside the epicentre of the infarct; (ii) improved coronary blood flow as a consequence of elevated cardiac vagal and reduced sympathetic activity; (iii) reduced sympathetic vasoconstrictor tone lowering after-load; (iv) blockade of P2X3 receptors on the heart 39 ; (v) the reduced inflammatory response after P2X3 receptor blockade 40 ; and (vi) the recently demonstrated cardiac pump improvement with reinstatement of respiratory sinus arrhythmia in HF 41 , 42 . Several studies have confirmed the importance of immune system activation in the progression of HF in patients, and B cells and IL-1β have emerged as promising targets to treat HF 43 , 44 , 45 , 46 , 47 . The cellular components of the immune response in chronic HF include macrophages, T, B, mast, natural killer and dendritic cells 48 . We observed reduced abundance of plasma natural killer, pro-inflammatory T and B cells, and reduced IL-1β, with chronic treatment of the P2X3 receptor antagonist in chronic HF rats versus vehicle controls. Reduction in inflammation/pro-inflammatory cytokines is relevant, as these can increase carotid body type I cell excitability 49 , 50 : IL-1β increased carotid nerve chemosensory discharge in anesthetized rats 51 . Indeed, carotid body glomus type I cells express IL-1 receptor type I 52 . Also, anti-inflammatory treatment reduced carotid body afferent discharge induced by chronic intermittent hypoxia 53 . Notably, activation of the sympathetic nervous system can activate macrophages, T and B cells in chronic HF 48 . The combination of these factors may contribute to the development of a vicious cycle whereby aberrant carotid body activity induces elevated sympathetic activity to augment inflammatory responses. Hence, P2X3 receptor antagonism may prevent this vicious cycle, reducing immune cell activation, as observed herein. Despite the current advances in the treatment of heart diseases, mortality rates in patients diagnosed with HF are still high: 50% of patients with chronic HF may die within 4 years of diagnosis, while 50% of those with severe HF are likely to die within 1 year 54 . Although current medication slows progression, there is no cure for HF. Patients with chronic HF that exhibit high ventilatory responses to hypoxia (i.e. high chemoreflex sensitivity) have greater mortality compared to chronic HF patients that display normal chemoreflex drive 29 , 55 . Logically, the carotid body could be a target to treat HF. We have demonstrated that antagonism of the upregulated P2X3 receptors in the chemoreflex afferent pathway improves heart function in HF and abolishes the associated breathing disturbances. Our current findings support the view that P2X3 receptors are a potentially valuable therapeutic target to meet the clinical need of improving quality of life and reducing morbidity and mortality in HF patients. Methods Animals The study complied with all relevant ethical regulations. The Institutional Ethics Committee in Animal Experimentation-CEUA of the Ribeirão Preto Medical School, University of São Paulo approved the experimental protocols (Protocol number 033/2017). The experiments were carried out in adult (7-8 weeks old) and juvenile (4 weeks old) male Wistar rats supplied by the Animal Facility of the Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil.
The animals were housed under standard conditions with 24 h free access to food and water, on a 12 h light 12 h dark cycle. Experimental heart failure HF was induced by myocardial infarction as described previously by us 56 . Briefly, animals were anesthetized with ketamine (50 mg/kg, im; União Química Farmacêutica Nacional S/A, Embu-Guaçu, SP, Brazil) and Xylazine (10 mg/kg im; Hertape Calier Saúde animal S/A, Juatuba, MG, Brazil), submitted to orotracheal intubation and ventilated mechanically (Advanced Safety Ventilator, Harvard Apparatus, MA1 55-7059, Holliston, MA, USA). The depth of anesthesia was assessed frequently by a noxious pinch to the tail or a paw to check for a withdrawal response. Supplemental doses of anesthesia were given as required. The heart was exposed by an incision in the third intercostal space, and the anterior descending branch of the left coronary artery was identified and ligated with a silk suture (4-0). Sham rats underwent a similar surgical procedure but without coronary ligation. In situ working heart–brainstem preparation These experiments could not be performed blind because the heart is exposed and visualized in the preparation. Juvenile male Wistar rats, 4 weeks, weighing 40–60 g were anesthetized deeply with isoflurane (Baxter Hospitalar, São Paulo, SP, Brazil, 5% induction, maintenance 1.5–3%) and submitted to myocardial infarction as described above. The depth of anesthesia was assessed frequently by a noxious pinch to the tail or a paw to check for a withdrawal response. Supplemental doses of anesthesia were given as required. Ten days later, rats were anesthetized deeply using isoflurane (5%), such that breathing was depressed and there was no withdrawal response to a noxious pinch to the tail or a paw, and were prepared as originally described 57 . In brief, rats were bisected below the diaphragm and made insentient via decerebration at the pre-collicular level. The carotid body and petrosal ganglion were isolated on the right side of the preparation. Preparations were transferred to a recording chamber, and a double lumen catheter was placed into the descending aorta for retrograde perfusion with a Ringer solution containing in mM: NaCl (125), NaHCO3 (24), KCl (3), CaCl 2 (2.5), MgSO 4 (1.25), KH 2 PO 4 (1.25), D-glucose (10), and an oncotic agent (1.25% polyethylene glycol, Sigma), saturated with 95% O 2 −5% CO 2 (pH, 7.35-7.4) and warmed to 31 °C. Activation of the chemoreflex was evaluated by administration of potassium cyanide (KCN; 0.05 mL, i.v., 0.05%) 57 . A neuromuscular blocking agent (vecuronium bromide, 3-4 μg/mL, Cristália Produtos Químicos Farmacêuticos) was added to prevent respiratory-related movement. Recordings from the PN, tSN, and AbN were made simultaneously using custom bipolar glass suction electrodes. The activity of the tSN was recorded from levels T8-T12 and AbN at the thoraco–lumbar level. HR was derived from the inter R wave of the ECG. All signals were amplified (10X), band-pass filtered (1700 amplifier; A-M Systems, Sequim, WA, USA; 0.1 Hz–5 kHz), and acquired (5 kHz) with an A/D converter (CED 1401, Cambridge Electronic Design, CED) controlled by a computer running Spike 2 software (Cambridge Electronic Design, CED). The noise level from the sympathetic chain was measured after the application of lidocaine (2%) at the end of each experiment and subtracted. All nerves were recorded in absolute units (μV), and analyses were performed off-line. Signals were rectified and integrated (50 ms time constant). 
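The 'rectified and integrated (50 ms time constant)' processing applied to these nerve recordings can be read as full-wave rectification followed by first-order low-pass smoothing. The sketch below shows that common implementation on a synthetic burst; the exact filter form is an assumption, since the paper specifies only the time constant.

```python
import numpy as np

def rectify_and_integrate(raw, fs, tau=0.050):
    """Full-wave rectify a nerve signal, then smooth it with a first-order
    leaky integrator of time constant tau (s); a common implementation of
    'rectified and integrated' neurogram processing."""
    rect = np.abs(raw)                        # full-wave rectification
    alpha = 1.0 - np.exp(-1.0 / (fs * tau))   # discrete first-order gain
    out = np.empty_like(rect)
    acc = 0.0
    for i, x in enumerate(rect):
        acc += alpha * (x - acc)              # y[n] = y[n-1] + alpha*(x[n] - y[n-1])
        out[i] = acc
    return out

# Synthetic bursting 'sympathetic chain' noise sampled at 5 kHz.
fs = 5000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
raw = np.random.randn(t.size) * (0.2 + ((t % 1.0) < 0.3))
integrated = rectify_and_integrate(raw, fs)
```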
Whole-cell current clamp recordings (Axopatch-200B integrating amplifier; Molecular Devices) of chemoreceptive petrosal neurons were performed 18 using electrodes filled with a solution containing (in mM): K-gluconate (130), MgCl2 (4.5), trisphosphocreatine (14), HEPES (10), EGTA (5), Na-ATP (5), Na-GTP (0.3). This solution had an osmolarity of ~300 mOsmol/kg H2O and pH 7.39; electrode resistance was 6–8 MΩ. Electrodes were positioned into the petrosal ganglion along its lateral aspect using a micromanipulator (PatchStar; Scientifica, Uckfield, UK) under visual control (microscope; Seiler, St Louis, MO, USA). The chemoreceptive petrosal neurons were functionally identified by their excitatory response to KCN 18 . The signals were amplified (10X), filtered (low-pass filter, 2 kHz), and acquired (10 kHz) with an A/D converter (Axon Digidata 1550B; Molecular Devices) controlled by a computer running pClamp software (Molecular Devices). Baseline PN activity was assessed by burst frequency (Hz). To perform comparisons of the tSN recordings between groups, changes in activity were expressed as percentage changes in accordance with a scale (0–100%) determined for each preparation, as previously described 58 , 59 . Briefly, the maximal level of tSN produced by carotid body stimulation was used as 100%. Respiratory sinus arrhythmia was evaluated by the peak-to-trough difference in HR between inspiration and expiration. The tSN (averaged across all respiratory phases and during expiration only) and AbN expiratory responses to KCN were assessed by the measurement of the area under the curve and expressed as percentage values relative to baseline (Δ tSN and Δ AbN, in percentage). The PN response to KCN was assessed by the difference between baseline PN frequency and the peak of the response observed after the KCN (Δ PN, in Hz). Rat groups included: Sham coronary ligation, Sham + AF-353 injected into the carotid bodies, HF, and HF + AF-353 injected into the carotid bodies. The electrophysiological properties of petrosal neurons measured were: (a) baseline membrane potential; (b) baseline firing frequency; and (c) firing response to chemoreflex activation. The baseline membrane potential was assessed using a cumulative histogram (bin width 0.5 s) from the membrane potential recordings. Their firing response to chemoreflex activation was assessed by the difference between baseline firing frequency and the peak of the response observed after KCN. Note: carotid body excitability is defined as either the level of carotid sinus nerve activity recorded at baseline (after background has been subtracted) or the change in carotid sinus nerve activity to stimulation with KCN. Carotid body/chemoreflex hyperreflexia refers to the magnitude of the reflex evoked response in tSN or PN activities. Chronic AF-130 treatment A graphical timeline of the experimental protocol is displayed in the supplemental material (Fig. S9). Administration of either AF-130 (Afferent Pharmaceuticals, San Mateo, California, USA; 30 mg/kg s.c. per day) or vehicle (dimethylsulfoxide 99.9%, DMSO, Sigma-Aldrich, St. Louis, MO, USA) started three days after myocardial infarction surgery and lasted for 7-8 weeks. Rat groups included: sham coronary ligation treated with vehicle (Sham), CHF treated with vehicle (CHF + vehicle) and CHF treated with AF-130 (CHF + AF-130). Respiratory and blood gases measurements in conscious rats The femoral artery was catheterized 24 h before the arterial blood gas measurements.
Rats were anaesthetized with ketamine and xylazine and a catheter was inserted into the femoral artery, directed to the abdominal aorta (PE-10 connected to PE-50 tubing; Clay Adams, Parsippany, NJ, USA). The depth of anesthesia was assessed frequently by a noxious pinch to the tail or a paw to check for a withdrawal response. Supplemental doses of anesthesia were given as required. Samples of arterial blood (100 μl) were collected using the femoral catheter before and during the animals’ respiratory irregularities to analyze the PaCO 2 and PaO 2 (gas analyzer; Cobas b121; Roche Diagnostics GmbH, Germany). Tidal volume (V t ), respiratory rate (RR), and minute ventilation (V E ) were studied by whole-body plethysmography in conscious rats. Pressure oscillations caused by respiratory movements were detected by a differential pressure transducer (ML141, ADInstruments, Sydney, Australia) and were digitally recorded in an IBM/PC connected to a PowerLab System (ML866, ADInstruments, Sydney, Australia). V t was calculated using the formula described by Bartlett and Tenney 60 . RR was calculated from the excursion of the V t signal using the cyclic rate built into the computer software LabChart v7.2 (ADInstruments, Sydney, Australia). V E was calculated as the product of V t and RR. Breathing interval variability was assessed from resting breathing recordings by Poincaré plots and analysis of SD1 and SD2 61 . Apnea and hypopnea incidence, considered as cessation (for a period greater than a control respiratory cycle length at rest) or 50% reduction in V t over 3 consecutive breaths, were calculated and reported as apnea and hypopnea index (events/h). Post-sigh apneas numbers were also measured. Chemoreflex function and respiratory measurements in anesthetized animals Animals were anesthetized with urethane (1 g/kg, i.p., Sigma Chemical, St. Louis, MO) and the depth of anesthesia was assessed frequently by a noxious pinch to the tail or a paw to check for a withdrawal response. Supplemental doses of anesthesia were given as required. Rats were placed on a heating pad (ALB 200 RA; Bonther, Ribeirão Preto, Brazil), and core body temperature maintained at 37 °C via a heating blanket with feedback from a rectal thermocouple (MLT1403; Harvard Apparatus, Holliston, MA, USA). A polyethylene catheter (Intramedic, Clay Adams, Parsippany, NJ) was inserted into the femoral vein. The carotid bifurcation was exposed and the carotid sinus nerve isolated, as we previously described 57 . Briefly, the carotid sinus nerve was traced from its point of convergence with the glossopharyngeal nerve and traced caudally towards the common carotid artery bifurcation. All measurements were performed in spontaneous breathing animals with or without the trachea cannulated breathing room air and vagus nerves intact. Teflon-coated bipolar stainless steel electrodes were implanted in the Dia and the Abd muscles for EMG recordings 62 . Activation of the chemoreflex was evaluated by administration of KCN (0.05 mL, i.v., 0.05%) 58 . All recorded signals were amplified (10X; 1700 amplifier; A-M Systems, Sequim, WA, USA), band-pass filtered (0.3 Hz – 5 kHz), and acquired by a data acquisition system (5 kHz; ML866; ADInstruments) controlled by a computer running LabChart software (v.5.0; ADInstruments). The recorded signal from the carotid sinus nerve was fed to a spike amplitude discriminator and counter, which digitally counted in 1 s intervals to assess its discharge frequency (spikes per second). 
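Looping back to the plethysmography analysis above, the apnea-hypopnea index can be computed directly from the stated definitions (a pause longer than the resting control cycle length, or a ≥50% fall in Vt over 3 consecutive breaths). The sketch below is a simplified reading of those criteria on toy data; the thresholds and event-merging rules of the authors' actual scoring may differ.

```python
import numpy as np

def apnea_hypopnea_index(vt, bb_s, control_cycle_s, rec_hours):
    """Count apneas/hypopneas per the definitions above: an apnea is a
    breath-to-breath pause longer than the resting control cycle length; a
    hypopnea is a >=50% drop in tidal volume (vt) over 3 consecutive breaths.
    Returns events per hour; this is a simplified reading of the criteria."""
    vt = np.asarray(vt); bb_s = np.asarray(bb_s)
    ref = np.median(vt)                               # resting VT reference
    apneas = int(np.sum(bb_s > control_cycle_s))      # long pauses
    low = vt < 0.5 * ref                              # breaths at <50% VT
    # hypopnea: any run of >=3 consecutive low-VT breaths (counted once per run)
    runs = np.diff(np.concatenate(([0], low.astype(int), [0])))
    starts, ends = np.where(runs == 1)[0], np.where(runs == -1)[0]
    hypopneas = int(np.sum((ends - starts) >= 3))
    return (apneas + hypopneas) / rec_hours

# Toy one-hour recording: regular breaths plus one long pause and one
# three-breath shallow run (values illustrative only) -> 2 events/h.
vt = np.array([1.0] * 20 + [0.4, 0.4, 0.4] + [1.0] * 20)
bb = np.array([0.45] * 10 + [1.2] + [0.45] * 32)
print(apnea_hypopnea_index(vt, bb, control_cycle_s=0.9, rec_hours=1.0))
```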
Changes in carotid sinus nerve activity in response to KCN were assessed by the difference between baseline and the peak of the response observed after the stimulus (Δ CSN). EMGs were recorded in absolute units (μV) and analyses were performed off-line from rectified and integrated (∫) signals (time constant: 50 ms). DiaEMG burst frequency was assessed as RR. Changes in the AbdEMG activity during the baseline condition were expressed in µV. Based upon absolute values, we determined percentage changes in order to compare their activities in each animal. At the end of the experimental procedures, blood samples were collected for further analysis of plasma N-Terminal Pro-B-Type natriuretic peptide (NT-proBNP; see below) concentration. Rats were euthanized with a high dose of pentobarbital (100 mg/kg, i.v.) and, once breathing had ceased, the lungs and hearts were removed, rinsed in ice-cold 0.9% NaCl solution, dried, and weighed. The heart was fixed in 3.7% formaldehyde and embedded in paraffin, and the sections were stained with Masson's trichrome to reveal the infarct size, which was measured using NIH ImageJ software (National Institutes of Health). Infarct size was calculated by dividing the length of the infarcted area by the total circumference of the LV and expressed as a percentage 56 . Echocardiography The echocardiographic evaluation was performed one day before the myocardial surgery (control), and repeated three days and seven weeks after the myocardial infarction in chronic HF rats. In juvenile rats, the echocardiographic analysis was performed ten days after the myocardial infarction. Rats were anesthetized with ketamine (50 mg/kg) and xylazine (10 mg/kg, i.m.), and the depth of anesthesia was assessed frequently by a noxious pinch to the tail or a paw to check for a withdrawal response. Supplemental doses of anesthesia were given as required. Body temperature was monitored and maintained, and cardiac parameters were obtained with a VEVO2100® (Fuji) machine using a 30 MHz transducer. Diastolic left ventricle diameter and ventricular posterior wall thickness were evaluated in M-mode; end-systolic volume, stroke volume, and ejection fraction were calculated using a bidimensional mode. Analysis of R-R wave interval variability The rats were anesthetized transiently with isoflurane (Baxter Hospitalar, São Paulo, SP, Brazil, 5% induction, maintenance 1.5–3%) and subcutaneous electrocardiogram (ECG) electrodes were implanted. After 48 h, the ECG signal was recorded for 1 h in the conscious state. R-R wave interval variability analysis was performed in the frequency domain using CardioSeries software (v2.7). The R-R interval time series were resampled at 10 Hz (1 value every 100 ms) by cubic spline interpolation to regularize the time interval between beats. The R-R interval time series of 15 min duration were divided into 34 half-overlapping (Welch protocol) segments, each one with 512 values. Next, Hanning windowing was employed and each stable segment was subjected to spectral analysis using the Fast Fourier Transform. Pulse interval spectra were integrated into low-frequency (LF: 0.20–0.75 Hz) and high-frequency (HF: 0.75–3.00 Hz) bands. LF and HF powers are expressed in normalized units (nu) and the LF/HF ratio is also shown. NT-proBNP analysis Plasma NT-proBNP concentration was measured using an AssayMax™ immunoenzymatic assay kit following the manufacturer's instructions (St. Charles, MO, USA, catalogue number: ERB1202-1).
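The R-R variability pipeline just described (cubic-spline resampling to 10 Hz, 512-sample half-overlapping Hanning segments, FFT, band integration) maps naturally onto a Welch spectral estimate. A minimal sketch follows; the synthetic R-R series and the normalization shown (LF and HF expressed as percentages of LF + HF) are illustrative assumptions, not the CardioSeries implementation.

```python
import numpy as np
from scipy import interpolate, signal

def hrv_lf_hf(beat_times_s, rr_ms, fs=10.0):
    """LF/HF powers of an R-R interval series: cubic-spline resampling to
    10 Hz, Welch spectrum with 512-sample Hann segments (50% overlap), then
    band integration (LF 0.20-0.75 Hz, HF 0.75-3.00 Hz)."""
    grid = np.arange(beat_times_s[0], beat_times_s[-1], 1.0 / fs)
    rr_even = interpolate.CubicSpline(beat_times_s, rr_ms)(grid)
    f, pxx = signal.welch(rr_even - rr_even.mean(), fs=fs,
                          window="hann", nperseg=512, noverlap=256)
    lf_band = (f >= 0.20) & (f < 0.75)
    hf_band = (f >= 0.75) & (f <= 3.00)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return 100 * lf / (lf + hf), 100 * hf / (lf + hf), lf / hf

# Synthetic rat R-R series: ~150 ms intervals with a ~1.5 Hz respiratory
# modulation (illustrative values, not recorded data).
beat_times = np.cumsum(np.full(4000, 0.150))
rr = 150.0 + 4.0 * np.sin(2 * np.pi * 1.5 * beat_times)
lf_nu, hf_nu, ratio = hrv_lf_hf(beat_times, rr)
print(f"LF = {lf_nu:.0f} nu, HF = {hf_nu:.0f} nu, LF/HF = {ratio:.2f}")
```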
RT–qPCR In the single-cell RT-qPCR experiments, the pipette solution containing the cytoplasmic material of the recorded petrosal neuron was collected from the patch pipette. The High Capacity cDNA Reverse Transcription Kit reagents (Life Technologies) and nuclease-free water were used for subsequent transcription in a thermocycler (ProFlex PCR System; Applied Biosystems, Foster City, CA, USA). cDNA pre-amplification was performed in the single-cell RT-qPCR experiments using the TaqMan PreAmp Master Mix Kit (Life Technologies) with the P2X2 (Rn04219592_g1), P2X3 (Rn00579301_m1) and β-actin (NM_031144.2) probes. The RT-qPCR reactions were performed in singleplex and in triplicate (StepOnePlus System, Applied Biosystems) using the same probes described above and the TaqMan Universal PCR Master Mix kit (Life Technologies) according to the manufacturer’s recommendations. β-actin was used as a housekeeping control gene to normalize reactions. The relative quantitation was determined by the ΔΔCt method. For each sample, the threshold cycle (Ct) was determined and normalized relative to β-actin (ΔCt = Ct unknown – Ct reference gene). The fold change of mRNA content in petrosal ganglia chemoreceptive neurons from HF relative to sham animals was determined as 2^(−ΔΔCt), where ΔΔCt = ΔCt unknown – ΔCt control. Data are presented as mRNA expression relative to the sham animals. Immunocytochemical studies Carotid bifurcations from HF rats were surgically removed immediately after the in situ experiments and transferred into ice-cold Ringer solution. Carotid bodies were dissected, fixed overnight with 4% formaldehyde, and submerged in sucrose solution (30%) for 24 h. Coronal sections (40 μm thick) were washed three times in phosphate-buffered saline (PBS 0.1 M) for 5 min and then blocked and permeabilized in PBS, 10% normal horse serum, and 0.1% Triton X-100 for one hour (room temperature). The sections were incubated overnight in mouse anti-tyrosine hydroxylase (TH; 1:1000; Millipore, Burlington, MA, USA) and rabbit anti-P2X3 receptor (1:500; Abcam, Waltham, MA, USA) primary antibodies. They were then washed three times with PBS for 5 min, followed by incubation in goat anti-mouse Alexa Fluor 488 (1:500; Thermo Fisher Scientific, Waltham, MA, USA) and goat anti-rabbit Alexa 647 (1:500; Thermo Fisher Scientific) for 4 h. We performed negative controls to show an absence of non-specific staining from secondary antibodies (Fig. S12). Subsequently, sections and cells were mounted onto glass slides with Fluoromount (Sigma-Aldrich). The images were acquired using a Leica TCS SP5 (Wetzlar, Germany) confocal microscope equipped with 488 and 633 nm lasers and detection of tunable emission wavelengths. Infarct size analysis The hearts were fixed in phosphate-buffered 4% formalin and mounted in paraffin blocks. Each block was serially cut at 6 μm from the midventricular surface. The sections were stained with Masson’s trichrome, and the infarct size was measured using the NIH ImageJ software (developed by the National Institutes of Health). Infarct size was calculated by dividing the length of the infarcted area by the total circumference of the LV and expressed as a percentage 56 . Inflammatory cells In this protocol, rats were subjected to myocardial infarction and the administration of vehicle (DMSO) or AF-130 (2 mg/kg/h) started four weeks after the surgical procedure.
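The ΔΔCt arithmetic in the RT-qPCR subsection above is compact enough to state exactly. A minimal sketch follows, with hypothetical threshold-cycle values chosen only for illustration.

```python
# Minimal sketch of the 2^-(DDCt) quantitation described above; all Ct values
# are hypothetical.
def fold_change(ct_target_hf, ct_actin_hf, ct_target_sham, ct_actin_sham):
    d_ct_hf = ct_target_hf - ct_actin_hf        # normalize to beta-actin
    d_ct_sham = ct_target_sham - ct_actin_sham
    dd_ct = d_ct_hf - d_ct_sham
    return 2.0 ** (-dd_ct)

# e.g. P2X3 in an HF petrosal neuron versus a sham neuron
print(fold_change(24.1, 18.0, 26.3, 18.2))      # -> 4.0-fold up-regulation
```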
For vehicle or AF-130 infusion, a polyethylene catheter was inserted in the jugular vein and connected to a programmable iPRECIO SMP-300 pump (Primetech Corporation, Tokyo, JP) placed under the skin of the back. The animals were treated for 3 weeks. At the end of the treatment, blood samples were collected from the tail vein. The cells from the experimental groups were placed in 96-well round-bottom plates for cytofluorometric analysis. Following Fc receptor blocking, cells were incubated with colour combinations of the monoclonal antibodies (BD Biosciences, San Jose, CA, USA). Stained cells were stored for analysis in PBS containing 1% paraformaldehyde, in sealed tubes held in the dark. All steps were performed at 4 °C. Analysis of these cells was performed using a Becton Dickinson FACScan flow cytometer with DIVA-BD software (Becton Dickinson Immunocytometry Systems, San Jose, CA, USA). Representative gating strategy plots are shown in Figures S13 and S14. Cytokine measurements Plasma cytokine (TNF-α, IL-1β, and IL-10) levels were analyzed by the immunoenzymatic ELISA method, using DuoSet kits (R&D Systems, Minneapolis, MN, USA) according to the manufacturer’s instructions. Drugs Two antagonists with very similar P2X3 and P2X2/3 selectivity were used. AF-353 (Afferent Pharmaceuticals) has a low polar surface area and, as a result, crosses the blood-brain barrier 63 . In contrast, AF-130 (Afferent Pharmaceuticals), which carries a methyl sulfone substitution (Supplementary Fig. S10) that leaves the overall selectivity/affinity profile similar, has a much higher polar surface area and does not cross the blood-brain barrier 64 , 65 . The latter was used in the in vivo studies. The AF-130 data, generated by Afferent Pharmaceuticals and currently unpublished, show that this antagonist is a highly selective and potent inhibitor of P2X3 and P2X2/3 channels, with around eight-fold greater potency at P2X3 homotrimers than at P2X2/3 heterotrimers. The potency of AF-130 is reflected by the IC50 ranges of 126–407 nM for P2X3 receptors and 240–5670 nM for P2X2/3 receptors. AF-130 has >25-fold selectivity over other P2X channels tested (including P2X1, P2X2, P2X4, P2X5 and P2X7). It has been tested on 73 non-purinergic targets (e.g., ion channels, GPCRs, transporters, and enzymes). Only when doses were 25–100-fold above the IC50 range for P2X3 and P2X2/3 receptors was a partial (20%) antagonism of some tested processes observed (e.g., adenosine 3 receptors, 5-HT6 receptors, and the dopamine transporter). See Supplementary Figure 10 for additional information on AF-130. Statistical analysis Results are expressed as the mean ± standard deviation (SD). Data were tested for normality using the Kolmogorov–Smirnov test and compared using an unpaired t-test with Welch’s correction, one-way ANOVA or repeated-measures two-way ANOVA, with Student–Newman–Keuls, Bonferroni or Tukey post hoc comparisons. Correlations were assessed using Pearson’s correlation coefficients. The type of statistical test performed is indicated in the figure legends. Differences were considered statistically significant at p < 0.05. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability All data that support the findings here can be found in the manuscript and Supplementary Information. Source data for all experiments are provided with this paper.
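As an illustration of the two-group comparisons described under "Statistical analysis", the sketch below runs a normality screen and an unpaired t-test with Welch's correction in scipy. The group values are hypothetical, and z-scoring before the Kolmogorov–Smirnov test is an assumption about how the fit to a normal distribution was performed.

```python
# Illustrative two-group comparison; the study used dedicated statistics
# software, and the group values below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
vehicle = rng.normal(50.0, 8.0, 10)   # hypothetical vehicle-group values
af130 = rng.normal(38.0, 6.0, 10)     # hypothetical AF-130-group values

# Normality screen (Kolmogorov-Smirnov against a standard normal after
# z-scoring), then an unpaired t-test with Welch's correction
for group in (vehicle, af130):
    z = (group - group.mean()) / group.std(ddof=1)
    print(stats.kstest(z, "norm").pvalue)
t, p = stats.ttest_ind(vehicle, af130, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}, significant: {p < 0.05}")
```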
| A novel drug is showing promise for alleviating heart failure, a common condition associated with sleep apnea and a reduced lifespan. The drug, known as AF-130, was tested in an animal model at Waipapa Taumata Rau, the University of Auckland where researchers found it improved the heart's ability to pump, but, equally important, prevented sleep apnea, which itself reduces lifespan. The work is published in the journal Nature Communications. "This drug does offer benefit for heart failure, but it's two for the price of one, in that it's also relieving the apnea for which there is currently no drug, only CPAP (a breathing device), which is poorly tolerated," says Professor Julian Paton, director of the University's Manaaki Manawa, Center for Heart Research. When a person has a heart attack and subsequent heart failure, the brain responds by activating the sympathetic system, the "fight or flight" response, as a way to stimulate the heart to pump blood. However, the brain persists with this activation of the nervous system, even when it is no longer required, and this together with the consequent sleep apnea, contributes to the patient's reduced life expectancy. Most patients die within five years of a heart failure diagnosis. "This study has revealed the first drug to temper the nervous activity from the brain to the heart thereby reversing the heart's progressive decline in heart failure," says Professor Paton. The part of the brain that sends nervous impulses to the heart is also controlling respiration, so this drug has a dual function, reducing the "fight or flight" response while also stimulating breathing to stop the sleep apnea. "These findings have real potential for improving the wellness and life expectancy of almost 200,000 people living with heart disease in Aotearoa New Zealand," says Professor Paton. Another exciting factor for the scientists, who are from the University of Auckland and the University of São Paulo, Brazil, is that the drug is soon to be FDA approved, albeit for a different health issue, paving the way for human trials in the next year or two, Professor Paton says. "Over recent decades there have been several classes of drugs that have improved the prognosis of heart failure," says cardiology consultant and Associate Professor, Martin Stiles. "However, none of these drugs work in the way that this new agent does. So it is exciting to see a novel method that potentially reverses some features of heart failure." | 10.1038/s41467-023-37077-9 |
Medicine | Heart drug could significantly increase survival rates for children with an aggressive form of brain tumour | Durgagauri H. Sabnis et al. A role for ABCB1 in prognosis, invasion and drug resistance in ependymoma, Scientific Reports (2019). DOI: 10.1038/s41598-019-46700-z Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-019-46700-z | https://medicalxpress.com/news/2019-07-heart-drug-significantly-survival-children.html | Abstract Three of the hallmarks of poor prognosis in paediatric ependymoma are drug resistance, local invasion and recurrence. We hypothesised that these hallmarks were due to the presence of a sub-population of cancer stem cells expressing the multi-drug efflux transporter ABCB1. ABCB1 gene expression was observed in 4 out of 5 paediatric ependymoma cell lines and increased in stem cell enriched neurospheres. Functional inhibition of ABCB1 using vardenafil or verapamil significantly (p ≤ 0.05–0.001) potentiated the response to three chemotherapeutic drugs (vincristine, etoposide and methotrexate). Both inhibitors were also able to significantly reduce migration (p ≤ 0.001) and invasion (p ≤ 0.001). We demonstrate that ABCB1 positive patients from an infant chemotherapy-led trial (CNS9204) had a shorter mean event free survival (EFS) (2.7 versus 8.6 years; p = 0.007 log-rank analysis) and overall survival (OS) (5.4 versus 12 years; p = 0.009 log-rank analysis). ABCB1 positivity also correlated with reduced event free survival in patients with incompletely resected tumours who received chemotherapy across CNS9204 and CNS9904 (a radiotherapy-led SIOP 1999-04 trial cohort; p = 0.03). ABCB1 is a predictive marker of chemotherapy response in ependymoma patients and vardenafil, currently used to treat paediatric pulmonary hypertension in children, could be repurposed to reduce chemoresistance, migration and invasion in paediatric ependymoma patients at non-toxic concentrations. Introduction Ependymomas are the second most common malignant brain tumour in children. Ependymal tumours occur across all age groups, but the outcome for children (67% 10 year overall survival (OS) age 0–19) is lower than in their adult counterparts (85–89% 10 year OS age 20–64) 1 . The poorest survival is seen in infants, with the 5 year survival standing at a bleak 42–55% 2 . The only clinical factor consistently associated with survival is the extent of surgical resection 3 , with even histopathological grading unable to reliably predict outcome, in part due to tumour heterogeneity 4 . These tumours occur throughout the central nervous system (CNS), but the majority of paediatric ependymomas are intracranial with over two-thirds arising in the posterior fossa (PF) whereas one-third occur supratentorially (ST) 5 . Despite recent molecular classification of ependymomas there remain only 2 documented biomarkers correlated with poor outcome, namely gain of the long arm of chromosome 1 6 , 7 , and the presence of RELA driver gene fusions in a subgroup of ST ependymomas 8 , 9 . To date there are no biological marker dependent treatments for ependymoma. Current treatment protocols for paediatric ependymoma combine surgical resection with a combination of radiotherapy and chemotherapy. Radiotherapy treatment of the developing brain has been associated with unacceptable side-effects including neurocognitive deficits, potential endocrinopathies and an increased risk of secondary cancers 10 . 
This led to several European and North American trials aimed at avoiding or delaying radiotherapy in young children, with varying degrees of success 11 , 12 , 13 , 14 , 15 . Even with standard treatment protocols including radiotherapy, approximately 50% of cases still relapse. The prognosis at relapse is dismal, with only 25% of children surviving 16 . Indeed, ependymoma can become a chronically relapsing disease with shortening intervals between each relapse, ultimately resulting in death. These tumours tend to invade surrounding critical structures such as the brain stem 17 , 18 , making complete surgical resection difficult and relapse more likely 6 . Drug resistance is a major confounding factor in the treatment of paediatric brain tumours and ependymoma has been consistently described as a chemoresistant tumour 11 , 15 , 19 , 20 , although a proportion of patients with ependymoma do respond to chemotherapy regimens that achieve a high dose intensity 13 , 15 . Very few molecular mechanisms underlying this resistance to conventional chemotherapy are currently known 21 , hence we are unable to stratify treatment that either includes or excludes chemotherapy. There is increasing evidence, however, that a subpopulation of cancer stem cells underlies the recurrent, invasive and drug resistant nature of some tumour types 22 , 23 . We, and others, have previously demonstrated the presence of stem-like cells in ependymomas 24 , 25 and linked their existence with increased resistance to etoposide through expression of the ABCB1 multidrug transporter in stem cell enriched cell cultures 26 . Here we set out to determine the role of the multidrug efflux transporter ABCB1/P-glycoprotein in paediatric ependymoma using a panel of ependymoma derived cell lines and analyses of two clinical trial cohorts. This approach offered a unique insight into ependymoma chemotherapy resistance and the potential to identify those patients most likely to respond to chemotherapy. Materials and Methods All methods were carried out in accordance with the relevant guidelines and regulations. Cell lines Five paediatric ependymoma derived cell lines were used in this study (Supplementary Table S1 ). Whilst EPN1 26 , EPN7 and EPN7R were established in-house, BXD-1425EPN and DKFZ-EP1 were previously characterised by Dr Xiao-Nan Li, Baylor College of Medicine 27 and Dr Till Milde, DKFZ Heidelberg 28 respectively. All cell lines were grown as adherent monolayers in ‘tumour media’ [15% Fetal Bovine Serum (FBS; Sigma) in 1 g/l glucose Dulbecco’s Modified Eagle Medium (DMEM; Gibco)]. DKFZ-EP1 neurospheres (DKFZ-EP1NS) were cultured in ‘stem cell media’ supplemented with growth factors 26 . C11orf95-RELA fusion gene expression was assessed using previously described primers 8 . Real time PCR Real-time PCR analysis of ABCB1 expression was performed as previously described 29 . Relative ABCB1 mRNA expression level was calculated using the ΔCt method 30 and normalised with respect to GAPDH , which had stable transcript levels in adherent and neurosphere cultures. Western blotting SDS PAGE and Western blotting were performed as previously described 29 . Blots were probed with mouse anti-ABCB1 (C219 mouse monoclonal antibody; Source Bioscience, 1:50) and rabbit anti-GAPDH (Cell Signaling Technology, 1:2000) as a loading control. Enhanced chemiluminescence (SignalFire ECL Reagent, Cell Signaling Technology) was performed according to the manufacturer’s protocol.
Viability assays Clonogenic assays and MTT viability assays (Cell Proliferation Kit I, Roche) were performed in order to assess response to chemotherapy in a sub-population of cells and in all cells, respectively. In the clonogenic assay, 600 cells/well of the BXD-1425EPN cell line were plated in a 6-well plate and incubated with methotrexate (GeneraMedix), etoposide (Sigma) or vincristine (Sigma) in the presence or absence of the pan-ABC transporter inhibitor verapamil (20 µM) or the phosphodiesterase-5 inhibitor vardenafil (10 µM), which selectively inhibits ABCB1 31 , 29 . The rest of the assay was performed as per Othman et al. 29 (see further references therein). In the MTT assay, 6000 cells/well of both the BXD-1425EPN and the DKFZ-EP1 cell lines were plated in a 96-well plate and, after overnight incubation for cell attachment, were incubated with the same drugs and inhibitors for a period of 5 days. Cytotoxicity was assessed using a FLUOstar plate reader (BMG Labtech). The data were analysed to produce dose–response curves using GraphPad Prism Version 6.0 (GraphPad Software, La Jolla, California, USA). Wound healing assay BXD-1425EPN and DKFZ-EP1NS cells were seeded at a density of 3 × 10 5 and 4 × 10 5 cells per well respectively in a 48-well plate (Corning) and cultured until the cells reached confluence before wounding 32 . Cells were treated either with vehicle (DMSO), verapamil (20 μM) or vardenafil (10 μM). Images were recorded using a Canon DS126431 camera every 4 hours for BXD-1425EPN or every 8 hours for DKFZ-EP1, for 24 and 48 hours respectively (reflecting the different doubling times of the cell lines). The ImageJ program was used for quantifying the cell migration response by measuring the closure at 3 randomly selected positions on the wound for each condition. Wound closure curves were used to determine the t 1/2 (time required to reach 50% closure) in GraphPad Prism Version 6. A parametric unpaired Student’s t-test was used to establish any significance between treated and untreated conditions. 3D spheroid invasion assay The ability of the cells to invade was assessed by carrying out a 3D spheroid assay in Cultrex Basement Membrane Extract (BME-Trevigen). 2000 cells/well were cultured in ultra-low attachment (ULA) 96-well round-bottom plates in 100 µl tumour media, then centrifuged at 100 g for 5 minutes to encourage spheroid formation on day 4. Tumour medium was then replaced with 100 µl BME diluted to 3 mg/ml with phenol red-free RPMI-1640/1% L-glutamine on a plate warmer heated to 37 °C to facilitate BME polymerization. After a 1-hour incubation, 50 µl of tumour medium containing the treatment was overlaid. Images were taken daily for 4 days using a Canon DS126431 camera and were analysed using ImageJ. The relative spheroid outgrowth (R) was calculated by taking the ratio of the area of the invasive edge to the area of the spheroid. Trial cohorts and Immunohistochemistry (IHC) The tissue microarrays (TMAs) screened in this study comprised samples from two different intracranial ependymoma clinical trial cohorts where patients had received no previous adjuvant therapy (Supplementary Table S2 ; Supplementary Methods). The CCLG/SIOP Infant Ependymoma clinical trial cohort (CNS9204) consisted of patients aged 3 years or under at diagnosis, who were treated with chemotherapy for approximately one year with radiotherapy only given at relapse 13 .
The second cohort was from the SIOP Ependymoma I clinical trial (CNS9904) and consisted of patients aged over 3 and less than 21 years at diagnosis, who were primarily treated with radiotherapy, with chemotherapy only given if, after a second attempt at surgery, resection remained incomplete. These studies, and the experimental protocols required, were reviewed and approved by the National Research Ethics Service Committee East Midlands – Nottingham 2 and have therefore been performed in accordance with the ethical standards laid down in an appropriate version of the 1975 Declaration of Helsinki, as revised in 1983. For all patients, informed consent was obtained from the patient, or a parent and/or legal guardian where the patient was under 18 years of age, prior to their inclusion in the study. IHC staining was performed using the mouse anti-ABCB1 monoclonal antibody (C219, Millipore) at a concentration of 1:40. The Dako Chemate Envision Antigen Detection kit (Dako, UK) was used as described by Othman et al. 29 . The association between immunohistochemical status and overall survival (OS) as well as event free survival (EFS) was investigated using the Kaplan–Meier method, with differences estimated using the log-rank (Mantel–Cox) test. OS was defined as the time between the date of diagnosis and death, whilst EFS was defined as the time between the date of diagnosis and the date of first event (recurrence/death). Patients still alive at the end of the study were censored at the date of the last follow-up. The effects of multiple confounding factors were analysed using the Cox proportional hazards regression model to determine the robustness of ABCB1 as an independent predictor of survival. Data analyses were performed with IBM SPSS 22.0 for Windows (IBM Corp., Armonk, NY, USA). Results ABCB1 inhibition in ependymoma cell lines potentiates the effect of chemotherapy ABCB1 gene expression was investigated in 3 previously published (EPN1 26 , BXD-1425EPN 27 and DKFZ-EP1 28 ) and 2 newly established (EPN7 and EPN7R) ependymoma cell lines. Sequence analysis revealed that, apart from EPN7/7R, these lines harboured a C11orf95-RELA fusion gene (Supplementary Table S1 ), making them representative of the aggressive ST-EPN-RELA subgroup of ependymomas known to respond poorly to current therapies 8 . In comparison to the GAPDH housekeeping gene, 4 out of 5 of these lines demonstrated relatively low but consistent expression of ABCB1 (Fig. 1a ). In common with our previously published findings in medulloblastoma, we found that enriching for stemness in neurosphere culture resulted in a 3-fold increase in ABCB1 expression (Fig. 1a ). ABCB1 expression was also shown to be maintained at recurrence in cultured cells, since EPN7 and EPN7R were derived from primary and recurrent tumours from the same patient. Expression of ABCB1 protein was confirmed in BXD-1425EPN and DKFZ-EP1, whereas expression was not observed in EPN1 (Figs 1b,c and S1 ). Figure 1 ABCB1 is expressed in paediatric ependymoma derived cell lines. ( a ) ABCB1 gene expression relative to GAPDH was calculated for each cell line using the ΔCt method 30 . 4 out of 5 ependymoma cell lines expressed ABCB1. An approximately 3-fold increase in ABCB1 expression was observed when the DKFZ-EP1 cell line was enriched for stemness by growing it as neurospheres (DKFZ-EP1NS). ( b ) Expression of ABCB1 protein was analysed by western blotting in 20 µg of protein isolated from EPN1, BXD-1425EPN and DKFZ-EP1 and quantified relative to GAPDH as a loading control. ( c )
A representative western blot showing ABCB1 and GAPDH expression. *p ≤ 0.05, **p ≤ 0.01. Full size image Downstream functional analyses were carried out on two robustly growing ABCB1 expressing cell lines; BXD-1425EPN and DKFZ-EP1. Clonogenic survival was assessed in BXD-1425EPN in response to three ABCB1 substrates that are commonly used in the treatment of paediatric ependymoma. We found that whilst these cells were sensitive to methotrexate and vincristine in the nanomolar range (IC 50 12 nM and 23.5 nM respectively), micromolar concentrations of etoposide were required to elicit a response (IC 50 = 20 μM) (Fig. 2a ). Both verapamil (a calcium channel blocker that inhibits several ABC transporters) and vardenafil (a phosphodiesterase 5 inhibitor, which specifically inhibits ABCB1) were able to significantly potentiate the cytotoxic effect of all 3 drugs in clonogenic (Figs 2b–d and S2 ) and MTT cell viability assays (Fig. S3 ). In contrast, we were unable to obtain an IC 50 value for any of the 3 drugs in MTT assays with the DKFZ-EP1 cell line (Fig. S4 ). We did observe a slight potentiation in cytotoxic response to methotrexate when these cells were co-treated with verapamil; although at extremely high (clinically unachievable) drug concentrations. Figure 2 ABCB1 inhibition potentiated response to chemotherapy in BXD-1425EPN. ( a ) Cytotoxic response of the BXD-1425EPN cell line to the chemotherapeutic drugs methotrexate, vincristine and etoposide was assessed by the stem cell relevant clonogenic assay to produce dose response curves (100% represents vehicle control). ( b–d ) The IC 50 concentrations for each drug were then recalculated in the presence of either the non-specific ABCB1 inhibitor verapamil (20 µM) or the selective ABCB1 inhibitor, vardenafil (10 µM). There was a significant potentiation of cytotoxic response represented by reduction in the IC 50 concentrations of methotrexate ( b ), vincristine ( c ) and etoposide ( d ) when combined with either inhibitor (*p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.005, **** p ≤ 0.001). Full size image ABCB1 inhibition reduced migration and invasion of ependymoma cells ABCB1 has also been proposed to play a role in cell migration 33 , 34 , 35 , 36 . BXD-1425EPN formed an invasive tumour in mouse orthotopic xenografts 27 and DKFZ-EP1 was derived from pleural ascites of a metastatic ependymoma 28 indicating that both would be good models to test in cell migration assays. In vehicle-only treated control experiments, wound closure occurred with a t 1/2 of 11 and 20 hours for the BXD-1425EPN and the DKFZ-EP1 cell line respectively. Faster wound closure for BXD-1425EPN may reflect the slightly higher ABCB1 protein levels in this cell line (Fig. 1b,c ). The t 1/2 for wound closure was significantly increased (p ≤ 0.001) in both cell lines in the presence of either verapamil or vardenafil (Fig. 3a,b ), supporting the hypothesis that ABCB1 function plays a role in cell migration. In spheroid invasion assays, BXD-1425EPN cells were able to readily invade through extracellular matrix, a process which was significantly inhibited (p ≤ 0.005) by the addition of either of the inhibitors (Fig. 3c,d ). In contrast, DKFZ-EP1 cells formed circumscribed spheroids which failed to invade through BME (Fig. S5 ). Figure 3 ABCB1 inhibition reduces migration and invasion in ependymoma cell lines. The ability of BXD1425EPN ( a , c , d ) and DKFZ-EP1 ( b & Fig. 
S5 ) cells to migrate and invade was measured in wound healing ( a , b ) and spheroid invasion ( c , d ) assays respectively. The time required for 50% wound closure (t 1/2 ) was significantly increased when both the BXD-1425EPN ( a ) and DKFZ-EP1 ( b ) cell lines were treated with either the non-specific ABCB1 inhibitor, verapamil (20 µM) or the specific ABCB1 inhibitor, vardenafil (10 µM). ( c ) A 3D spheroid invasion assay was performed to assess the ability of ependymoma cells to digest and invade through extracellular matrix (Cultrex BME). The spheroids formed from the BXD-1425EPN cell line formed invadopodia and demonstrated multicellular streaming, which was visibly reduced after 96 hours in the presence of either ABCB1 inhibitor. ( d ) There was a significant reduction in the relative spheroid outgrowth R (ratio of the area of the invasive edge [dotted line] to the area of the spheroid itself) when treated with either ABCB1 inhibitor, which became more pronounced at 96 hours. Scale bars represent 100 µm. ***p ≤ 0.005, ****p ≤ 0.001. Full size image ABCB1 protein expression was independently associated with reduced overall and event free survival in a chemotherapy-led trial In order to address the hypothesis that ABCB1 was functioning as a multidrug transporter in a subpopulation of ependymoma cancer stem cells, we compared outcome in paediatric patients from a primary infant chemotherapy trial (CNS9204) to those from a primary radiotherapy trial (CNS9904). Tumour samples were scored as positive or negative for membranous ABCB1 expression using immunohistochemistry (Fig. 4a,b respectively; tumours with only vascular staining were scored as negative). Positive expression, where observed, was always in less than 3% of cells (median 0.33%). In total, 27 of 85 primary tumours (32%) were scored as positive for ABCB1 by standard immunohistochemical analysis, 15/53 (28%) from the CNS9204 trial cohort and 12/32 (37.5%) from the CNS9904 cohort. In the CNS9204 trial cohort, outcome for ABCB1-positive patients correlated with lower 5-year event free survival (EFS; 13% versus 50%, p = 0.007; Fig. 4c ) and lower 5-year overall survival (OS; 33% versus 74%, p = 0.009, log-rank analysis; Fig. 4d ). The correlation with outcome held in Cox regression multivariate analysis (Table 1 ), where ABCB1 was an independent factor, even after adjustment for resection status and grade, for EFS (Hazard Ratio 2.8, confidence interval 1.3–5.9, p = 0.007) and OS (Hazard Ratio 3.0, confidence interval 1.3–6.8, p = 0.008). In the primary radiotherapy trial CNS9904 cohort there was no correlation between ABCB1 expression and outcome for EFS or OS by log-rank analysis (Fig. S6 ). Figure 4 Membranous expression of ABCB1 was associated with poor survival and early relapse in ependymoma. Tissue microarrays from the CNS9204 clinical trial cohort were screened for ABCB1 protein expression. ( a ) An ependymoma patient sample in which a sub-population of tumour cells stained positive for membranous ABCB1 expression (boxed and magnified). ( b ) Ependymoma samples which demonstrated vascular staining (arrows) in the absence of membranous staining tumour cells were scored as negative. Scale bars represent 50 µm. ( c ) ABCB1-positive patients from the chemotherapy-led (CNS9204) trial had a significantly reduced event-free survival (5-year EFS 13% vs. 50%, p = 0.007). ( d ) Overall survival was also significantly reduced in ABCB1-positive CNS9204 patients (5-year OS 33% vs. 74%, p = 0.009).
Data from both trials were combined in order to explore the potential effect modification between ABCB1 and resection status. ( e ) Patients with an incomplete resection who were ABCB1 positive had the poorest EFS (5-year EFS 15%, p = 0.03) in comparison to the other groups. ( f ) Overall survival was not significantly correlated with resection status, although patients who were ABCB1 positive had the worst prognosis. CR, complete resection; IR, incomplete resection; +, censored. Full size image Table 1 Multivariate analysis of ABCB1 expression in the chemotherapy-led CNS9204 trial. Full size table ABCB1 protein expression was associated with early relapse in all patients with incomplete resection Whilst patients in the CNS9904 trial were primarily treated with radiotherapy, a subset of these patients, in whom tumour resection was deemed incomplete, were also given chemotherapy. In order to assess the prognostic value of ABCB1 in all patients who received chemotherapy and explore the potential effect modification by tumour resection status, we combined the two trial datasets. As shown in Fig. 4e , EFS curves differ across the four subgroups (log-rank test p = 0.03). Compared to ABCB1-negative patients with complete resection, ABCB1-positive patients are more likely to relapse, in particular those who had incomplete tumour resection (hazard ratio 2.64, 95% CI 1.22–5.73). This effect remained significant after further adjustment for age, WHO tumour grade and whether or not patients received radiotherapy (adjusted HR 2.9, 95% CI 1.3–6.5, p = 0.008). The overall survival for all ABCB1-positive patients was poor regardless of resection status (Fig. 4f ). Although none of the subgroups reached significance, ABCB1-positive patients in general showed an increased risk of death (HR = 1.85, 95% CI 0.74–4.59 and HR 1.82, 95% CI 0.73–4.55 for patients with complete and incomplete resection, respectively), compared with ABCB1-negative patients. ABCB1 status could be correlated with methylation subgroup for 36 of the patients included in this study. ABCB1 was expressed in 3 subtypes (EPN PFA, ST-EPN-RELA and EPN PFB), although percentages are only meaningful in the EPN PFA (30%; 8/26) and the ST-EPN-RELA subgroups (43%; 3/7). Log-rank analysis indicated that, although ABCB1-positive patients appeared to do less well in the PFA subgroup, this did not reach significance, indicating that ABCB1 expression was a confounding factor across both subgroups. Discussion The aim of this study was to investigate whether the presence of a sub-population of multidrug transporter-expressing cancer cells could play a role in promoting relapse and progression of ependymomas in response to postoperative chemotherapy. Uniquely, we were able to investigate multidrug transporters in a cohort of patient samples from the same primary chemotherapy trial and compare outcome to samples from a primary radiotherapy trial. Thus, we were also able to correlate expression with the specific type of chemotherapy that the patients received. Notably, 3 of the 5 chemotherapy drugs used to treat this cohort (methotrexate 37 , vincristine 37 and cyclophosphamide 38 ) are ABCB1 substrates. In the primary chemotherapy-led trial cohort, ABCB1 positivity was an independent prognostic factor for EFS, with ABCB1-positive patients having a significantly shorter mean EFS of just 2.7 years compared to their ABCB1-negative counterparts, who took a mean of 8.6 years to experience an event.
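The survival comparisons reported above (Kaplan–Meier estimation with log-rank testing, followed by Cox regression for multivariate adjustment) were run in SPSS; the sketch below reproduces the univariate step with the open-source lifelines package. All event times and flags are hypothetical, chosen so that the positive group's median EFS echoes the 2.7-year figure; a multivariate adjustment of the kind in Table 1 could be added with lifelines' CoxPHFitter on a covariate dataframe.

```python
# Hedged Kaplan-Meier / log-rank sketch with hypothetical EFS data (years),
# not trial data; event flag 1 = relapse/death, 0 = censored.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_pos = np.array([0.8, 1.5, 2.1, 2.7, 3.0, 4.2, 6.0])    # ABCB1 positive
e_pos = np.array([1, 1, 1, 1, 1, 1, 0])
t_neg = np.array([2.0, 5.5, 7.1, 8.6, 9.0, 10.2, 12.0])  # ABCB1 negative
e_neg = np.array([1, 0, 1, 0, 0, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_pos, event_observed=e_pos, label="ABCB1 positive")
print(kmf.median_survival_time_)                 # -> 2.7 years here

result = logrank_test(t_pos, t_neg,
                      event_observed_A=e_pos, event_observed_B=e_neg)
print(f"log-rank p = {result.p_value:.4f}")
```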
This survival difference was observed despite tumour positivity for ABCB1 being heterogeneous; indeed, ABCB1 positivity was no higher than 3% of cells. Hence, ABCB1 expression in a small sub-population of tumour cells is able to confer a drug-resistant phenotype in paediatric ependymoma patients in CNS9204. ABCB1 expression was just as prevalent in the radiotherapy cohort as in the primary chemotherapy cohort (37.5% and 28% positive tumours, respectively), suggesting that ABCB1-associated drug resistance is not unique to very young children (those under 3 years) but that it is the clinical preference for radiotherapy avoidance that makes it a strong prognostic factor in the younger age group. This was supported by our finding that ABCB1 expression correlated with poorer event free survival across all patients who received chemotherapy. Together, these novel prognostic associations have two implications for ependymoma therapy for the 0–19 age group, stratified according to ABCB1 status. Firstly, there is the possibility that radiotherapy, with its associated increased risk of secondary tumours and effects on the developing brain at all ages of childhood, could be avoided in ABCB1-negative tumours. Furthermore, ABCB1 inhibition may increase the efficacy of adjuvant chemotherapy in almost a third of paediatric ependymomas (in total, 32% of patients were ABCB1 positive across both trials). Another potential advantage of ABCB1 inhibition would be increased drug uptake at the blood-tumour barrier, since we have also been able to detect ABCB1 expression in the blood vessels supplying ependymomas (Fig. 4b ). ABCB1 expression in endothelial cells also restricts drug access to the brain at the blood-brain barrier, limiting the exposure of brain tumours to systemic chemotherapy 39 . Thus, inhibition of ABCB1 alongside chemotherapy may be of benefit to all patients, by increasing drug uptake as well as rendering previously therapy-resistant tumours sensitive to ABCB1 substrates. This may also be beneficial in allowing drugs to be administered at lower concentrations, with a reduction in associated toxicity. As well as its role in drug resistance, ABCB1 has been proposed to play a role in cell migration 33 , 34 , 35 , 36 . Notably, local invasion remains a clinical marker of poor survival in ependymoma 6 , 18 and therefore the inhibition of ABCB1 in a non-toxic, specific manner may be relevant in both contexts. The paucity of representative cell line models that retain a sub-population of ABCB1-expressing cancer stem cells and are amenable to pre-clinical assays has hampered progress towards this goal. To this end, we have been able to demonstrate that 4 out of the 5 ependymoma-derived cell lines tested expressed ABCB1 by quantitative PCR. ABCB1 expression levels could be enriched by neurosphere culture, thus supporting its expression in a subpopulation of cancer stem cells. Using ABCB1 inhibitors alone we were able to significantly reduce the migration of ependymoma cells in a wound healing assay (p ≤ 0.001) in both cell lines. Invasion of BXD-1425EPN spheroids into basement membrane extract was also significantly impeded by ABCB1 inhibition (p ≤ 0.001). BXD-1425EPN showed a response to methotrexate and vincristine within a clinically achievable range and this sensitivity was heightened by inhibition with the ABCB1-specific inhibitor vardenafil or the non-specific ABC transporter inhibitor verapamil. In contrast, the high ABCB1-expressing cell line DKFZ-EP1 proved highly resistant to all three chemotherapeutics.
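A hedged sketch of how such IC 50 shifts can be estimated from viability data follows. The study fitted dose–response curves in GraphPad Prism; here a standard four-parameter logistic model is fitted with scipy, and all dose–response values below are hypothetical.

```python
# Illustrative IC50 estimation with a four-parameter logistic (4PL) model;
# the viability data are hypothetical, chosen to show potentiation.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Standard 4PL dose-response curve (decreasing viability with dose)
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])  # nM
viability_drug = np.array([98, 95, 82, 55, 30, 12, 5])         # drug alone
viability_combo = np.array([96, 80, 50, 22, 8, 4, 2])          # + inhibitor

for label, y in (("drug alone", viability_drug),
                 ("+ ABCB1 inhibitor", viability_combo)):
    popt, _ = curve_fit(four_pl, conc, y, p0=[0.0, 100.0, 30.0, 1.0],
                        bounds=([-10.0, 50.0, 0.1, 0.2],
                                [20.0, 120.0, 1.0e4, 5.0]))
    print(f"{label}: IC50 = {popt[2]:.1f} nM")  # lower IC50 = potentiation
```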
DKFZ cells were derived from a highly aggressive supratentorial anaplastic grade III ependymoma, which has previously been demonstrated to be highly resistant to chemotherapeutic agents including vincristine, cisplatin and temozolomide 28 . The inability to reverse this resistance indicates that other mechanisms in addition to ABCB1 are operating in this highly aggressive cell line. In summary, we have shown that ABCB1 expression in a sub-population of cells is correlated with poorer outcome and survival, presumably reflecting a sub-set of tumours that are both chemoresistant and locally invasive across the two clinical trial cohorts assessed. On this basis ABCB1 is one of the key biomarkers being evaluated in the ongoing SIOP Ependymoma II trial through the BIOMECA consortium. In cell line models that recapitulate ABCB1 expression, drug resistance and invasion can be decreased by ABCB1 inhibitors. ABCB1 expression might therefore provide a mechanism by which a patient’s likelihood of responding to chemotherapy is assessed, so that chemotherapy can be reserved for those most likely to respond. In addition, the ability to potentiate chemotherapy and to reduce local tumour invasion would be a significant advance, and this work implicates ABCB1 inhibition with vardenafil, a repurposed paediatric-compatible drug 40 , as a potential mechanism to achieve this. Data Availability All data generated or analysed during this study are included within the article (and its Supplementary Information files). | Researchers at the University of Nottingham have discovered that repurposing a heart drug could significantly increase the survival rate for children with ependymoma, a type of brain tumor. The findings, published in Scientific Reports and led by experts in the University's Schools of Medicine and Life Sciences, suggest that co-treatment with a drug normally used to treat cardiac hypertrophy can overcome chemotherapy resistance and increase survival in over a third of ependymoma patients. Ependymomas are the second most common malignant brain tumors in children. They can occur across all age groups, but the outcome for children is lower than in their adult counterparts. The poorest survival is seen in infants, with the five-year prognosis at just 42-55 percent. The use of chemotherapy in children with ependymomas has had variable levels of success, leading to the frequent belief that ependymomas are chemoresistant tumors, since over half of tumors cannot be cured by chemotherapy alone. The study was led by Dr. Beth Coyle from the University of Nottingham's School of Medicine and Dr. Ian Kerr from the School of Life Sciences. The Ph.D. student who undertook the research, Durgagauri Sabnis, was a recipient of a University of Nottingham Vice Chancellor's Research Excellence Scholarship and the British Federation of Women Graduates (BFWG) foundation grant. Dr. Coyle said: "We are hopeful that by combining this repurposed drug with current treatments we can give new hope for long term survival to patients with these devastating brain tumors." In this study the authors set out to determine the nature of this chemoresistance. They show that, in patients treated with chemotherapy alone, the presence of a chemotherapy drug-pumping protein called ABCB1 was associated with a significantly poorer outcome. Tumors that expressed ABCB1 were less likely to respond to chemotherapy and more likely to be locally invasive.
The authors then used a heart drug to inhibit ABCB1 function in cells taken from patients' tumors. The heart drug was able to stop ABCB1 from pumping chemotherapy drugs out of the tumor cells, making them more sensitive to chemotherapy and less able to migrate. ABCB1 is expressed in over one-third of patients' tumors, all of whom could potentially benefit from repurposing of this heart drug in future clinical trials. | 10.1038/s41598-019-46700-z
Earth | Farmland management changes can boost carbon sequestration rates | "Emerging land use practices rapidly increase soil organic matter" Nature Communications 6, Article number: 6995 DOI: 10.1038/ncomms7995 Journal information: Nature Communications | http://dx.doi.org/10.1038/ncomms7995 | https://phys.org/news/2015-05-farmland-boost-carbon-sequestration.html | Abstract The loss of organic matter from agricultural lands constrains our ability to sustainably feed a growing population and mitigate the impacts of climate change. Addressing these challenges requires land use activities that accumulate soil carbon (C) while contributing to food production. In a region of extensive soil degradation in the southeastern United States, we evaluated soil C accumulation for 3 years across a 7-year chronosequence of three farms converted to management-intensive grazing. Here we show that these farms accumulated C at 8.0 Mg ha −1 yr −1 , increasing cation exchange and water holding capacity by 95% and 34%, respectively. Thus, within a decade of management-intensive grazing practices soil C levels returned to those of native forest soils, and likely decreased fertilizer and irrigation demands. Emerging land uses, such as management-intensive grazing, may offer a rare win–win strategy combining profitable food production with rapid improvement of soil quality and short-term climate mitigation through soil C-accumulation. Introduction Soils represent the largest terrestrial C reservoir, containing 2,300 Pg of soil organic carbon (SOC) down to 3 m (ref. 1 ). Agricultural cultivation has likely decreased soil C stocks by one-half to two-thirds, with a cumulative loss near 30–40 Mg C ha −1 (ref. 2 ) that has likely contributed 78±12 Pg of C to the atmosphere as carbon dioxide (CO 2 ) 3 . Lower SOC abundance also has a direct impact on soil productivity. Soil organic matter (SOM) enhances soil nutrient-retention, water-holding capacity (WHC) and facilitates healthy microbial communities essential for sustaining high levels of food production 4 . Land-management strategies that sequester C and improve soil quality can potentially reverse soil degradation 5 ; if they can also sustain food production then they deserve careful consideration. The accumulation of C through land management has the potential to offset up to one-third the annual increase in atmospheric CO 2 (ref. 3 ). The most rapid C accumulation is achieved in (sub)tropical aggrading forests or plantations, which have aboveground C accumulation rates as high as 30 Mg ha −1 yr −1 (ref. 6 ) that can be sustained through wood harvest and conversion to building materials. The highest rates of belowground C accumulation occur when land is converted to grassland ecosystems 7 , 8 , 9 , 10 , 11 . Grasses allocate substantial C belowground and can extend their root structures to great depths, leading to thick soil horizons rich in organic matter. Many studies have quantified SOM increases following conversion to grasslands, native prairies and through implementation of conservation tillage, cover crops or perennialization of agroecosystems 3 . Accumulating C in the soil has additional benefits of higher nutrient retention and water-use efficiency—hence improving the sustainability of soil-based agriculture and forestry. Converting tilled cropland to grazed pasture can drive substantial SOC accumulation 12 , 13 while providing food and economic return for a landowner. 
Intensive grazing systems, such as those studied here, have been tested and used extensively in temperate regions of Australia and New Zealand, and the increases in soil C content and benefits of higher soil nutrient status are well documented 14 , 15 , 16 , 17 . Optimizing either pasture fertilization, irrigation or grazing intensity can each improve C accumulation rates by up to 0.5 Mg C ha −1 yr −1 (ref. 9 ). Intensive pasture management has yielded rates of C accumulation as high as 2.9 Mg C ha −1 yr −1 following the conversion from cropland 13 ; the intensive management required is most economical for high-value end products, such as milk and dairy products. The potential SOM and CO 2 -sequestration benefits of investing in pasture-based land management are likely greatest in highly degraded soils in warm subtropical climates that facilitate long pasture-growing seasons. Highly weathered soils of the subtropics are particularly sensitive to SOM amounts since they often lack minerals, such as 2:1 layer aluminosilicates, with high-nutrient-retention capacity 18 . A century of intensive tillage techniques has exacerbated SOM loss and soil erosion on the highly weathered subtropical soils in the southeastern United States 19 , 20 . However, the long growing seasons, relatively inexpensive land, regional milk supply shortfalls and high return on investment have fuelled an expanding dairy farming boom 21 , 22 with ∼ 4,000 ha of row crop converted to management-intensive grazing farms in Georgia alone since 2005 (co-author Hancock, personal communication). We sought to determine how fast and how much soil C accumulates following conversion of row crop agriculture to management-intensive grazed pastures in the southeastern United States. We quantified the magnitude and direction of changes in SOC stocks by sampling across grazed pastures established in 2006, 2008 and 2009, defining a chronosequence of land-use change. These pastures were established on land farmed for cotton and peanuts using conventional tillage techniques for the last 50 years, resulting in a SOC content of ∼ 0.5% in the surface horizons ( Supplementary Fig. 1 ). We sampled soil pits and replicated soil cores over 2 years across all pastures and measured soil C, δ 13 C, and soil bulk density. Results Rapid soil C accumulation Here we report a rapid rate of soil C accumulation accompanying conversion of row crop agriculture land to intensively grazed pastures ( Fig. 1 ) that is on par with the global record for soil C accumulation rate associated with tropical grasslands in Brazil 23 . Fisher et al. 23 illustrated that the introduction of deep-rooting African grasses to lowland savannas in tropical South America drives C accumulation rates of 7.1 Mg C ha −1 yr −1 and suggested that other fields may have rates as high as ∼ 13 Mg C ha −1 yr −1 (refs 23 , 24 , 25 ). In our pastures, we find that peak C accumulation occurs 2–6 years after pasture establishment with a gain of 8.0±0.85 Mg C ha −1 yr −1 ( r 2 =0.88, P <0.0001) in the upper 30 cm of soil ( Fig. 2 ).
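The accumulation rate just quoted is simply the slope of soil C against time since conversion, with r 2 and P from an ordinary least-squares fit. A minimal Python sketch follows; the sample points are hypothetical, chosen only to mimic the 2–6-year window of Fig. 2.

```python
# Minimal sketch of the rate estimate: slope of soil C vs. time since
# conversion from an OLS fit; the data points below are hypothetical.
import numpy as np
from scipy.stats import linregress

years = np.array([2.0, 2.5, 3.0, 4.0, 4.5, 5.0, 5.5, 6.0])
soc = np.array([7.5, 11.0, 16.0, 22.5, 27.0, 30.5, 34.0, 38.0])  # Mg C/ha

fit = linregress(years, soc)
print(f"rate = {fit.slope:.1f} Mg C/ha/yr, "
      f"r^2 = {fit.rvalue ** 2:.2f}, P = {fit.pvalue:.2g}")
```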
Following an apparent lag in the first 2 years, the most recently established farm (converted in 2009) accumulated C at 4.6 Mg C ha −1 yr −1 ( r 2 =0.79, P =0.0184), the middle-established farm (converted in 2008) accumulated C at an average rate of 9.0 Mg C ha −1 yr −1 ( r 2 =0.80, P =0.0395), while the earliest-established farm (converted in 2006) accumulated C at 2.9 Mg C ha −1 yr −1 ( r 2 =0.97, P =0.1125) before an apparent decline in the accumulation rate at 6.5 years following conversion. In all cases, detectable increases in C accumulation were limited to the upper 30 cm ( Supplementary Figs 1 and 2 ), with variability below 30 cm yielding an integrated C accumulation rate of 7.1±2.7 Mg C ha −1 yr −1 in the top metre ( Supplementary Fig. 3 ). This suggests that accumulation at depth may require a longer timeframe or a shift in management practices. Figure 1: Land use shift from row crop to management-intensive grazing. Aerial photographs of the Wrens Farm (Site ID: 2006) taken in 2006 ( a ) before conversion to a management-intensive grazing system and ( b ) in 2013, ∼ 7 years after land conversion. Full size image Figure 2: Soil carbon rapidly increases with conversion of row crop to intensive grazing. Soil carbon (Mg C ha −1 ) content shown for the top 30 cm of farms converted in 2006 (green symbols), 2008 (blue symbols) and 2009 (black symbols) and a control farm currently in row crop (grey symbols). Samples from soil pits and soil cores are distinguished by circles and triangles, respectively; open versus closed black circles are from different locations on the 2009 farm. The linear regression (solid line: r 2 =0.88, P <0.0001) and 95% confidence intervals (dashed lines) are for data between 2 and 6 years since conversion only. The grey-shaded arrows represent our interpretation of soil carbon change in this system on the basis of current data. Full size image Why we observe high rates of soil C accumulation Soil δ 13 C values are an efficient tracer of this land-use change. The 50+ years of C3 cotton and peanut cultivation established soil δ 13 C values much lower than the major C inputs from grazed pasture grasses, including manure ( δ 13 C=−14.88) and roots ( δ 13 C=−20.03) from the C4 bermudagrass that comprises the bulk of warm season forage on these farms ( Supplementary Fig. 4 ). Consistent with this, the farm exhibiting the highest C accumulation rate also exhibited large changes in δ 13 C, while the farm established in 2006 showed little to no change in δ 13 C over the 2-year study period. However, the farm established in 2006 had root mass abundance (standing stocks) two to three orders of magnitude higher than the more recently established farms ( Supplementary Table 1 ), suggesting high belowground C cycling from root turnover. The transition from crop to pasture systems results in an average 19% increase in soil C stocks 26 . In our intensively grazed systems, we report an ∼ 75% increase in C stocks within 6 years of conversion. This high C accumulation rate stems from year-round intensive forage/grazing management techniques on sandy soils with an initially low soil C content due to past conventional-till row crop agriculture. The pastures in this study are managed for maximum forage production, employing N fertilization, irrigation and selective rotational grazing designed to optimize forage digestibility and protein content.
These forage-management techniques are precisely those suggested to increase SOM in pasture systems 13 , and when they are applied to soils with degraded SOC content, such as soils in the southeastern United States, rapid C accumulation ensues. Consequences for soil quality In addition, the establishment of intensively grazed pastures improved soil quality by increasing soil cation exchange capacity (CEC) and WHC during the 5 years of transition documented by our chronosequence ( Fig. 3 , Supplementary Figs 5 and 6 and Supplementary Table 2 ). CEC increased by 95% in the top 30 cm for an average rate of 44 meq g −1 yr −1 , while WHC increased by 34%. These soil improvements should reduce the need for fertilizer and water inputs and may also mitigate nitrogen (N) losses from the agricultural system. In a companion study on the oldest chronosequence farm, only half of the 620 kg ha −1 of N inputs (silage, hay, grain and mineral fertilizer) were accounted for as N outputs (milk, N 2 O, NH 3 and leached NO 3 ), with the balance presumably sequestered in the SOM pool or lost via denitrification 27 . Figure 3: The relationship between percent soil C and cation exchange capacity. The relationship is shown at different soil depths along the chronosequence. Soil CEC increases with soil C accumulation. The dashed line is a linear regression of the data ( P <0.001). Full size image Six years after conversion, our data suggest that an apparent plateau in SOC accumulation occurs at ca. 38 Mg C ha −1 in the top 30 cm, which is consistent with peak SOC stocks in the region 13 , 20 , 28 . C stocks as high as 51 Mg C ha −1 (top 20 cm) have been measured in a bottomland Piedmont forest soil (ref. 20 and references within), suggesting that C may continue to accumulate in these intensively grazed pastures over millennial timescales at slower rates 29 . In New Zealand pastures, soil C stocks have been estimated as high as 109 and 138 Mg C ha −1 (refs 14 , 17 ), and once these soils reach a higher SOC level they can become susceptible to C loss if management changes 15 . Total C stock will be determined by grass productivity, soil physical and biological attributes, and the degree of physical disturbance, which can all change with future management 11 , 20 , 25 . Extrapolation of results The C-accumulation benefits of adopting management-intensive grazing practices are notable. We estimate that if just 10% of the 9 million hectares 30 of cropland in the southeastern United States (average C stock: 10 Mg C ha −1 ) were converted to management-intensive grazing land, ∼ 4.5 Tg of C would be accumulated per year. Assuming peak SOC stocks in the top 30 cm were reached at 40 Mg C ha −1 , this accumulation would continue for a minimum of 6 years, reaching a regional accumulation of 27 Tg C. On the basis of a whole-farm C-cycle analysis, C accumulation appears to offset methane emissions 31 during the rapid soil C accumulation phase, yielding net C-sequestration rates of ∼ 1.6 Mg C-CO 2 equivalents ha −1 yr −1 or ∼ 5,805 kg CO 2 equivalents ha −1 yr −1 (see Methods section for calculation details). As the C accumulation rate declines, these farms will become net C-emitting (similar to all dairy production) because of ruminant methane emissions. However, the substantial soil-quality benefits of higher organic matter remain and will likely increase the sustainability of dairy production using management-intensive grazing.
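The extrapolation and net-flux figures in the preceding paragraph can be reproduced directly from the numbers given in the text, as the short worked example below shows.

```python
# Worked version of the regional extrapolation and net-flux numbers above;
# every input is taken directly from the text.
area_ha = 0.10 * 9.0e6          # 10% of southeastern US cropland (ha)
delta_stock = 40.0 - 10.0       # Mg C/ha gained (peak minus cropland stock)
years = 6                       # minimum duration of rapid accumulation

regional_tg = area_ha * delta_stock / 1.0e6   # 1 Tg = 1e6 Mg -> 27 Tg C
annual_tg = regional_tg / years               # -> 4.5 Tg C per year
print(regional_tg, annual_tg)

# Net farm-scale rate of ~1.58 Mg C-CO2 eq/ha/yr, expressed in kg of CO2
# equivalents using the 44/12 molar mass ratio of CO2 to C
net_c = 1.58                                  # Mg C-CO2 eq/ha/yr (Methods)
print(net_c * 44.0 / 12.0 * 1000.0)           # ~5,800 kg CO2 eq/ha/yr
```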
Discussion The expansion of this emerging land use practice on previously tilled, row crop land may improve soil quality regionally and could mitigate climate change via rapid increases in soil C. Two daunting challenges facing humanity are feeding a growing population and curtailing the impacts of climate change 5 . Alternative land use activities that reduce environmental degradation (for example, erosion, excess fertilizer and water demand) while providing economically viable food can offer a win–win scenario. There are considerable economic incentives for converting irrigated row crop land to grazed pastures used for dairy 22 , especially in warm climates where rainfall is sufficiently high. This practice could vastly improve soil quality 4 and serve as a C sink; however, the magnitude of SOC changes will depend on farm inputs (fertilizer, irrigation and pasture species), pasture management techniques 32 and specific soil conditions. Our results demonstrate that pasture-based intensively grazed dairy systems may provide a near-term solution for agricultural lands that have experienced soil-C loss from previous management practices. Emerging land uses, such as management-intensive grazing, offer profitable and sustainable solutions to our needs for pairing food production with soil restoration and C sequestration. Methods Site description The three farms used in this study were within 40 km of each other on the eastern coastal plain in Burke and Jefferson Counties, Georgia, USA. Soils in this region are deep, well-drained, fine or fine-loamy Kandiudults formed on Coastal Plain sediments and are common to the geographic region. The average annual temperature in this region is 17 °C and annual precipitation is 1,224 mm yr −1 (University of GA Environmental Monitoring Network, 1915–2003). From April to November, evapotranspiration exceeds rainfall and farms irrigate with high-quality aquifer water to meet pasture needs. Forage species in the pasture systems include a hybrid of bermudagrass and stargrass ( Cynodon dactylon (L.) Pers. × C. nlemfuensis Vanderyst; cv. ‘Tifton 85’) as a perennial forage base (grown May through October) that is over-seeded in the fall with annual ryegrass ( Lolium multiflorum Lam.; cv. ‘Big Boss’, ‘Marshall’ and/or ‘Feast’). Individual paddocks received stock densities ranging from 75 to 150 animal units per hectare during any given 12-h period. Dairy cattle were rotated among 45–60 paddocks (depending on farm layout) twice daily, completing a cycle of grazing all paddocks within 15–45 days, depending on time of year, growth rate and quality of the forage, and the nutritional needs of the herd. Soil characteristics and sampling procedures Soils are classified in the Orangeburg (Fine-loamy, kaolinitic, thermic Typic Kandiudults), Dothan (Fine-loamy, kaolinitic, thermic Plinthic Kandiudults) or Faceville (Fine, kaolinitic, thermic Typic Kandiudults) series and the dominant soil minerals are kaolinite, hematite and quartz. Soils were sampled in April, August and December 2011 and December 2012 by genetic horizon from 1 m pits for soil analyses and bulk density ( Supplementary Tables 2–4 ). Soils were sampled from the bottom of the pit to the top to reduce contaminating subsurface horizons with surface litter. Stainless steel cores were pushed horizontally into pit horizons, removed and dried at 105 °C to determine bulk density. CEC and WHC were determined on pit sample horizons within the top 30 cm.
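Converting the pit measurements described above into the areal C stocks reported in Results requires only the horizon %C, bulk density and thickness. A minimal sketch of that conversion follows; the three horizon values are hypothetical.

```python
# Sketch of converting horizon %C and bulk density to an areal soil C stock;
# the horizon values below are hypothetical.
def soc_stock_mg_ha(percent_c, bulk_density_g_cm3, thickness_cm):
    # g C per cm^2 = (%C/100) x BD (g/cm^3) x thickness (cm);
    # 1 g/cm^2 = 100 Mg/ha, hence the final factor of 100
    return (percent_c / 100.0) * bulk_density_g_cm3 * thickness_cm * 100.0

horizons = [(1.5, 1.30, 5.0),    # (%C, BD, thickness) for 0-5 cm
            (0.9, 1.45, 10.0),   # 5-15 cm
            (0.5, 1.50, 15.0)]   # 15-30 cm
stock = sum(soc_stock_mg_ha(c, bd, th) for c, bd, th in horizons)
print(stock)                     # ~34 Mg C/ha over 0-30 cm
```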
To assess spatial variability, replicate soil cores (five cores composited and replicated six times) were sampled (0–30 cm) at each site in December 2011 and December 2012 (Supplementary Fig. 7, Supplementary Table 5). A row crop (for example, cotton, corn and peanut rotations) farm on soil similar to that of the grazing dairies was sampled to approximate pre-conversion soils. The row crop farm used in this study was adjacent to the middle-aged (2008) farm and representative of row crop practices in the region.

Soil analyses

Soils were sieved (2 mm); fine roots and plant material that passed through the sieve were handpicked, and static electricity was used to remove abundant fine roots in the 0–5 cm horizon. Soil C and δ13C were assessed by finely grinding samples with a zirconium mortar and pestle and analysed with a coupled continuous-flow elemental analyser-isotope ratio mass spectrometer (Carlo-Erba 1108 EA interfaced to a Thermo-Finnigan Delta Plus XP IRMS). Percent carbon was converted to an areal basis using bulk density measurements. Bulk density was determined on soils collected in August 2011, December 2012 and July 2013 (youngest farm only) using open-core steel rings (7.5 cm depth × 7.5 cm diameter). For the April 2011 sampling, bulk density was collected using 3.4 cm depth × 8.5 cm diameter cores with one end sealed, which gave abnormally high values in some horizons. Because of this, for the middle-aged farm and the youngest farm, we used the August 2011 bulk density values in our April 2011 calculations (Supplementary Table 3). A homogenized soil sample (air-dried and 2-mm sieved) was sent to the University of Georgia Soil, Plant, and Water Laboratory for chemical analysis. For each sample, P, K, Ca, Mg, Mn, Fe and Zn were extracted using the Mehlich I procedure 33, 34, 35 and simultaneously determined using an inductively coupled plasma spectrometer (Thermo Jarrell-Ash Enviro I ICAP, Franklin, MA). Soil pH was measured in a 1:1 soil:CaCl2 (0.01 M) suspension. Particle size analysis was performed using the rapid hexametaphosphate method 36. Briefly, 3% hexametaphosphate was added to air-dried soils (2-mm sieved) and used as a dispersal agent, followed by fractionation and quantification of each particle size by sieving. CEC was measured using a modified, unbuffered salt extraction method 37, and exchanged NH4+ was analysed colorimetrically (Alpkem RFA-303) 38. WHC was measured by applying the appropriate pressure for field capacity (0.1 bar) and wilting point (15 bar) for sandy soil textures 39.

Belowground root standing stock estimates

Coarse roots were defined as those greater than 2 mm and fine roots as those less than 2 mm. All coarse roots were handpicked from the >2 mm sample, washed in DI water, lyophilized and quantified by mass. This value was then divided by the total mass of soil it was sieved from and reported on a per 100-g soil basis. Fine roots were handpicked from a 100-g subsample for no more than 45 min, with static electricity and flotation on DI water used to remove abundant fine roots in the 0- to 5-cm horizons at the termination of the time period. All samples were then washed, lyophilized and weighed. Values of coarse and fine roots were combined to give a total root mass per 100-g soil sample.

Whole farm C sequestration calculation
Belflower et al. 31 compared the whole-farm C-cycle balance between a Georgia conventional dairy farm and one of the intensively grazed dairies used in our study (Farm Site ID-2008 reported here) using the Integrated Farm System Model. They reported an emission rate of ∼2,350,700 kg CO2 equivalents per year for this farm. By factoring in our estimates of soil C accumulation (8.0 Mg C ha−1 yr−1), we calculate a net soil sequestration rate of ∼1.58 Mg C-CO2 equivalents ha−1 yr−1 or ∼5,805 kg CO2 equivalents ha−1 yr−1. Further, we estimate that a soil C accumulation rate of 6.4 Mg C ha−1 yr−1 or higher is required for these systems to serve as a net C sink. While such a high rate cannot be maintained indefinitely, intensively grazed pasture does meet this criterion for at least an initial 5-year period following land use change.

Statistics

Linear regression models were performed in JMP 10.0 (SAS Institute, Cary, NC). An alpha level of 0.05 was used to determine significant effects.

Additional information

How to cite this article: Machmuller, M. B. et al. Emerging land use practices rapidly increase soil organic matter. Nat. Commun. 6:6995 doi: 10.1038/ncomms7995 (2015). | Well-maintained pastures prevent erosion, protect water and, as it turns out, can restore the soil's organic matter much more quickly than previously thought, according to a team of researchers from the University of Georgia and the University of Florida. Soil contains the largest terrestrial reservoir of carbon. Tilling fields every year to plant crops releases soil carbon into the atmosphere. It has long been known that transitioning cropland to pastureland where livestock grazes replenishes the soil's carbon, but their study showed that the process can be much more rapid than scientists previously thought. "What is really striking is just how fast these farms gain soil organic matter," said Aaron Thompson, associate professor of environmental soil chemistry and senior author on the study. "In less than a decade, management-intensive grazing restores these soils to levels of organic matter they had as native forests. These farms accumulate soil carbon at rates as fast as ever measured globally." The rate of carbon increase was so high for the first six years that capturing carbon in the soil could also help offset the planet's rise in atmospheric carbon dioxide. Converting to pastures managed using intensive grazing principles can capture up to 8 metric tons of carbon per hectare, or 3.6 tons per acre, per year in the soil. This makes the soils more nutrient-rich and allows them to hold more water. The study, funded by the National Institute of Food and Agriculture and published in the May edition of the journal Nature Communications, tracked changes in soil organic matter on Georgia farms that had changed within the last six years from growing row crops to producing milk as grass-fed dairies. On most North American dairies, hay and silage crops are cultivated in fields separated from the cows' pasture and then fed to the herd as needed. But in management-intensive grazing, the cows spend 90 percent of their time out on pasture. "We found that converting cropland to rotational grazing systems can increase soil organic matter and improve soil quality at rates much faster than previously thought possible in a system that sustains food production," said the study's lead author, Megan Machmuller, who worked on the three-year project as a doctoral student in UGA's Odum School of Ecology.
She is now a postdoctoral fellow at Colorado State University. Management-intensive grazing, a practice growing in popularity among Southeastern dairy farmers and pasture-based beef cattle farmers, allows producers to efficiently use the nutrition provided in their pastures. In addition to emphasizing pasture quality and quantity for the cattle, these management-intensive grazing practices also feed the biological activity within the soil. This fosters the development of organic matter, thus capturing larger quantities of carbon that would otherwise be released into the atmosphere. "These systems are proliferating throughout sub-tropical regions that allow year-round grazing, which increases their profitability. They could offer a rare win-win in land management, providing profitable food production with rapid soil restoration and short-term climate mitigation," said study co-author Nick Hill, a professor of crop physiology at UGA. "In Georgia, the number of pasture-based dairies has expanded rapidly since 2005. Many of these farmers are using pastureland that was once devoted to row crops," said study co-author Dennis Hancock, an associate professor and UGA Extension forage specialist. "Once their pasture-based operations were up and running, they began reporting that they were seeing less need for fertilizer and irrigation in order to maintain their forage crops. "The carbon accumulation in soils under pasture-based dairy production in Georgia has major implications in the Southeast, as it shows the 'carbon footprint' of these dairy systems is far more positive than previously thought." The team made additional soil quality measurements after hearing the farmers' anecdotal evidence. They also found that after six years of management-intensive grazing, the soil could retain 95 percent more nutrients and 34 percent more water. The impacts of this system on soil fertility and quality are potentially greatest for heavily degraded soils, like those in the Southeast. Dairymen who farm sandy soils like those in the coastal plain of the Southeastern U.S. need all the help they can get with these soil properties, according to Hancock. Often, having good soil organic matter and the benefits that come from it can be the difference between losing and making money. Most future land use change is expected to take place in existing agricultural and pastoral lands, said study co-author Marc Kramer, an associate professor in the soil and water science department at the University of Florida. "Emerging land use activities such as intensive grazing show what is achievable in terms of profitable farming with clear carbon cycle and soil fertility benefits," he said. "It is the tip of the iceberg really." | 10.1038/ncomms7995
Computer | A diffractive neural network that can be flexibly programmed | Che Liu et al, A programmable diffractive deep neural network based on a digital-coding metasurface array, Nature Electronics (2022). DOI: 10.1038/s41928-022-00719-9 Journal information: Nature Electronics | http://dx.doi.org/10.1038/s41928-022-00719-9 | https://techxplore.com/news/2022-03-diffractive-neural-network-flexibly.html |

Abstract

The development of artificial intelligence is typically focused on computer algorithms and integrated circuits. Recently, all-optical diffractive deep neural networks have been created that are based on passive structures and can perform complicated functions designed by computer-based neural networks. However, once a passive diffractive deep neural network architecture is fabricated, its function is fixed. Here we report a programmable diffractive deep neural network that is based on a multi-layer digital-coding metasurface array. Each meta-atom on the metasurfaces is integrated with two amplifier chips and acts as an active artificial neuron, providing a dynamic modulation range of 35 dB (from −22 dB to 13 dB). We show that the system, which we term a programmable artificial intelligence machine, can handle various deep learning tasks for wave sensing, including image classification, mobile communication coding–decoding and real-time multi-beam focusing. We also develop a reinforcement learning algorithm for on-site learning and a discrete optimization algorithm for digital coding.

Main

The development of artificial intelligence (AI) is focused on two main technical areas: computer-based machine learning methods such as deep learning 1, extreme learning 2 and reinforcement learning 3, and integrated circuits and optical chips for specific functions 4, 5, 6, 7. AI is typically created by using hierarchical artificial neural networks (ANNs) to imitate the structure of neurons 1 and simulate intelligent actions in human decision-making processes, and it has found widespread applications in face recognition 8, 9, automatic driving 10, 11, language processing 12, 13 and medical diagnostics 14, 15. Beyond computer-based and circuit-based AI technologies, fully wave-based ANNs have also recently been developed using three-dimensional (3D) printed optical lens arrays 16. Such diffractive optical platforms (also known as diffractive deep neural networks; D2NN) take advantage of the wave property of photons to realize parallel calculations and simulate different interconnection structures at the speed of light 16, 17. However, wave-based D2NNs are still passive devices and thus have fixed network architectures once fabricated and cannot be re-trained for other targets and tasks, which limits their functionalities and applications. A tunable D2NN modulated by reflection-type spatial light modulators (termed a reflection-type D2NN) was recently developed 18, but the reflection light path limits the number of layers of the neural networks, and only a two-layer neural network was created. In another reflection-type D2NN, an optoelectronic structure was used to increase the number of layers 19. In this approach, the connection between layers was implemented with electronic circuits and thus a time delay was introduced, breaking the light-speed calculation capabilities.
A transmission-type D2NN would be the preferred scheme to achieve both multi-layer neural networks and light-speed computing 16, but D2NN hardware with real-time programmable nodes and quick-response learning ability is necessary. Furthermore, current D2NN platforms still rely on a computer to optimize the parameters and require prior information about the environment 16, 17, 18, 19. In this Article we report an active and reprogrammable transmission-type D2NN structure that can directly process electromagnetic (EM) waves in free space for wave sensing, identification and wireless communications. The weight-reprogrammable nodes, which are necessary to create a programmable and re-trainable wave-based D2NN structure, are achieved using programmable metasurfaces 20 and information metasurfaces 21. We term our system a programmable artificial intelligence machine (PAIM).

Programmable artificial intelligence machine

In the past decade, sophisticated metamaterials and metasurfaces have been developed for manipulating light and EM waves 22, 23, 24, 25. The digital-coding representation of meta-atoms allows an information metasurface to have reprogrammable abilities and provides a direct connection between the physical world and the digital world 20, 21, 26, 27. With the aid of controllable active components, programmable metasurfaces can manipulate reflected or transmitted EM waves in real time under digital instructions from field-programmable gate arrays (FPGAs). Various functions and applications have been achieved using such information metasurfaces, including wireless communications 28, 29, 30, 31, computational imaging 32, 33, space–time modulation 34 and self-adaptively smart devices 35, 36. To fabricate our PAIM we use an array of information metasurfaces in which the multi-layer metasurfaces act as the programmable physical layers of the D2NN. We design the PAIM to be a real-time re-trainable system, with parameters that can be digitally set to realize artificial neurons. In the physical layer, the PAIM can hierarchically manipulate the energy distribution of transmitted EM waves using a five-layer information metasurface array 37, 38, in which the amplitude of the transmitted wave through each artificial neuron (meta-atom) can be enhanced or attenuated by controlling the amplifier chips through FPGAs (Fig. 1a–c). The phase modulation of the transmitted wave is coupled with the amplitude modulation, and together they constitute the complex-valued gain factor of the programmable artificial neuron. The complex-valued gain factor is a one-to-one response corresponding to bias voltage, which is modulated by a customized FPGA circuit (Figs. 1c and 4b, and Supplementary Note 1). Fig. 1: A reprogrammable D2NN platform. a, An array of programmable metasurfaces is used to construct the PAIM, in which several FPGAs are installed to control the gain factor of each artificial neuron, making the PAIM a real-time and re-trainable intelligent machine. b, Schematic diagram of PAIM. An artificial neuron in the learning layer will receive the waves radiated from all artificial neurons in the former layer, making the PAIM structure a fully connected network. The transmission coefficient of each artificial neuron can be trained by using supervised/unsupervised learning or even reinforcement learning methods to achieve various functions.
The first layer acts as the input layer by using preset transmission coefficients to encode the input information into the spatial distribution of the EM energy. c, The assembled FPGA modulation network for PAIM. The receiving antenna array will receive the field distribution of the EM waves, which is processed by FPGAs. FPGAs also guide the update of the bias voltages of the artificial neurons. d, The transmitted wave of an artificial neuron, multiplied by propagation factors \(W_i\), illuminates all artificial neurons in the next layer. The EM wave is then multiplied by the complex-valued transmission coefficient \(T_i\) to act as the secondary source of the wave. e, The radiation pattern of an artificial neuron. When the incident beam passes through a programmable artificial neuron in the first-layer metasurface, the amplitude and phase of the transmitted wave are determined by the product of the incident electric field and the complex-valued transmission coefficient of the artificial neuron, and the transmitted wave will act as a secondary source and illuminate all programmable artificial neurons in the second-layer metasurface (Fig. 1b,d), based on the Huygens–Fresnel principle 16. The transmitted waves (from all directions) illuminating an artificial neuron in the second layer are then added up, and the sum acts as the wave incident on that artificial neuron in the second-layer metasurface (Fig. 1b). This process is continued until the last metasurface layer. The reprogrammable interconnection architecture of the PAIM is fundamental to achieving dynamic artificial neurons. According to the radiation pattern of the artificial neuron (Fig. 1e), the power transmitted by the artificial neuron presents a certain gain distribution on the next-layer metasurface (Fig. 1d). Accordingly, the forward propagation model (see Forward propagation model section) of the PAIM can be regarded as a fully connected network (Supplementary Fig. 1). However, compared with the traditional fully connected network constructed from real numbers 1, the PAIM parameters have complex values and the trainable parts are the complex-valued transmission coefficients of the artificial neurons. Hence, we have fewer trainable parameters. The traditional error backpropagation method can be used to train the PAIM parameters (see Error backpropagation section). Also, owing to its fast parameter-switching ability and direct feedback from receivers (Fig. 1c), the PAIM enables self-learning by using data gained from direct interaction with the environment, and does not need any prior knowledge. Thus, the PAIM possesses reinforcement learning capacity 39. We note that the artificial neurons work in their linear range in this study, so the PAIM is a linear system in the complex domain. Because we measure the amplitude of the output electric fields as the final output of the PAIM, and this requires calculating the modulus of complex numbers (complex electric fields), with the modulus calculation being a nonlinear activation function, it is interesting that the PAIM is equivalent to a linear neural network connected to a nonlinear activation function and can handle some nonlinear classification problems, such as exclusive-OR classification problems (Supplementary Note 2). Nevertheless, technically speaking, the PAIM is still a linear system from the viewpoint of photonic neuromorphic computing.
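To make this point concrete, the toy script below (our illustration, not the authors' code) shows that a single complex-valued linear readout followed by a modulus measurement reproduces the exclusive-OR truth table, something no purely real-valued linear readout can do. Here the two hand-picked "neurons" have equal gain and opposite transmission phases.

import numpy as np

# Toy sketch: complex linear map + modulus readout solves XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=complex)
y = np.array([0, 1, 1, 0])                      # XOR labels
w = np.exp(1j * np.array([0.0, np.pi]))         # equal amplitude, opposite phase
energy = np.abs(X @ w)                          # modulus acts as the nonlinearity
pred = (energy > 0.5).astype(int)
assert (pred == y).all()                        # pred = [0 1 1 0]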
We also note that nonlinear modulation of the PAIM in the complex domain could be explored in the future by making the artificial neurons work in the nonlinear range 40, which has the potential to further increase the fitting quality of our PAIM system. When processing given data, we make the first-layer programmable metasurface a digital-to-analogue converter that modulates the given data into an amplitude distribution of the EM wave when illuminated by plane waves (Fig. 1a,b). The transmitted EM waves then carry the information of the given data and are processed by the remaining metasurface layers. Therefore, without using independent and complicated input modules, the PAIM is more flexible and compact than all-optical D2NN platforms 16, 17. In fact, the PAIM can directly receive and process the EM waves radiated by radars, communication base stations and wireless routers, making it more environmentally compatible. (The relationship between the PAIM and current digital hardware is discussed in Supplementary Note 3.)

Image classification tasks based on the PAIM

To verify the capabilities of the PAIM, we first used it to deal with two image classification tasks: oil-painting (Fig. 2) and handwritten-digit (Supplementary Fig. 2) classification. In the first task, using two kinds of oil painting (portraiture and landscape painting), we simulated the PAIM with six metasurface layers, each consisting of 25 × 25 programmable artificial neurons. The input image was greyed and reshaped to 25 × 25 pixels (corresponding to the size of the metasurface) and then input to the PAIM by configuring the first-layer metasurface, in which the transmission coefficient of each artificial neuron was set to the corresponding pixel value of the image. Thus, the EM wave carries the information of the input image after passing through the first layer. The remaining five layers constituted the recognition network. At the end of the PAIM we assigned two receivers, one for each kind of oil painting. The energy collected at each receiver represents the probability that the input image belongs to the corresponding class, and the receiver with the maximum received energy indicates the classification result (Fig. 2). After training with 500 oil-painting images and testing with 100 images, the mean accuracy rate of recognizing the two oil painting styles was 97%. The number of trainable parameters was only 5 × 25 × 25, far fewer than required in traditional fully connected ANNs. A similar architecture was used for the handwritten-digit classification, with a layer size of 40 × 40 and with the handwritten digits classified into ten different kinds; this reached a 90.76% classification accuracy after training (Supplementary Fig. 2). Fig. 2: The simulation results for oil-painting recognition. Before inputting to the PAIM, the original images (first column) are greyed and reshaped to 25 × 25 pixels (second column). According to the output energy distribution (third column), the receiver region with the maximum received energy corresponds to the classification result (fourth column). The results indicate that the PAIM has the potential to imitate high-level human intelligence, such as art appreciation. Credits: (top to bottom) World History Archive/Alamy Stock Photo; Sowa Sergiusz/Alamy Stock Photo; Lebrecht Music & Arts/Alamy Stock Photo; Martin Shields/Alamy Stock Photo; Artefact/Alamy Stock Photo; classicpaintings/Alamy Stock Photo.
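Two small arithmetic notes on the classification architecture above may help orient the reader; the dense-network comparison is a generic estimate of our own, not a figure from the paper, and the max-energy readout is our paraphrase of the scheme described in the text.

import numpy as np

# Trainable parameters: five recognition layers of 25 x 25 complex coefficients
paim_params = 5 * 25 * 25                  # = 3,125
# A real-valued fully connected net of the same layer width would need roughly
dense_params = 5 * (25 * 25) ** 2          # ~1.95 million weights

def classify(receiver_energies):
    # Max-energy readout: the receiver collecting the most energy wins.
    return int(np.argmax(receiver_energies))

print(paim_params, dense_params, classify(np.array([0.83, 0.17])))  # 3125 1953125 0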
To demonstrate the versatility of the PAIM for use in the real world, we designed and fabricated a PAIM sample with five programmable metasurface layers controlled by five FPGA modules, with each layer consisting of 8 × 8 artificial neurons (Figs. 1a and 4b). Each artificial neuron, integrated with two amplifiers (Supplementary Fig. 3), can individually modulate more than 500 different grades of transmitted gain of the EM wave (Supplementary Fig. 4) under the control of bias voltage by means of FPGAs. We use the programmable transmitted gain to represent the network weight in the ANN, and hence the artificial neuron can be regarded as a dynamic neuron in the fully connected network. The support structure of the PAIM sample is presented in Supplementary Fig. 5; the first layer (that is, the input layer) is illuminated by microwaves at 5.4 GHz radiated by a horn antenna. The measurement was performed in a standard microwave chamber (Supplementary Fig. 6c,d). To test the real experimental performance of the PAIM in image classification, we designed two example cases: classifying simple patterns and recognizing binary images of four digits (1 to 4). As mentioned above, the first-layer metasurface of the PAIM acts as a digital-to-analogue converter to convert the input image into the corresponding spatial distribution of EM waves. More specifically, different pixel values in the input image correspond to different transmission coefficients of the artificial neurons in the first layer, radiating the EM waves with different spatial distributions onto the second layer. The remaining four layers act as the recognizer, and several receiving antennas are placed at the end of the PAIM. The training process is run on a computer to obtain the appropriate transmission coefficients for the artificial neurons in the recognizer. A gradient backpropagation algorithm could be used to train the PAIM. However, because the adjustable parameters of the digital-coding metasurfaces are discrete, we specially designed a discrete optimization algorithm (see Discrete optimization algorithm section) to make the training results more practical. Meanwhile, we calibrate the wave propagation coefficients between two adjacent metasurface layers using a gradient descent method to make our training results more precise. (More details on the calibration approach are provided in Supplementary Note 11.) For our first experimental case, image classification, we used two simple patterns, the letter ‘I’ and brackets ‘[ ]’. The positions of the patterns could be altered to make the input images more varied. The pixels belonging to the pattern areas and the background were allocated different bias voltages of the artificial neurons in the first PAIM layer. The experimental results show that the PAIM can classify the two patterns with an accuracy of 100% (Fig. 3a–d and Supplementary Fig. 7). In the second case, digit recognition, we chose binary images of four digits (1 to 4). We discretized the original image into an 8 × 8 binary pixel matrix and used different bias voltages of the artificial neurons in the first PAIM layer to represent different pixel values. The bias-voltage configurations of layers 2–5 (corresponding to the recognition part of the PAIM) for this case are shown in Fig. 3i. A recognition accuracy of 100% was also achieved for these experimental tests (Fig. 3e–h).
Besides the aforementioned two image classification cases, an extra case (game props recognition), designed to demonstrate the ability of the PAIM to recognize colourful images, is provided in Supplementary Note 8 (Supplementary Fig. 8). Fig. 3: Experimental results of image classifications using PAIM. a–d, Two kinds of pattern (letter ‘I’ and brackets ‘[ ]’) are represented by the distributions of bias voltages of 8 × 8 artificial neurons in the first PAIM layer. The input image consists of 8 × 8 blocky squares of colour, and each square represents the bias voltage of an artificial neuron. The artificial neurons in the remaining four layers are assigned the bias voltages designed during the training process and can recognize the two patterns by ranking the receiving energies from the two receiving antennas. The patterns of letter ‘I’ in a and b and of ‘[ ]’ in c and d have different locations. e–h, The original images of four digits (1, 2, 3 and 4 in e, f, g and h, respectively) are discretized into 8 × 8 blocky squares of colour, representing different levels of bias voltages of 8 × 8 artificial neurons in the first PAIM layer. The images of the output distributions indicate the expected testing results. i, The bias-voltage configurations of layers 2 to 5, obtained by a training process run on a computer.

Code division multiple access based on PAIM

Besides image classification, we used the same PAIM for a mobile communication codec, which can perform coding and decoding tasks in the code division multiple access (CDMA) scheme and transmit four kinds of orthogonal user code simultaneously or separately in one channel. Each user code is a string of binary numbers with a length of 64. As shown in Fig. 4a, the first-layer PAIM metasurface is set as an encoder, on which each artificial neuron sequentially corresponds to one bit in the binary number string. When a high or low bias voltage is sent to the artificial neuron, it corresponds to the ‘1’ or ‘0’ bit in the 64-bit user code, respectively. We located four receiving antennas at the end of the PAIM, and each antenna represents a user code. When one of the antennas receives high-energy EM waves, this means that the corresponding user code related to this antenna is transmitted (Fig. 4a). Fig. 4: Experimental results for the encoder and decoder in the CDMA task using the PAIM. a, The energy distributions of all layers when radiating the coding information of the fourth user. The input layer acts as an encoder to transform the user code to the energy distribution in free space. The yellow and black dots in the input layer represent the binary digits ‘1’ and ‘0’, respectively. The PAIM receives the spatial energy distribution and decodes it via four metasurface layers to judge which user code has been transmitted. The distance between the input layer and the receiving plane (output layer) is 1.6 m. b, Photograph of one of the fabricated metasurface layers, which is controlled by 64 channel voltages with the corresponding FPGA. c–f, The energy distributions of user codes 1 to 4 (c–f, respectively) on the output plane when transmitting one user code, showing that only the correct receiver corresponding to the transmitted user code can collect high energy. g,h, The output energy distributions when simultaneously transmitting two (3 and 4, g) or four (1, 2, 3 and 4, h) user codes.
Only the receivers corresponding to the transmitted user codes collect high energies, indicating that all user codes could be decoded correctly and simultaneously. We use \(\{C_1, C_2, C_3, C_4\}\) to represent the four user codes and \(\{E_1, E_2, E_3, E_4\}\) to represent the receiving energies at the corresponding antennas. The remaining four PAIM metasurface layers are trained as a decoder. When \(C_1\) is transmitted by the first layer, the values of \(\{E_1, E_2, E_3, E_4\}\) will be \(f(C_1) = \{\mathrm{high}, \mathrm{low}, \mathrm{low}, \mathrm{low}\}\), in which the function f represents the linear forward propagation function of the PAIM, and the term ‘low’ indicates that the receiving energy is much less than that of ‘high’. Similarly, when \(C_3\) is transmitted, the receiving energy values would be \(f(C_3) = \{\mathrm{low}, \mathrm{low}, \mathrm{high}, \mathrm{low}\}\). In a more complicated situation, when two user codes \(C_1\) and \(C_3\) are transmitted simultaneously, the receiving energy values will be \(f(C_1 + C_3) = f(C_1) + f(C_3) = \{\mathrm{high}, \mathrm{low}, \mathrm{high}, \mathrm{low}\}\). The same is true when three or four user codes are transmitted simultaneously. Owing to the independence of wave propagation, the transmission of one user code has little influence on the transmission of others, and hence each user code can be transmitted independently in one channel. This property allows simplification of the training process: we do not need to train all combinations of the four user codes. Instead, when the output EM wave distribution of each user code conforms to the designed distribution, the combinations will satisfy the expectation automatically. The total loss function for training is the sum of mean square errors (m.s.e.s) between the designed output energy distribution and the one generated by inputting each of the four user codes. Random Gaussian noise is added to the training input to make our system more robust. The experimental results show that our PAIM can decode the user codes correctly even when they are overlapped (Fig. 4c–h and Supplementary Fig. 9). To demonstrate the strong capability of the PAIM for space–time telecommunications, we used it to transmit a picture of our laboratory’s emblem as 100 × 100 pixels, which could be represented as a binary sequence of 10,000 bits. The receiver was a patch antenna array that could detect the energy at each receiving domain corresponding to the relevant user code (receiving user domain). The modulation scheme for communication is amplitude modulation. In detail, when the receiving user domain has high energy at the current clock, the transmitted binary signal is ‘1’, and vice versa. Because we have four user codes corresponding to four user channels, we can transmit four images simultaneously. Equivalently, in this case, we chose to use different user channels to transmit different parts of the binary image, making the transmission speed four times faster than when using only one channel. Working at 5.5 GHz, our experimental communication system was placed in a noisy environment fouled by a commercial 5G Wi-Fi router. The distance between the transmitter and receiver was 1.6 m.
We first performed a contrast experiment in which we let the space–time amplitude-modulation signal (modulated by a single layer of the PAIM) propagate in free space (Fig. 5b,c). The received decoding image is shown in Fig. 5d. The error rate of transmission is 49.02%, making the received image indecipherable. Fig. 5: Space–time telecommunication system with and without the decoding part of PAIM. a, The space–time telecommunication system with the decoding part of the PAIM. The first layer of the PAIM acts as a transmitter, radiating space–time EM waves at 5.5 GHz. The remaining four layers of the PAIM are trained as a denoising and decoding processor located in the transmission channel. The EM environment is fouled by a commercial 5G Wi-Fi router. However, with the help of the PAIM decoder, the transmission error rate declines to 0.52%, with no extra time delay for signal preprocessing. b, The energy distribution on the receiving plane when code 3 and code 4 are transmitted into free space without the decoding part of the PAIM. c, The energy distribution on the receiving plane when all four codes are transmitted into free space without the decoding part of the PAIM. d, The space–time telecommunication system without the decoding part of the PAIM. The first layer of the PAIM acts as a transmitter, radiating space–time EM waves (according to four user codes) at 5.5 GHz, which reach the receiver through free-space propagation. The EM environment is fouled by a commercial 5G Wi-Fi router, causing a high error rate of transmission (49.02%) when transmitting a binary image of our laboratory’s emblem. We then trained the remaining four layers of the PAIM as a denoising and decoding processor and located it in the transmission channel, as shown in Fig. 5a. In this case, the received image becomes legible, with an error rate of 0.52%, verifying that the PAIM can directly denoise and decode the information in wave space during the EM propagation process, with nearly no extra time delay. The speed of transmission is 1,000 bit s−1. However, we remark that the transmission speed is only determined by the code-switching rate of the transmitting terminal and the signal-sampling rate of the receiving terminal, which could be remarkably accelerated by using mature communication devices. After training, the configuration parameters of the denoising and decoding parts of the PAIM are constants, meaning that these functions still work even when the transmission speed is extremely high. Unlike the traditional CDMA scheme, the PAIM performs the encoding, decoding and denoising functions directly in the wave domain (Supplementary Fig. 10), so the PAIM has the advantage of reducing the time delay in wireless communications. Furthermore, its strong capability to process distributed space EM waves makes the PAIM a good candidate for realizing space division multiplexing and thus increasing channel capacity. The decoding function of the PAIM is operated as an independent system and is able to deal with the signals from distributed communication base stations (Supplementary Fig. 11). As spectrum resources have become exhausted, the space division multiplexing technique has received increasing attention and is becoming one of the key technologies in the fifth and sixth generations of wireless communications 41.
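The linearity argument behind the CDMA decoding, f(C1 + C3) = f(C1) + f(C3), can be illustrated numerically. In the sketch below (a simplified software stand-in of our own, not the trained metasurface), the decoder is idealized as a pseudoinverse so that each code steers energy to its own receiver; superposed codes are then decoded by thresholding the four receiver energies.

import numpy as np

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(4, 64)).astype(float)  # four 64-bit user codes

# Idealised linear decoder: f(code_k) is one-hot at receiver k.
# (In the PAIM this map is realised by the trained four-layer metasurface.)
F = np.linalg.pinv(codes)                               # 64 x 4

def receive(user_ids):
    field = sum(codes[k] for k in user_ids)             # fields add linearly
    return np.abs(field @ F) ** 2                       # energies at 4 receivers

print((receive([3]) > 0.5).astype(int))        # [0 0 0 1]: only receiver 4 is high
print((receive([0, 2]) > 0.5).astype(int))     # [1 0 1 0]: two codes decoded at once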
Reinforcement learning function of PAIM

Finally, we turned our PAIM into a dynamic multi-beam focusing lens that can focus the EM energy onto multiple points at arbitrary positions. Unlike the aforementioned cases, in which the training process is executed on a computer in advance, here we directly perform on-site training of the PAIM using a reinforcement learning method in real time, which completely overcomes the limitation of requiring prior knowledge in previous optical D2NN platforms 16, 17, 18, 19. Reinforcement learning based on integrated photonic platforms has been demonstrated using a reflected spatial light modulator to manipulate training weights, and has successfully predicted the chaotic Mackey–Glass sequence 42. Similarly, benefitting from the real-time programmable ability of the PAIM, we can train the parameters through continuous interaction with unknown and complicated EM environments. Figure 6a presents a schematic diagram of the reinforcement learning process, in which the bias voltages of artificial neurons are randomly changed and controlled by an FPGA. The same FPGA also receives feedback signals from the receivers and calculates the trend of the error function to determine whether the change in bias voltages is retained or eliminated. In this case, the m.s.e. is again used as the error function, and an extra term is added to restrain the redundant EM waves (see Reinforcement-learning process section). This could be regarded as a real-time optimization procedure, in which the objective is to minimize the distance between the desired pattern and the EM wave distribution generated by the PAIM. The output EM wave distributions at successive parameter-update times are presented in Fig. 6b,c. After automatic training, the PAIM can transform the output EM waves into target point(s) with more than 90% concentrated energy, and the focusing point(s) can shift with a frequency of 100 Hz to realize multi-beam scanning. In this application, by benefitting from the real-time updates of parameters in the PAIM, no extra training dataset is needed. The experimental results indicate that the PAIM could be applied in different kinds of EM environment. Fig. 6: Experimental results of dynamic multi-beam focusing by the on-site reinforcement learning process using the PAIM. a, The on-site reinforcement learning process of PAIM, in which the transmission coefficients of each PAIM layer are continually controlled by an FPGA according to real-time feedback signals. b,c, The evolution of the output energy distributions with update time in the reinforcement learning process. Here, the distributions of input fields in b and c are randomly generated but remain unchanged during the reinforcement learning process. We observe that the energies of the output fields are gradually focused on the target points as the updating procedure proceeds.

Conclusions

Our PAIM is an on-site programmable D2NN platform that operates via real-time control of EM waves and can perform computations based on the parallelism of EM wave propagation at the speed of light. It can deal with traditional deep learning tasks such as image recognition and feature detection, and can also provide an on-site way to manipulate spatial EM waves for use in multi-channel encoding and decoding in the CDMA scheme and dynamic multi-beam focusing.
Our PAIM could thus be of use in applications such as wireless communications, signal enhancement, medical imaging, remote control and the Internet of Things. The nonlinear version of the PAIM could also be developed further by introducing stable nonlinear amplifiers in the artificial neurons, potentially leading to a variety of new applications.

Methods

Forward propagation model

Figure 1b demonstrates the 2D structure of the PAIM model, and a more detailed version is provided in Supplementary Fig. 1. We use \(E^i,\ i = 0, 1, 2, \ldots, M\) to represent the complex electric field illuminating the i th PAIM layer, which is an N-dimensional vector (N is the number of meta-atoms in a layer), and each element indicates the field received by the corresponding meta-atom. \(E^{M+1}\) is the complex field on the output plane (output layer); thus, the length of \(E^{M+1}\) depends on the number of receiving antennas or the sampling numbers of the moving probe. \(T^i,\ i = 0, 1, 2, \ldots, M\) represents the complex transmission coefficients of the i th layer, which is also an N-dimensional vector, and each element in this vector corresponds to the complex transmission coefficient of the meta-atom in the i th layer. Then the forward propagation formula can be written as

$$E^{i+1} = W^{i}\left( E^{i} \odot T^{i} \right), \quad i = 0, 1, 2, \ldots, M$$ (1)

in which \(\odot\) is the Hadamard product and \(W^i\) represents the space attenuation coefficients from the i th layer to the (i + 1)th layer. In fact, \(W^i,\ i = 0, 1, 2, \ldots, M\) is an N × N-dimensional matrix, and its element in the p th row and q th column represents the space attenuation coefficient from the q th meta-atom in the i th layer to the p th meta-atom in the (i + 1)th layer. \(W^M\) connects the last PAIM layer and the output layer, and hence it is a K × N matrix, where K is the number of receiving antennas or the sampling number of a moving probe. A schematic diagram of EM-wave propagation between adjacent layers and the radiation pattern of one meta-atom are shown in Fig. 1d,e. The symbols \(W_n,\ n = 1, 2, 3, 4, \ldots\) in Fig. 1e are different from the matrix \(W^i\) in equation (1); in fact, \(W_n,\ n = 1, 2, 3, 4, \ldots\) constitute one of the columns of \(W^i\). Using the hierarchical equation (1), we obtain the final output field \(E^{M+1}\) once the input field \(E^0\) is given.

Error backpropagation

Equation (1) presents the forward propagation model of the PAIM, which is a linear model. We use \(a^i\) and \(\varPhi^i\) to represent the amplitude and phase parts of \(T^i,\ i = 0, 1, 2, \ldots, M\), respectively.
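Before deriving the gradients, it may help to see equation (1) in executable form. The following minimal numerical sketch is ours; the layer sizes and propagation matrices are random placeholders, whereas a real implementation would compute each \(W^i\) from the meta-atom radiation pattern and the layer geometry.

import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 4, 4     # meta-atoms per layer, index of last layer, receivers

# Complex transmission coefficients T^i (i = 0..M) and propagation matrices W^i.
T = [rng.standard_normal(N) + 1j * rng.standard_normal(N) for _ in range(M + 1)]
W = [rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)) for _ in range(M)]
W.append(rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))  # W^M

def forward(E0):
    # Equation (1): E^{i+1} = W^i (E^i ⊙ T^i), i = 0, 1, ..., M
    E = E0
    for Wi, Ti in zip(W, T):
        E = Wi @ (E * Ti)       # Hadamard product, then free-space propagation
    return E                    # E^{M+1}: complex fields at the K receivers

E0 = np.ones(N, dtype=complex)  # plane-wave illumination of the input layer
print(np.abs(forward(E0)) ** 2) # measured output energies (modulus readout)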
The gradients of \(a^{M-L}\) and \(\varPhi^{M-L},\ L = 0, 1, 2, \ldots, M\) can be obtained by matrix multiplication using the chain rule for the derivative:

$$\begin{array}{l}\dfrac{\partial E^{M+1}}{\partial \varPhi^{M-L}} = \dfrac{\partial E^{M+1}}{\partial T^{M-L}}\dfrac{\partial T^{M-L}}{\partial \varPhi^{M-L}} = A^{M-L}\,\mathrm{diag}\left( jT^{M-L} \right)\\ \dfrac{\partial E^{M+1}}{\partial a^{M-L}} = \dfrac{\partial E^{M+1}}{\partial T^{M-L}}\dfrac{\partial T^{M-L}}{\partial a^{M-L}} = A^{M-L}\,\mathrm{diag}\left( \exp\left( j\varPhi^{M-L} \right) \right)\\ A^{M-L} = W^{M}\,\mathrm{diag}\left( T^{M} \right) W^{M-1}\,\mathrm{diag}\left( T^{M-1} \right) W^{M-2}\,\mathrm{diag}\left( T^{M-2} \right) \cdots W^{M-L}\,\mathrm{diag}\left( E^{M-L} \right)\end{array}$$ (2)

The matrix partial derivatives in equation (2) follow the numerator layout convention, and diag denotes matrix diagonalization. For different tasks, we define different error functions, symbolized by \(\mathrm{Error}(E^{M+1})\). For the classification tasks we use the cross-entropy error function, and for other tasks we use the m.s.e. function. Applying the chain rule, the final gradient formulas can be written as

$$\frac{\partial\,\mathrm{Error}(E^{M+1})}{\partial a^{M-L}} = \frac{\partial\,\mathrm{Error}(E^{M+1})}{\partial E^{M+1}}\frac{\partial E^{M+1}}{\partial a^{M-L}}$$ (3)

$$\frac{\partial\,\mathrm{Error}(E^{M+1})}{\partial \varPhi^{M-L}} = \frac{\partial\,\mathrm{Error}(E^{M+1})}{\partial E^{M+1}}\frac{\partial E^{M+1}}{\partial \varPhi^{M-L}}$$ (4)

When all gradients are calculated, the method of gradient descent is used to optimize \(a^i\) or \(\varPhi^i\) independently or simultaneously. Various kinds of deep learning optimizer can be used, such as Adam 43.

Discrete optimization algorithm

In practice, the complex transmission coefficient of the meta-atom cannot be controlled continuously, and can only be set to several discrete values. Thus, it is necessary to develop a discrete optimization algorithm for the complex transmission coefficients of the meta-atoms. The aim of the discrete optimization algorithm is the same as that of the aforementioned gradient descent optimization: minimizing the error function. We use \(a_p^i,\ i = 1, 2, \ldots, M;\ p = 1, 2, \ldots, N\) to denote the complex transmission coefficient of the p th meta-atom in the i th layer, the coefficients of a layer being collected in \(T^i\). The procedure of the optimization algorithm is as follows (optimizing \(a_p^i\) as an example):

Step 1: Initialize all parameters of \(T^i,\ i = 0, 1, 2, \ldots, M\) by discretizing the uniform random distribution.

Step 2: Calculate the output electric fields \(E^{M+1}\) by using the forward propagation equation (1) and the current error \(\mathrm{Error0} = \mathrm{Error}(E^{M+1})\) of the self-defined error function.

Step 3: Initialize the update step value of \(a_p^i\) randomly, expressed by \(\Delta d_p^i\). Here, \(\Delta d_p^i\) is usually the difference of adjacent discrete values.
Because of the linearity of our network, if the value \(a_p^i\) increases by \(\Delta d_p^i\), the output field will increase by \(\Delta d_p^i \frac{\partial E^{M+1}}{\partial a_p^i}\), so the output field becomes \(E^{M+1} + \Delta d_p^i \frac{\partial E^{M+1}}{\partial a_p^i}\). Calculate the auxiliary error by

$$\mathrm{Error}_p^i = \mathrm{Error}\left( E^{M+1} + \Delta d_p^i \frac{\partial E^{M+1}}{\partial a_p^i} \right)$$ (5)

Step 4: Compare the values of \(\mathrm{Error0}\) and \(\mathrm{Error}_p^i\). If \(\mathrm{Error0} \le \mathrm{Error}_p^i\), the value of \(a_p^i\) remains unchanged; otherwise, the value of \(a_p^i\) is updated and replaced by \(a_p^i + \Delta d_p^i\).

For all \(a_p^i,\ i = 1, 2, \ldots, M;\ p = 1, 2, \ldots, N\), we execute steps 2 to 4 in a loop until the current error is less than the preset value, and then save the corresponding \(T^i,\ i = 0, 1, 2, \ldots, M\) as the final optimization outcome. Several methods could be used to accelerate the optimization process, such as organizing the computational process in matrix form or using parallel algorithms.

Reinforcement-learning process

To fully demonstrate the real-time re-trainable advantage of the PAIM, we specifically designed a reinforcement learning task to realize dynamic multi-beam focusing. The reinforcement learning process does not need pre-prepared training data, but updates the configuration parameters according to feedback obtained by interacting with the environment. It thus guarantees that the training result is adaptive to the actual application scenario. For the multi-beam focusing task, the error is calculated by

$$\mathrm{Error} = \sum_{k} y_k \left( s_k - \theta_k \right)^2 + aS^2$$ (6)

where k indexes the points on which we want to concentrate the EM energy, \(s_k\) and \(\theta_k\) indicate the measured energy and desired energy at the k th point, respectively, S represents the total leaking energy (the sum of energies radiated outside the target points) and \(y_k\) and a are constant scale factors. In one iteration of the training process, we randomly choose 20% of the trainable meta-atoms and change their bias voltages by small random values, one by one. Instantly after the bias voltage of a meta-atom is changed, the change in output energy distribution is measured by the receiving antennas and, at the same time, the change in error is calculated by equation (6), as shown in Fig. 1c. If the error decreases, which means that the bias-voltage change of the meta-atom makes the current output EM distribution closer to the desired one, the current bias voltage is retained; otherwise, the bias voltage is restored to its previous value. In our multi-beam focusing task, the PAIM could focus the EM energies onto the desired positions after ~500 iterations. One advantage of reinforcement learning is the result-oriented strategy, with which we do not need to worry about the accuracy of simulations or other factors that could make the designed parameters deviate from the measurement results. Hence, it enables the PAIM to deal with very complicated and unknown environments, broadening its application range.
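The greedy, feedback-driven update shared by the discrete optimization and the reinforcement-learning procedure can be sketched as follows. This is a simplified software toy of our own, not the authors' implementation: measure_error stands in for the hardware feedback of equation (6), and the discrete gain levels are assumed values.

import numpy as np

rng = np.random.default_rng(2)
levels = np.linspace(-1.0, 1.0, 8)        # assumed discrete gain settings
state = rng.choice(levels, size=64)       # bias-voltage state of one 8 x 8 layer

def measure_error(state):
    # Placeholder for equation (6); in hardware this comes from the receivers.
    target = np.zeros(64); target[10] = 1.0   # toy goal: focus energy on one point
    return float(np.sum((state - target) ** 2))

error = measure_error(state)
for _ in range(500):                      # ~500 iterations sufficed in the paper
    picks = rng.choice(64, size=13, replace=False)   # ~20% of the meta-atoms
    for p in picks:
        old = state[p]
        state[p] = rng.choice(levels)     # small random change, one by one
        new_error = measure_error(state)
        if new_error < error:
            error = new_error             # keep the change
        else:
            state[p] = old                # otherwise restore the previous value

print(round(error, 3))                    # error decreases monotonically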
Data availability

The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.

Code availability

All the mathematical algorithms we used are provided in the Methods and Supplementary Information. The codes that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. | In recent decades, machine learning and deep learning algorithms have become increasingly advanced, so much so that they are now being introduced in a variety of real-world settings. In recent years, some computer scientists and electronics engineers have been exploring the development of an alternative type of artificial intelligence (AI) tool, known as diffractive optical neural networks. Diffractive optical neural networks are deep neural networks based on diffractive optical technology (i.e., lenses or other components that can alter the phase of light propagating through them). While these networks have been found to achieve ultra-fast computing speeds and high energy efficiencies, typically they are very difficult to program and adapt to different use cases. Researchers at Southeast University, Peking University and Pazhou Laboratory in China have recently developed a diffractive deep neural network that can be easily programmed to complete different tasks. Their network, introduced in a paper published in Nature Electronics, is based on a flexible, multi-layer metasurface array. "Our hope was to realize a diffractive neural network in which each unit can be independently and flexibly programmed," Tie Jun Cui, one of the researchers who carried out the study, told TechXplore. "Drawing inspiration from our past research on digital programmable metasurfaces, information metasurfaces and experience in electromagnetic regulation, we created a programmable diffractive neural network machine by constructing multilayer programmable transmissive metasurfaces, which we named programmable artificial intelligence machine (PAIM)." The researchers' diffractive neural network performs calculations that closely resemble those performed by optical diffractive neural networks. More specifically, their network performs matrix operations as electromagnetic waves propagate through its multi-layer metasurfaces, which resembles the light-speed calculations performed by optical diffractive networks. "The key breakthrough of our work is that each neuron is independently reprogrammable," Cui explained. "Therefore, the entire neural network can be trained and programmed on-site and can also be repeatedly trained according to different task requirements. We design a metasurface unit (neuron) with reprogrammable transmission coefficients that integrates a power amplifier chip." Space–time telecommunication system with and without the decoding part of PAIM. a, The space–time telecommunication system with the decoding part of the PAIM. The first layer of the PAIM acts as a transmitter, radiating space–time EM waves at 5.5 GHz. The remaining four layers of the PAIM are trained as a denoising and decoding processor located in the transmission channel. The EM environment is fouled by a commercial 5G Wi-Fi router. However, with the help of the PAIM decoder, the transmission error rate declines to 0.52%, with no extra time delay for signal preprocessing.
b, The energy distribution on the receiving plane when code 3 and code 4 are transmitted into free space without the decoding part of the PAIM. c, The energy distribution on the receiving plane when all four codes are transmitted into free space without the decoding part of the PAIM. d, The space–time telecommunication system without the decoding part of the PAIM. The first layer of the PAIM acts as a transmitter, radiating space–time EM waves (according to four user codes) at 5.5 GHz, which reach the receiver through free-space propagation. The EM environment is fouled by a commercial 5G Wi-Fi router, causing a high error rate of transmission (49.02%) when transmitting a binary image of our laboratory's emblem. Credit: Liu et al. Every amplifier in the researchers' network can be digitally controlled through a field-programmable gate array (FPGA). FPGAs are systems that contain an array of different programmable logic blocks or units. In the system created by Cui and his colleagues, every individual unit can be independently controlled, which allows engineers to program the entire neural network so that it performs well in specific tasks. "The biggest highlight of our study was the realization of a programmable diffractive neural network in a convenient and efficient way," Cui said. "In the past, optical diffraction deep neural networks were mainly composed of optical media such as silicon dioxide, which were non-adjustable materials. Therefore, this type of optical neural network needs to be trained with the help of a computer, and the final network distribution is obtained before processing." Most previously developed diffractive optical neural networks remain fixed once they are trained. As a result, they can only complete a set number of tasks: those they were originally trained on. In contrast, as it is based on digital metasurfaces, the network created by Cui and his colleagues can be programmed to complete different tasks. "Programmable metasurfaces can control electromagnetic waves with a simple architecture, low cost and high efficiency, which is a potential choice for building programmable neural networks," Cui added. "Our PAIM can also directly modulate free-space electromagnetic waves at light speed, which makes it a potential low-latency signal processing unit for 5G and 6G wireless communications." In initial evaluations, the diffractive neural network introduced by this team of researchers achieved very promising results, as it was found to be highly flexible and applicable across a wide range of scenarios. In the future, it could thus be used to solve a variety of real-world problems, including image classification, wave sensing and wireless communication coding/decoding. Meanwhile, Cui and his colleagues will work on improving its performance further. "The prototype implemented in this work is based on a 5-layer diffractive neural network, each layer has 64 programmable neurons, and the total number of nodes in the network is relatively low," Cui added. "At the same time, the operating frequency band of this network is lower, resulting in a larger size of the physical network. In our next studies, we plan to further increase the scale of the programmable neurons of the network, improve the network integration, reduce the size and form a set of intelligent computers with stronger computing power and more practicality for sensing and communications." | 10.1038/s41928-022-00719-9
Medicine | Timing of brain cell death uncovers a new target for Alzheimer's treatment | Hikari Tanaka et al, YAP-dependent necrosis occurs in early stages of Alzheimer's disease and regulates mouse model pathology, Nature Communications (2020). DOI: 10.1038/s41467-020-14353-6 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-14353-6 | https://medicalxpress.com/news/2020-02-brain-cell-death-uncovers-alzheimer.html | Abstract The timing and characteristics of neuronal death in Alzheimer’s disease (AD) remain largely unknown. Here we examine AD mouse models with an original marker, myristoylated alanine-rich C-kinase substrate phosphorylated at serine 46 (pSer46-MARCKS), and reveal an increase of neuronal necrosis during the pre-symptomatic phase and a subsequent decrease during the symptomatic phase. Postmortem brains of mild cognitive impairment (MCI) rather than symptomatic AD patients reveal a remarkable increase of necrosis. In vivo imaging reveals instability of the endoplasmic reticulum (ER) in mouse AD models and genome-edited human AD iPS cell-derived neurons. The level of nuclear Yes-associated protein (YAP) is remarkably decreased in such neurons under AD pathology due to its sequestration into cytoplasmic amyloid beta (Aβ) aggregates, supporting the feature of YAP-dependent necrosis. Suppression of early-stage neuronal death by AAV-YAPdeltaC reduces the later-stage extracellular Aβ burden and cognitive impairment, suggesting that preclinical/prodromal YAP-dependent neuronal necrosis represents a target for AD therapeutics. Introduction The ability to diagnose AD at an early stage is eagerly anticipated, especially after clinical trials of anti-Aβ antibodies 1 , 2 and γ-/β-secretase inhibitors 3 , 4 in post-onset patients proved disappointing. A deeper understanding of MCI could play a pivotal role in the development of new therapeutic strategies for AD. Despite the importance of MCI, however, its pathological and molecular evaluation remains insufficient, especially regarding the chronological changes of neuronal function and cell death. Accordingly, no efficient single biomarker directly reflecting disease activity in MCI has yet been reported. Cutting-edge techniques, including comprehensive analyses, have identified molecules in addition to Aβ and tau that could be targeted for therapeutic intervention at the early stage of AD. For instance, comparison of neuroimaging and transcriptome data revealed that a genetic profile of lipid metabolism centered on APOE affects the propagation patterns of both Aβ and tau in the brain 5 . In another study, a meta-analysis of functional genomic data from AD showed that YAP, a co-transcriptional factor that regulates cell death and survival by binding to the different transcription factors p73 and TEA domain family member 1 (TEAD) 6 , 7 , 8 , 9 , is positioned at the center of the molecular network of AD 10 . Elevated activity of TEAD mediated by YAP has been implicated in cell proliferation, differentiation, and survival 11 , 12 , 13 , whereas elevated p73 14 , 15 , 16 activity and reduced TEAD 17 , 18 , 19 activity promote apoptosis and necrosis, respectively. Previously, we performed a comprehensive phosphoproteome analysis of four strains of AD model mice and human postmortem AD brains, and discovered three proteins whose phosphorylation state is altered at a very early stage, before extracellular amyloid aggregation 20 .
One such protein is MARCKS, which anchors the actin cytoskeleton to the plasma membrane and plays a critical role in stabilizing the post-synaptic structure of dendritic spines 21 . Phosphorylation of MARCKS at Ser46 decreases its affinity for actin and destabilizes dendritic spines 22 . High mobility group box-1 (HMGB1) contributes to MARCKS phosphorylation via Toll-like receptor 4 (TLR4), since blockade of HMGB1–TLR4 binding with monoclonal anti-HMGB1 antibodies suppresses the phosphorylation of MARCKS at Ser46, stabilizes dendritic spines, and rescues cognitive impairment in AD model mice 22 . Given that HMGB1 is released from necrotic cells 23 , 24 , it remains unclear how MARCKS phosphorylation, which occurs at the early stage of AD pathology, is connected to neuronal cell death, which is believed to occur at a relatively late stage. In this study, we found that HMGB1 levels were remarkably elevated in the CSF of MCI patients but much less so in AD patients. Consistent with this, active neuronal necrosis revealed by pSer46-MARCKS increased to the greatest extent during the preclinical stages of AD mouse models and in human MCI patients. In addition, we showed that the observed necrosis was caused by a deficiency of YAP, resulting in suppression of the transcriptional activity of TEAD, the final effector molecule of the Hippo pathway 11 , 12 , 13 , in mouse AD models, human AD iPS neuron models and human postmortem MCI brains. These findings reveal that cell death occurs at an early stage of AD and could serve as a therapeutic target to prevent disease progression. Results HMGB1 is elevated in CSF of human MCI patients CSF samples were collected by lumbar puncture from 34 normal controls, 14 disease controls, 26 MCI patients, and 73 AD patients (Supplementary Tables 1 , 2 ). MCI and AD were diagnosed by ICD-10, and the patients were categorized as having amnestic MCI. There was no significant difference in age between the different patient groups, but the proportion of female patients was slightly higher in the AD group than in the other groups (Supplementary Table 1 ). ApoE subtyping was performed in 19 MCI and 18 AD patients (Supplementary Table 1 ). In the disease control patients, CSF samples were taken because neurological diseases were suspected; therefore, there was some bias in the types of disease present in this patient group (Supplementary Table 2 ). To verify the accuracy of MCI/AD diagnosis, we compared the levels of Aβ42, pTau, and pTau/Aβ42 between the normal control group and the MCI or AD group. In support of the clinical diagnoses, Aβ42 levels were reduced, and pTau/Aβ42 levels were elevated, in the MCI and AD groups (Supplementary Fig. 1 ). The APP/Aβ ratio was increased in the AD group in comparison to the other groups (Supplementary Fig. 1 ), as reported previously 25 . Expecting elevation of HMGB1 in symptomatic AD, we evaluated HMGB1 concentrations in CSF by ELISA. However, the CSF-HMGB1 level was significantly elevated in the clinically diagnosed MCI group, but not the AD group, relative to the normal or disease controls (Fig. 1a ). The CSF-HMGB1 level was also significantly higher in the MCI group than in the AD group (Fig. 1a ). In receiver operating characteristic (ROC) analysis of the comparisons between MCI and the normal or disease controls, the area under the curve (AUC) was 0.861 or 0.931, respectively (Fig. 1b ). In addition, the AUC was 0.809 in the comparison between MCI and AD, suggesting that the CSF-HMGB1 value may assist clinical diagnosis of the two phenotypic states (Fig. 1b ).
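An aside for readers who want to reproduce this kind of biomarker evaluation: ROC curves and AUC values such as those in Fig. 1b are computed directly from the raw concentrations by sweeping a decision threshold. Below is a minimal, self-contained sketch using scikit-learn on invented CSF-HMGB1 values; the numbers are hypothetical and are not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical CSF-HMGB1 concentrations (pg/mL); not the study's data.
controls = rng.normal(300, 80, size=30)   # normal/disease controls
mci = rng.normal(700, 150, size=25)       # MCI group, shifted upward

values = np.concatenate([controls, mci])
labels = np.concatenate([np.zeros_like(controls), np.ones_like(mci)])

# ROC: sweep a threshold over the biomarker and record sensitivity (TPR)
# against 1 - specificity (FPR) at each threshold.
fpr, tpr, thresholds = roc_curve(labels, values)
auc = roc_auc_score(labels, values)
print(f"AUC = {auc:.3f}")  # 1.0 = perfect separation, 0.5 = chance

# Youden's J selects the cutoff maximizing (sensitivity + specificity - 1).
j = tpr - fpr
best = thresholds[np.argmax(j)]
print(f"Optimal cutoff by Youden's J: {best:.0f} pg/mL")
```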
Interestingly, we observed no significant correlations between CSF-HMGB1 and Mini-Mental State Examination (MMSE) score in the MCI, AD, or MCI + AD group (Supplementary Fig. 2 ). Fig. 1: HMGB1 levels are elevated in the CSF of MCI and AD patients. a CSF-HMGB1 levels in the normal control (nc) ( N = 19 persons), disease control (dc) ( N = 11 persons), MCI ( N = 21 persons), and AD ( N = 56 persons) groups were evaluated by high-sensitivity ELISA. The box plot shows the median and quartiles. Statistical differences among groups were evaluated using the Wilcoxon rank–sum test with post-hoc Bonferroni correction. b Receiver operating characteristic (ROC) curves for the MCI or AD group versus the normal control (nc) and disease control (dc) groups. Area under the ROC curve (AUC) values are shown in the graphs. Source data are provided as a “Source Data file”. Full size image In the MCI group, we observed a positive relationship between levels of CSF-HMGB1 and levels of Aβ42, Aβ40, and tau (Supplementary Fig. 3 ). Levels of pTau were not related to levels of CSF-HMGB1 in MCI patients (Supplementary Fig. 3 ). Moreover, we detected no relationship between CSF-HMGB1 and Aβ42, Aβ40, tau, or pTau in the AD group (Supplementary Fig. 3 ). The number of patients in whom both Aβ42 and Aβ40 could be analyzed was small, so this result is not informative (Supplementary Fig. 3 ). We observed no relationship between ApoE4 allele copy number and CSF-HMGB1 in the MCI group. However, ApoE4 was negatively correlated with CSF-HMGB1 in the AD group (Supplementary Fig. 3 ). This finding may be of interest, assuming that the summative pathology linked to CSF-HMGB1 and ApoE4 allele copy number reflects cognitive impairment. Necrosis occurs most actively in the MCI stage HMGB1 is a representative damage-associated molecular pattern (DAMP) molecule released from necrotic cells 23 , 24 . Our findings in human CSF suggested that neuronal necrosis might occur more frequently in preclinical MCI than in symptomatic AD. Evaluation of cell death in vivo has been technically difficult because the intensities of cell death markers diminish rapidly after cell death or the markers are cleared by phagocytes in the brain. To overcome this difficulty, we employed an anti-pSer46-MARCKS antibody whose reactivity had been characterized by western blot 22 . Its specificity for pSer46-MARCKS was further confirmed by ELISA using phospho- and non-phospho peptides matching the 14-amino-acid sequence of MARCKS around Ser46 (Supplementary Fig. 4a ). The anti-pSer46-MARCKS antibody was purified sequentially on affinity columns of the non-phosphorylated antigen peptide and the phosphorylated antigen peptide (Supplementary Fig. 4b ). We then compared the reactivity of the anti-pSer46-MARCKS antibody and an anti-non-phosphorylated MARCKS antibody in immunohistochemistry of cerebral cortex from 5xFAD mice at 6 months (Supplementary Fig. 4c ). The staining patterns were clearly different, and the anti-pSer46-MARCKS antibody, but not the anti-non-phosphorylated MARCKS antibody, stained structures around extracellular Aβ aggregates (Supplementary Fig. 4c ). We also performed western blots to examine the chronological change of pSer46-MARCKS in the cerebral cortex of 5xFAD mice from 1 to 12 months (Supplementary Fig. 4d ). Interestingly, pSer46-MARCKS formed a high-molecular-weight (HMW) smear, suggesting that the character of MARCKS as an intrinsically disordered/denatured protein (IDP) 22 , 26 (Supplementary Fig. 4e ) was enhanced by phosphorylation at Ser46.
The HMW smear and the 80 kDa and 50 kDa bands were all increased during pathological progression (Supplementary Fig. 4d , right graph). In addition, pSer46-MARCKS also increased during normal aging of non-transgenic sibling mice (Supplementary Fig. 4d , right graph). Consequently, the ratio of pSer46-MARCKS between 5xFAD mice and non-transgenic sibling mice declined after 3 months, consistent with the similar ratio in our previous mass spectrometry analysis 22 . pSer46-MARCKS reactivity increased in neurons surrounding dying cells, enabling us to detect active neuronal necrosis at the moment of dissection 22 (Fig. 2a ). Neurons under such active necrosis were marked by deformed and/or shrinking nuclei, sometimes with faint DAPI staining, surrounded by degenerative neurites reactive for pSer46-MARCKS (Fig. 2a ). Consistent with this, immunoelectron microscopy (IEM) confirmed that degenerative neurites reactive to the pSer46-MARCKS antibody and full of autophagosomes surrounded intracellular Aβ plaques (Supplementary Fig. 5a, b ). Beyond the borders of degenerative neurites (Supplementary Fig. 5a , yellow dotted lines), amyloid plaques included cytoplasmic organelles (Supplementary Fig. 5a , white arrows). Immunohistochemistry also revealed that similar degenerative neurites surrounded non-apoptotic dying neurons (no chromatin condensation) with a deforming and shrinking nucleus (Supplementary Fig. 5c ). These findings indicated that neurons died by necrosis at the center of degenerative neurite clusters, that Aβ persisting after cell death served as a seed for further extracellular amyloid aggregation, and that such necrotic neurons released DAMPs such as HMGB1, tau, and Aβ. Moreover, by using primary mouse cortical neurons, we confirmed that α-amanitin-induced necrosis 17 , but not glutamate-induced apoptosis 27 , 28 , 29 , induced a reactive increase of pSer46-MARCKS in neighboring surviving neurons (Supplementary Fig. 6a ). Western blots also supported the induction of pSer46-MARCKS in neurons by α-amanitin-induced necrosis but not glutamate-induced apoptosis (Supplementary Fig. 6b ). These findings further supported that reactive pSer46-MARCKS in neighboring cells could be used as a marker specifically indicating necrotic change of the central neuron that they surround. Fig. 2: Necrosis occurs most actively at the preclinical stage in mouse and human AD brains. a Morphological definition of active necrosis, secondary necrosis and ghost of cell death. Active necrosis is specified by a single degraded nucleus detected by DAPI surrounded by pSer46-MARCKS-positive degenerative neurites. Secondary necrosis is a cluster of multiple dying cells with residual Aβ in the extracellular space and surrounding pSer46-MARCKS stains. Ghost of cell death is an extension of secondary necrosis in which DAPI and pSer46-MARCKS stains have faded out. b Upper graphs show the time course of active necrosis, secondary necrosis and ghost of cell death in 5xFAD mice, in the retrosplenial dysgranular cortex. Representative images from each time point are shown below the graph. Yellow arrow indicates a single degraded nucleus surrounded by reactive pSer46-MARCKS stains. N = 3 mice, n = 30 fields. c Time course of active necrosis in human mutant APP knock-in mice. Time course of necrosis ( N = 3 mice, n = 30 fields) and representative images are shown. d Representative images of active necrosis in human MCI (MCI) and non-neurological disease control (NC).
A rupturing or deformed nucleus undergoing necrosis is surrounded by Aβ and pSer46-MARCKS-positive degenerative neurites (white arrow). e The box plot shows the number of active necrosis events per visual field as the median, quartiles and whiskers representing 1.5× the interquartile range. ** p < 0.01, Tukey’s HSD test ( n = 10 images/person). f Simulation of active necrosis. A formula was generated by assuming that cell death occurs at a constant rate in the residual neurons and at regular time intervals (top). Modulation of each parameter changed the simulation curves (graphs). g The numerical simulation program generated the optimized curve (red line) based on the observed values of active necrosis in the occipital cortex of 5xFAD mice (black line) and predicted the parameter values and active necrosis at an unmeasured time point (2 months). h The number of active necrosis events subsequently observed in samples at 2 months (60 days) closely matched the predicted number. Values in each group are summarized as mean ± S.E.M. Source data are provided as a “Source Data file”. Full size image In this work, we strictly defined “active necrosis” as a single dying cell surrounded by reactive pSer46-MARCKS signals (Fig. 2a ). Since necrotic cells or apoptotic cells not removed by phagocytes are known to trigger secondary necrosis 30 , 31 , 32 , we defined “secondary necrosis” as a cluster of multiple cells with reactive pSer46-MARCKS signals (Fig. 2a ). Most extracellular Aβ aggregates in 5xFAD mice were associated with pSer46-MARCKS and DAPI signals. However, in aged mice, a small fraction of Aβ aggregates showed disappearance or weakening of pSer46-MARCKS and DAPI signals, which we named “ghost of cell death”. We found that the proportion of active necrosis increased during the preclinical stage of 5xFAD mice 33 , from 1 to 6 months, and then decreased from 12 to 18 months, after the onset of cognitive impairment (Fig. 2b ). A similar relationship between clinical stage and active necrosis was confirmed in human mutant APP knock-in mice (APP-KI mice) 34 (Fig. 2c ). The finding that cell death precedes the appearance of extracellular Aβ aggregates at 3 months in 5xFAD mice 33 and at 4 months in APP-KI mice 34 was unexpected. However, the early-stage appearance of active necrosis in mouse models (Fig. 2b, c ) explains the elevation in the CSF-HMGB1 level in human MCI (Fig. 1 ) and presumably agrees with the results of previous clinical trials. Immunostaining of pSer46-MARCKS in postmortem human brains of MCI and non-neurological disease control patients confirmed that cortical neurons underwent morphologically similar necrosis in MCI patient brains (Fig. 2d ). Active necrosis revealed by pSer46-MARCKS was present at significantly higher frequencies in all MCI patients than in AD patients (Fig. 2d, e ), and neurons surrounded by reactive pSer46-MARCKS, even though they did not match the strict criteria of active necrosis because the nuclear DAPI stain remained intact, were increased in MCI (Supplementary Fig. 7 ). The total number of neurons was itself decreased in AD in comparison to that in MCI (Supplementary Fig. 7 ), and neurons surrounded by reactive pSer46-MARCKS were remarkably decreased in symptomatic AD. Mathematical simulation of active necrosis The chronological change of active necrosis motivated us to model its time course mathematically (Fig. 2f ).
If the total number of neurons at the initial time point is N , and cell death (active necrosis) occurs at a constant rate r , the residual number of neurons at the start of cycle k ( N_k ) is calculated as follows: $$N_k = N(1-r)^{k-1}$$ (1) Here, k is the number of cell death cycles, obtained from the period necessary for a single cycle of cell death and the time elapsed from the initial time point, when cell death starts, to the current time point. Active cell death during cycle k is then calculated as follows: $${\mathrm{Active\;cell\;death}} = N_k - N_{k+1} = N\,r\,(1-r)^{k-1}$$ (2) The simulation curve changed when the parameters N , r , the cell death period and the initial detection time point ( p days after the initiation of cell death) were modulated (Fig. 2f ). As the graph shows, the chronological change of the actually observed active necrosis was simulated precisely (Fig. 2g ). The consistency between theoretical and experimental data was striking. The parameters deduced from the observed numbers of active necrosis suggested that the cell death period is 31 days and the cell death ratio is 0.141 (14.1% of the residual neurons die in each 31-day cycle). The deduced initial number of neurons (30.3 cells) closely matched the neuronal number actually observed (30.6 cells) (Fig. 2g ). In addition, the mathematical simulation predicted that the active necrosis process initiates from 1 month, when intracellular Aβ begins to be detected by immunohistochemistry 22 , and that it should reach 3.706 cells per area (143 μm × 143 μm) at 2 months (Fig. 2g ). Therefore, we re-examined the brains of 5xFAD mice at 2 months and found that the actual frequency of active necrosis (3.766 cells/area) closely matched the predicted value (Fig. 2h ). These consistencies between mathematical induction and deduction further supported our theory of the dynamics of active necrosis. ER enlargement is a morphological feature of necrosis in MCI To characterize necrosis in vivo, we employed two-photon microscopy 19 and analyzed dynamic changes of the ER in cortical neurons of 5xFAD mice from 1 (pre-symptomatic/preclinical stage) to 6 months (symptomatic/clinical stage) (Fig. 3a, b ). The ER and Aβ were visualized using ER-Tracker™ and BTA1, respectively. At 1 month, ER volume was larger and less stable in 5xFAD than in non-transgenic sibling mice (B6/SJL) (Fig. 3a, b ), and this tendency persisted at later time points (Fig. 3b , Supplementary Fig. 8 ). Moreover, these mice had a higher standard deviation or quartile deviation of ER volume, indicating that the ER was unstable in 5xFAD mice from 1 to 6 months (Fig. 3c ). After two-photon microscopy, the mouse brains were investigated by electron microscopy. ER enlargement was confirmed at high frequencies in neurons of 5xFAD mice but rarely in non-transgenic sibling mice (B6/SJL) (Fig. 4a ). Fig. 3: Extreme instability of ER in AD model mice revealed by in vivo ER imaging. a In vivo ER and Aβ images were acquired by two-photon microscopy from 1-month-old 5xFAD mice into which ER-tracker and BTA1 had been injected in one shot 4 h before observation. ER and Aβ image sets were taken in tandem every 20 min. 3D images of ER and Aβ stains were merged by IMARIS (Bitplane, Zurich, Switzerland). Dotted line indicates a single neuron. b Total volumes of ER puncta belonging to a single cell were quantified by IMARIS (Bitplane, Zurich, Switzerland), and time courses are shown in the graph. Changes were more pronounced in 5xFAD mice than in non-transgenic sibling mice (Non-Tg sibling).
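Returning briefly to the cell-death model formalized in equations (1) and (2) above: the following minimal Python sketch evaluates the model with the parameter values reported in the text (about 30.3 neurons per field initially, a 14.1% per-cycle death ratio, a 31-day cycle, and onset at roughly 1 month). The exact alignment of calendar days to cycle boundaries is our assumption, so the printed numbers only approximate the fitted curve in Fig. 2g.

```python
def deaths_in_cycle(N0, r, k):
    """Equation (2): neurons lost during cycle k,
    N_k - N_{k+1} = N0 * r * (1 - r)**(k - 1),
    where N_k = N0 * (1 - r)**(k - 1) is equation (1)."""
    return N0 * r * (1 - r) ** (k - 1)

# Parameter values reported in the text (fit to 5xFAD occipital cortex).
N0, r, period_days, onset_day = 30.3, 0.141, 31, 30

for k in range(1, 7):
    day = onset_day + (k - 1) * period_days
    survivors = N0 * (1 - r) ** (k - 1)   # equation (1): survivors entering cycle k
    print(f"cycle {k} (~day {day:3d}): {survivors:5.2f} surviving, "
          f"{deaths_in_cycle(N0, r, k):.3f} dying per field")
```

Under this alignment, cycle 2 begins near day 61 and yields roughly 3.67 dying cells per field, in line with the ~3.7 cells/area the authors report around 2 months; the monotonic per-cycle decline also reproduces the early peak and subsequent decrease of active necrosis described above.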
N = 3 mice, n = 9 cells. c To verify the finding in b , the standard deviation (SD) and quartile deviation of ER volumes from a single cell at multiple time points were compared between groups of 5xFAD and non-transgenic sibling mice. Box plots show the median, quartiles and whiskers that represent 1.5× the interquartile range. P -values were determined by Welch’s test, ** p < 0.01 ( N = 3 mice, n = 9 cells). Source data are provided as a “Source Data file”. Full size image Fig. 4: ER enlargement of neurons in 5xFAD mice and human MCI/AD patients. a Electron microscopy of neurons in control (B6/SJL) and 5xFAD mice. Marked ER enlargement was observed in 5xFAD mice at 5 months of age, before onset. b Electron microscopy of neurons in non-neurological disease control, MCI (Braak stage III by Gallyas-Braak staining) and AD (Braak stage V) patients. Higher magnification in two subcellular fields reveals ribosomes attached to vacuoles (arrow). An amyloid plaque in an AD patient is shown (asterisk). Quantitative analysis of the three groups is shown in the graph. The bar graph indicates the mean ± S.E.M., together with the corresponding data points. P-values were determined by Tukey’s HSD test, ** p < 0.01 ( N = 3 persons, n = 100 cells). c A high magnification image of ballooned ER. Ribosomes on the ER membrane help identify the origin of the medium-size ERs (black ER), whereas ribosomes were detached when the ER lumen was further enlarged and only a few ribosomes remained (white ER). Mt: mitochondria. Nuc: nucleus. d Calnexin and MAP2 or KDEL and MAP2 co-staining of human neurons of non-neurological disease (control), MCI and AD patients. Remarkable enlargement of ER was detected (arrow) in MCI more frequently than in AD. e Calnexin and MAP2 or KDEL and MAP2 co-staining of mouse neurons of 5xFAD mice at 3 and 6 months of age. Remarkable enlargement of ER was detected (arrow) at 3 months more frequently than at 6 months. Source data are provided as a “Source Data file”. Full size image We extended the electron microscopic analysis to human brains of non-neurological disease, MCI and AD patients (Fig. 4b ). Remarkable enlargement of ER equivalent to the finding in 5xFAD mice was detected in MCI at a higher frequency than in AD patients (Fig. 4b ). Instead, the frequency of extracellular aggregates was increased in AD (Fig. 4b , asterisk). Higher magnification revealed ribosomes on the ER membrane, confirming the origin of the ballooned organelles (Fig. 4c ). A few ribosomes remained on the surface of extremely enlarged vacuoles (Fig. 4b , arrows in #1 and #2 of MCI), indicating that they originated from rough ER. Consistently, immunohistochemistry with anti-MAP2 and anti-calnexin (ER membrane marker) antibodies or with anti-MAP2 and anti-KDEL (ER content marker) antibodies revealed ER enlargement in cortical neurons of MCI patients (Fig. 4d ) and of pre-symptomatic 5xFAD mice (Fig. 4e ). Intracellular Aβ deprives YAP from the nucleus The ER enlargement and instability we observed in 5xFAD mice were reminiscent of transcriptional repression-induced atypical cell death (TRIAD), the Hippo pathway-dependent necrosis 17 , 18 , 19 . Hence, we investigated key molecules of the Hippo pathway in human postmortem brains of MCI (amnestic MCI with AD pathology) and symptomatic AD patients. First, we discovered that intracellular Aβ aggregates deprived YAP from the nucleus, ultimately causing a decrease in nuclear YAP levels in the cortical neurons of AD and MCI patients (Fig. 5a ).
This remarkable finding was observed in three MCI (amnestic MCI with AD pathology) and three symptomatic AD patients (Fig. 5b , upper graph). In MCI, DAPI signal intensities were decreased in neurons where intracellular Aβ aggregates deprived YAP from the nucleus (Fig. 5b , middle graph). Comparison among control, MCI and AD also confirmed the decrease of DAPI signal intensities in cortical neurons with cytoplasmic YAP/Aβ-colocalization (Fig. 5b , lower graph). Immunoprecipitation of cerebral cortex tissues (temporal lobe) from human AD patients who had been pathologically diagnosed as pure AD (Fig. 5c ) also supported the interaction between YAP and Aβ. Consistent with the reduced nuclear YAP in immunohistochemistry, western blot revealed a similar decrease of YAP in temporal and occipital tip tissues from AD patients (Fig. 5d ). YAP and Aβ levels were inversely correlated in cortical tissues from the occipital tip and temporal tip of AD patients and non-neurological disease controls (Fig. 5e ). Fig. 5: Aβ sequesters YAP from the nucleus to intracellular aggregates. a Immunohistochemistry of YAP and Aβ of human postmortem brains revealed sequestration of YAP into cytoplasmic Aβ aggregates and the resultant decrease of nuclear YAP in MCI and AD patients. b Quantification of signal intensities of nuclear YAP staining in neurons confirmed similar findings in three MCI patients and three symptomatic AD patients. A total of 160 neurons ( N = 8 persons) in the occipital lobe were selected at random from each patient, and nuclear YAP signal intensities were quantified by confocal microscopy (FV1200IXGP44, Olympus, Tokyo, Japan). Signal intensity of DAPI per nucleus was quantified in cortical neurons of the occipital lobe of non-neurological disease controls and MCI or AD patients. The DAPI signals were compared in MCI between cytoplasmic YAP-positive and Aβ-positive neurons ( n = 60) and cytoplasmic YAP-negative and Aβ-negative neurons ( n = 60). In addition, DAPI signal intensity per nucleus was compared between cytoplasmic YAP-positive and Aβ-positive neurons ( n = 180) of MCI or AD patients ( N = 3) and normal neurons ( n = 180) of non-neurological disease controls ( N = 3). Box plots show the median, quartiles and whiskers that represent 1.5× the interquartile range. The bar graph indicates the mean ± S.E.M., together with the corresponding data points. P -values were determined by Tukey’s HSD test or Student’s t -test, ** p < 0.01. c Immunoprecipitation reveals the interaction between YAP and Aβ in human postmortem brain. Upper panels: anti-Aβ antibody (82E1) co-precipitated YAP from cerebral cortex tissues of AD patients, but not from non-neurological disease controls. Lower panels: reverse co-precipitation of Aβ protofibrils with YAP. d Western and dot blots of Aβ and YAP. Temporal tip and occipital tip tissues from pathologically diagnosed AD patients or controls were immunoblotted with anti-Aβ antibody (82E1, dot blot) or anti-YAP antibody (sc-15407). e Inverse correlation between Aβ burden and YAP in human patient/control brains. P-values were determined by Pearson’s correlation coefficient (AD: N = 8 persons, Control: N = 6 persons). Source data are provided as a “Source Data file”. Full size image In addition, LATS1 kinase, which prevents nuclear translocation of YAP 35 , was activated in human cortical neurons of MCI and, to a lesser extent, in those of AD (Supplementary Fig. 9a, b ).
On the other hand, PLK1, which switches necrosis to apoptosis 19 , was not activated in either MCI or symptomatic AD (Supplementary Fig. 10a, b ). The decrease of nuclear YAP due to cytoplasmic co-segregation with Aβ was confirmed in cortical neurons of 5xFAD mice (Supplementary Fig. 11a, b ) and human mutant APP-KI mice (Supplementary Fig. 11c, d ) at 3 months, prior to the onset of cognitive impairment. In these neurons, DAPI signal intensities were decreased (Supplementary Fig. 11b, d ). LATS1 activation in cortical neurons was also confirmed by immunohistochemistry in both mouse strains (Supplementary Fig. 9c, d ). These findings support that Hippo pathway–dependent necrosis (TRIAD) 17 , 18 , 19 occurs from the pre-symptomatic to post-symptomatic stages in both human and mouse AD pathology. Moreover, the essential transducers of necroptosis, RIP1/3, and their downstream pathways were not activated in the pathway analysis based on comprehensive phosphoproteome data (Supplementary Fig. 12a ), in western blots (Supplementary Fig. 12b ) or in immunohistochemistry (Supplementary Fig. 12c ) of cerebral cortex tissues of 5xFAD mice from 1 to 3 months, when YAP-dependent necrosis occurred at high frequencies. These results further support that the necrosis at the early stage of AD pathology is distinct from necroptosis, which had been implicated in neuronal loss at the late stage, after extracellular Aβ aggregation 36 . In human postmortem brains of MCI due to AD, RIP1/3 were also not activated in cortical neurons possessing intracellular Aβ (Supplementary Fig. 12d ). YAP deprivation by intracellular Aβ induces Hippo pathway-dependent necrosis To further uncover the mechanism of intracellular Aβ-induced necrosis, we employed human induced pluripotent stem cells (iPSCs) carrying heterozygous or homozygous APP mutations (KM670/671NL) generated by genome editing 37 , differentiated them into neurons, and performed time-lapse imaging to elucidate the chronological relationship among the amount of intracellular Aβ, the transcriptional activity of TEAD-YAP, and ER ballooning. ER ballooning and rupture in heterozygous and homozygous AD-iPSC–derived neurons occurred at a higher frequency than in normal iPSC-derived neurons (Fig. 6a, b , Supplementary Movies 1 – 3 ), consistent with the observations in vivo (Fig. 3 , Supplementary Fig. 8 ) and with TRIAD 17 , 18 , 19 . Fig. 6: Time lapse imaging of multiple neurons suggests pathological cascade. a Experimental protocol to evaluate the relationship among intracellular Aβ, TEAD/YAP transcriptional activity and ER ballooning. b High magnification revealed that the membrane of the cytoplasmic balloon was reactive to ER-tracker in iPSC-derived neurons carrying APP mutations (left panels). ER ballooning occurs frequently in iPSC-derived neurons carrying APP mutations (heterozygous and homozygous mutants carrying APP KM670/671NL) (right graph). P -values were determined by Wilcoxon’s rank sum test with post-hoc Bonferroni correction, ** p < 0.01 ( N = 3 wells, n = 10 visual fields). c Accumulation of BTA1-stained intracellular Aβ occurs before ER ballooning in iPSC-derived neurons carrying APP mutations (left panels). Alignment of the changes in BTA1 signals in multiple neurons to the time point of ER ballooning initiation revealed that intracellular Aβ began to accumulate 10 h before ER ballooning (right graphs). d Construction of a plasmid vector to monitor TEAD/YAP-transcriptional activity.
e TEAD/YAP transcriptional activity was decreased in iPSC-derived neurons carrying APP mutations in comparison to normal iPSC-derived neurons (left panels). Alignment to the ER ballooning time point revealed that TEAD/YAP transcriptional activity started to decrease 8 h before ER ballooning. These results suggest a pathological cascade: increase of intracellular Aβ → decrease of TEAD/YAP transcriptional activity → ER ballooning. f Protocol of YAP-knockdown in normal human iPSC-derived neurons. g Time lapse imaging of siRNA-transfected neurons revealed the increase of ER ballooning by YAP-siRNA. Right graph shows quantitative analysis. P-values were determined by Wilcoxon’s rank sum test, ** p < 0.01 ( N = 3 wells, n = 10 visual fields). h Confirmation of siRNA-mediated YAP-knockdown by immunocytochemistry. Green and white arrows indicate siRNA-transfected and non-transfected cells, respectively. i Confirmation of siRNA-mediated YAP-knockdown by western blot. j HMGB1 concentration in the culture medium was quantified by ELISA, and the increase of HMGB1 from the initial concentration after siRNA transfection was compared between scrambled control siRNA and YAP-siRNA. P -values were determined by Student’s t -test ( N = 3 wells). The box plot shows the median, quartiles and whiskers that represent 1.5× the interquartile range. The bar graph indicates the mean ± S.E.M., together with the corresponding data points. Source data are provided as a “Source Data file”. Full size image Interestingly, BTA1 signals reflecting intracellular Aβ were increased nearly 10 h before the initiation of ER ballooning (Fig. 6c ). In addition, a TEAD-reporter vector 38 , composed of a TEAD-responsive element driving the mCherry gene to monitor YAP co-transcriptional activity and a CMV promoter driving EGFP to detect transfected cells (Fig. 6d ), revealed that TEAD-YAP transcriptional activity declined 8 h before ER ballooning, in accordance with the increase of BTA1-stained intracellular Aβ (Fig. 6e ). We also confirmed that siRNA-mediated knockdown of YAP (Fig. 6f ) directly induced ER ballooning in normal human iPSC-derived neurons (Fig. 6g ). Since YAP-siRNA decreased YAP protein in immunocytochemistry (Fig. 6h ) and western blot (Fig. 6i ) at the time point of 0 min, the duration from the decrease of TEAD/YAP transcriptional activity to the initiation of ER ballooning was estimated to be 2–4 h. YAP-siRNA significantly increased HMGB1 released from necrotic iPSC-derived neurons (Fig. 6j ). Injection of YAP-siRNA into the cerebral cortex of normal control mice (B6/SJL) promptly induced ER instability in transfected cortical neurons under in vivo imaging by two-photon microscopy (Fig. 7a, b ). Knockdown of YAP protein in siRNA-transfected neurons was confirmed by immunohistochemistry (Fig. 7c ) and western blot analyses of cortex tissues (Fig. 7d ). Intriguingly, we found patchy stains of pSer46-MARCKS (Fig. 7e ) induced by HMGB1, a DAMP molecule released from necrotic cells 23 , 24 . A high magnification of such a patchy stain revealed single or a few YAP-siRNA-transfected cells with extremely weak DAPI staining surrounded by pSer46-MARCKS (Fig. 7e ), matching well the nuclear morphology criteria defining active necrosis. The decreased nuclear volume revealed by quantitative analysis of 30 μm sections of cortex tissues after YAP-knockdown also supported TRIAD necrosis (Fig. 7f ). Fig. 7: YAP knockdown induces ER ballooning in normal mice. a Protocol of YAP knockdown in vivo.
Labeled YAP-siRNA or scrambled control RNA was transfected into normal mice (B6/SJL), and ER images of siRNA-positive neurons were obtained by two-photon microscopy starting 18 h later and continuing for 4 h. b In vivo time-lapse imaging of normal mice after siRNA transfection by two-photon microscopy. Right graph shows quantitative analysis of the ER volume of siRNA-positive neurons. c Knockdown of YAP was confirmed by immunohistochemistry after in vivo imaging by two-photon microscopy. Green arrow indicates an siRNA-transfected cell, and white arrow indicates a non-transfected cell. d Knockdown of YAP was confirmed by western blot of mouse cortex tissues dissected after observation by two-photon microscopy. e Immunohistochemistry of pSer46-MARCKS after siRNA transfection. Patchy immunostains were observed after transfection of YAP-siRNA but not scrambled control siRNA (low magnification). High magnification revealed DAPI-negative cell death surrounded by pSer46-MARCKS signals. Quantitative analysis confirmed the increase of pSer46-MARCKS immunostain signals by YAP-siRNA (right graph). The bar graph indicates the mean ± S.E.M., together with the corresponding data points. P -values were determined by Student’s t -test, ** p < 0.01 ( N = 3 mice, n = 30 fields). f Quantitative analysis of nuclear volume after YAP-knockdown supported TRIAD necrosis. Left panels: representative nuclei in 3D imaging, right graph: quantitative analysis of a cell nucleus. The box plot shows the median, quartiles and whiskers that represent 1.5× the interquartile range. P -values were determined by Wilcoxon’s rank sum test, ** p < 0.01 ( N = 3 mice, n = 50 cells). Source data are provided as a “Source Data file”. Full size image Moreover, we observed the whole process from Aβ accumulation to ER ballooning via repression of TEAD-YAP transcriptional activity in a single iPSC-derived neuron with heterozygous or homozygous AD mutations (APP KM670/671NL) (Fig. 8a ). EGFP-YAPdeltaC61, the neuronal isoform of YAP that has dynamics and roles similar to those of full-length YAP in TRIAD 19 , was electroporated into neurospheres, which were then differentiated into neurons. During this process, EGFP-YAPdeltaC61 was co-segregated with cytoplasmic Aβ (Fig. 8b , magenta arrow) and depleted from the nucleus (Fig. 8b ). On ER ballooning, YAP was further shifted to the ER ballooning protrusion (Fig. 8b , green arrow) and released by rupture (Fig. 8b , white arrow), while cytoplasmic Aβ remained as aggregates (Fig. 8b , blue arrow). All these processes are also shown in the movies (Supplementary Movies 4 – 6 ). We quantitatively confirmed in each neuron that the increase of BTA1 signal intensity was followed by the decrease of YAPdeltaC in the nucleus (Fig. 8c , Supplementary Movies 4 – 6 ). Fig. 8: Time lapse imaging of a single neuron elucidates pathological cascade. a Time lapse imaging protocol of a single neuron to analyze the relationship between intracellular localization of YAP, intracellular Aβ and ER ballooning. Representative images of the same cells (similar to the cells in b ) are shown in the right panels. b Time lapse imaging of an iPSC-derived neuron carrying APP mutations (homozygous mutant carrying APP KM670/671NL). Nuclear YAP was shifted to intracellular Aβ in the cytoplasm (magenta arrow) and further to the ballooned ER (green arrow). YAP was released to the extracellular space via leakage of the ballooned ER (white arrow) while intracellular Aβ remained as aggregates (blue arrow). The details are described in the text.
c Chronological change of nuclear YAPdeltaC intensity and cytoplasmic BTA intensity in three iPSC-derived neurons carrying APP mutations. d Immunohistochemistry of human cerebral cortex with anti-calnexin, an ER membrane marker, and anti-YAP antibodies. Abnormal localization of YAP in the cytoplasm or ballooned ER was frequently observed in MCI patients and at a low frequency in AD patients (white arrow), consistent with the findings in iPSC-derived neurons carrying APP mutations. Source data are provided as a “Source Data file”. Full size image Moreover, immunohistochemistry with anti-YAP and anti-calnexin antibodies revealed ER enlargement in YAP-deficient neurons of human MCI patients (Fig. 8d ). Consistent with iPSC-derived neurons carrying APP mutations (Fig. 8b ), YAP was aggregated in the cytoplasm or translocated into the ballooned ER (Fig. 8d ). Similar ER ballooning was also observed in postmortem human brains of AD patients, but at a lower frequency (Fig. 8d ). Time-lapse imaging by two-photon microscopy revealed that a small fraction of neurons possessing intracellular Aβ underwent TRIAD necrosis and that the residual intracellular Aβ after neuronal rupture might become a seed for extracellular Aβ aggregation (Supplementary Fig. 13 ). Because observing ER rupture in vivo is technically far more difficult, we could not capture the whole process in a single neuron in vivo. However, these data in vivo and in vitro collectively suggested sequential pathological processes of intracellular accumulation of Aβ, deprivation of YAP from the nucleus linked with suppression of TEAD-YAP transcriptional activity, and ER ballooning. S1P and YAPdeltaC rescue ER instability, necrosis, and cognitive impairment in vivo Next, we investigated whether sphingosine-1-phosphate (S1P) and YAPdeltaC61, a neuron-specific isoform of YAP possessing a rescue effect on Hippo pathway–dependent necrosis similar to that of full-length YAP 17 , 18 , 19 , rescue the pathology of 5xFAD mice. Regarding S1P, continuous intrathecal administration (40 nM, 0.15 µL/h) into the CSF space was initiated at either 1 or 5 months and continued until 6 months (Fig. 9a ). Regarding AAV-YAPdeltaC, a one-shot injection (1 × 10 10 vector genomes/mL × 1 µL) into the CSF space between the dura and brain parenchyma was performed at 1 or 5 months, and the same series of examinations was performed (Fig. 9a ). When administered from or at 1 month, S1P and AAV-YAPdeltaC restored the alternation rate in the Y-maze test in 5xFAD mice (Fig. 9b ), although their therapeutic effects were somewhat smaller when administration began from or at 5 months (Fig. 9b ). Fig. 9: S1P and YAPdeltaC rescue ER instability and cognitive impairment in AD model mice. a Experimental protocol for the rescue effect of S1P or YAPdeltaC (YAPdC) on ER instability and cell death in 5xFAD mice. Two protocols, administration before symptomatic onset (1 month) or administration just after onset (5 months), were used. AAV-NINS: AAV-CMV-no insert. b Alternation rate in the Y-maze test of 6-month-old 5xFAD mice that had been treated with S1P from 1 month (upper left panel) or 5 months (lower left panel) or injected with AAV-YAPdeltaC at 1 month (upper right panel) or 5 months (lower right panel). P -values were determined by Tukey’s HSD test, * p < 0.05, ** p < 0.01. N : shown below graphs. c ER instability was rescued at 6 months following treatment from 1 month with S1P or YAPdeltaC. P -values were determined by Welch’s test, ** p < 0.01 ( N = 3 mice, n = 9 cells).
d Pathological examination at 6 months following treatment from 1 month with S1P or YAPdeltaC. Staining of YAP with the sc-15407 antibody detecting YAP-FL and YAPdeltaC, intracellular/extracellular Aβ, and pSer46-MARCKS is shown. Right graphs show quantification of the four stains before and after the treatment. P-values were determined by Welch’s test, ** p < 0.01 ( N = 3 mice, n = 30 or 60). e Western blot with anti-Aβ antibody (82E1) confirmed the effect of S1P and AAV-YAPdC on Aβ burden in 5xFAD mice. f ELISA of Aβ1–40 and Aβ1–42 consistently showed the effect of S1P and AAV-YAPdC in reducing the Aβ burden in 5xFAD mice. P -values were determined by Tukey’s HSD test, * p < 0.05, ** p < 0.01 ( n = 4 mice). g Western blot confirming the increase of YAPdeltaC protein in cerebral cortex tissue of 5xFAD mice after AAV-YAPdeltaC infection. S1P also increased YAPdeltaC. P -values were determined by Tukey’s HSD test, # p < 0.05, ## p < 0.01 ( n = 3 tests). h Immunohistochemistry of cerebral cortex tissue of 5xFAD mice supported the increase of total YAP after S1P and AAV-YAPdeltaC treatments. P -values were determined by Tukey’s HSD test, ## p < 0.01 ( n = 50 cells). Box plots show the median, quartiles and whiskers that represent 1.5× the interquartile range. Bar graphs indicate the mean ± S.E.M., together with the corresponding data points. Source data are provided as a “Source Data file”. Full size image Consistently, two-photon microscopy revealed stabilization of ER volume in 5xFAD mice by S1P and AAV-YAPdeltaC (Fig. 9c ). Immunohistochemistry revealed that S1P and AAV-YAPdeltaC decreased the extracellular Aβ burden (Fig. 9d ) in addition to increasing nuclear YAP/YAPdeltaC (Fig. 9d ). The decrease in the abundance of extracellular Aβ plaques (Fig. 9d ) was further confirmed by western blot (Fig. 9e ) and ELISA (Fig. 9f ). YAPdeltaC and total YAP were increased after the S1P and AAV-YAPdeltaC treatments in cortex tissues by western blot (Fig. 9g ) and in cortical neurons by immunohistochemistry (Fig. 9h ). The decrease of extracellular Aβ aggregation could be explained by assuming that intracellular Aβ accumulation serves as a seed for extracellular Aβ aggregation after cell death 22 , 39 , 40 . However, further investigation is necessary to elucidate the relationship between intracellular Aβ accumulation and extracellular Aβ aggregation, as well as the relationship between Aβ metabolism and the Hippo pathway. Given that S1P and AAV-YAPdeltaC inhibit necrosis by increasing the effector molecule YAP, which acts downstream of Aβ, it is understandable why intracellular Aβ levels were unchanged despite a reduction in necrosis (Fig. 9d ). Normal sibling mice (B6/SJL) given the same S1P and AAV-YAPdeltaC treatments were also examined for YAP expression, intracellular Aβ and extracellular Aβ levels (Supplementary Fig. 14 ). S1P and YAPdeltaC rescue ER instability in AD-iPS-cell–derived neurons To further evaluate S1P and AAV-YAPdeltaC as candidate therapeutic strategies in human AD, we employed human iPSC-derived neurons carrying heterozygous or homozygous AD mutations (APP KM670/671NL) introduced by genome editing (Fig. 10a ). As in the experiments described above (Fig. 6 ), we detected ER ballooning and rupture in heterozygous and homozygous AD-iPSC–derived neurons (Fig. 10b ; Supplementary Movies 1 – 3 ). BTA1 barely stained normal iPSC–derived neurons, but stained >75% of AD-iPSC–derived neurons, reflecting intracellular Aβ accumulation (Fig. 10b, c ).
The frequency of ER ballooning and resultant cell death, identical to the TRIAD reported in Huntington’s disease 17 , was clearly higher in non-treated AD-iPSC–derived neurons than in non-treated normal iPSC-derived neurons (Fig. 10b, d ). AD-iPSC–derived neurons accumulating intracellular Aβ underwent TRIAD at a higher frequency, as noted above (Fig. 10d, e ; Supplementary Movies 7 – 9 ). As expected, 20 nM S1P did not affect intracellular Aβ accumulation (Fig. 10c ) but significantly suppressed the frequency of ER ballooning and resultant cell death both in total neurons and in neurons with intracellular Aβ accumulation (Fig. 10d, e ; Supplementary Movies 10 – 12 ). Fig. 10: S1P and YAPdeltaC rescue ER instability and necrosis in genome-edited iPSC-derived AD neurons. a Protocol for the rescue effect of S1P on ER instability and cell death. b Time-lapse images of non-treated iPSC-derived AD neurons exhibited ER ballooning (white arrows) and rupture. S1P treatment stabilized the ER and decreased necrosis. Aβ accumulated in the ER (yellow area) and leaked into the cytoplasm (green arrow). Representative images are shown in Supplementary Movies 7 – 12 . c S1P did not change %BTA1-positive iPSC-derived AD neurons. d , e Suppressive effect of S1P on ER ballooning and necrosis of iPSC-derived AD neurons d and BTA1-positive neurons e . f Protocol for the rescue effect of YAPdeltaC on ER instability and cell death. AAV-NINS: AAV-CMV-no insert. g Time-lapse images showed ER ballooning (white arrows) and rupture of iPSC-derived AD neurons, which were suppressed by YAPdeltaC expression. Aβ mainly accumulated in the ER (yellow area), but a portion leaked into the cytoplasm (green arrow). Representative images are shown in Supplementary Movies 13 – 18 . h YAPdeltaC did not change %BTA1-positive iPSC-derived AD neurons. i , j Suppressive effect of YAPdeltaC on ER ballooning and necrosis in iPSC-derived AD neurons i and BTA1-positive neurons j . k , l Rescue of transcriptional function, as determined using a TEAD-responsive element reporter plasmid in iPSC-derived neurons. S1P k and YAPdeltaC l restored TEAD-YAP/YAPdeltaC-dependent transcription, which was suppressed in iPSC-derived AD neurons. ** p < 0.01 ( N = 6 wells), Tukey’s HSD test. m Time-lapse images of shrinkage apoptosis and ballooning necrosis (arrow). n The ratio of each type of cell death (shrinkage apoptosis, rupture necrosis and ballooning necrosis) to total neurons that occurred naturally during 24 h of time-lapse imaging in normal and iPSC-derived AD neurons. ## p < 0.01 ( N = 3 wells, n = 10 visual fields), Tukey’s HSD test. o The ratio of each type of cell death to total neurons that occurred during 24 h after transfection of YAP-siRNA or scrambled-siRNA. ** p < 0.01 ( N = 3 wells, n = 10 visual fields), Wilcoxon’s rank sum test. c , d , e , h , i , j ** p < 0.01 ( N = 3 wells, n = 10 visual fields) by Wilcoxon’s rank–sum test with post-hoc Bonferroni correction. Box plots show the median, quartiles and whiskers that represent 1.5× the interquartile range. Source data are provided as a “Source Data file”. Full size image Similarly, we tested the effect of AAV-YAPdeltaC on ER ballooning (Fig. 10f ). In this independent experiment, BTA1 stained >75% of AD-iPSC–derived neurons (Fig. 10g ). AAV-YAPdeltaC remarkably suppressed the frequency of ER ballooning and resultant cell death in AD-iPSC–derived neurons (Fig. 10g, h, i, j ; Supplementary Movies 13 – 18 ).
We also confirmed that S1P increased the level of nuclear YAP protein (Supplementary Fig. 15a ), and that AAV-YAPdeltaC increased the level of nuclear YAPdeltaC protein (Supplementary Fig. 15b, c ) in AD-iPSC–derived neurons. Interestingly, this overexpression of YAPdeltaC also restored nuclear endogenous full-length YAP (Supplementary Fig. 15d ), presumably because overexpressed YAPdeltaC occupied intracellular Aβ, enabling endogenous YAP to undergo nuclear translocation. Consistent with this, TEAD-YAP/YAPdeltaC–mediated transcription, which was suppressed in heterozygous and homozygous AD-iPSC–derived neurons due to sequestration of YAP into intracellular Aβ aggregates, was rescued by S1P or AAV-YAPdeltaC, as evaluated by luciferase assay using a TEAD-responsive element reporter plasmid (Fig. 10k, l ). Meanwhile, S1P and YAPdeltaC did not affect the amount of intracellular Aβ, as determined by BTA1 (Fig. 10c, h ) or anti-Aβ antibody (Supplementary Fig. 15e, f ), supporting that intracellular Aβ accumulation occurs upstream of TEAD-YAP/YAPdeltaC–mediated transcription, ER instability and cell death. BTA1-mediated amyloid labeling suggested that Aβ was mostly localized to the ER in AD-iPSC–derived neurons (Figs. 6 , 10 ), consistent with the scenario outlined above. Higher magnification of BTA1-stained live neurons revealed that a small portion of Aβ shifted from the ER to the cytosol (green arrow, Supplementary Fig. 16 ), which could develop into intracellular Aβ aggregates (Figs. 6 , 8 , 10 ). Interestingly, before the necrosis process initiated, a very small portion of Aβ appeared to be excreted from cells as small vesicles at several sites on the cell membrane (white arrow, Supplementary Fig. 16 ). Although the ER signals were weak in Z-stack images, ER components were sometimes co-localized with such vesicles in single-slice confocal microscopy images (white arrow, Supplementary Fig. 16 ). Such Aβ secretion was also detected by immunocytochemistry with anti-Aβ antibody after fixation (Supplementary Fig. 15e, f , right panels). These results suggested that Aβ is secreted from intracellular Aβ–accumulating neurons by the exosome pathway via multivesicular bodies (MVBs). Other types of cell death in mouse AD models and human AD patients Finally, we summarize our data on the co-existence of other types of cell death in the brains of AD model mice and human MCI/AD patients. Though a previous paper 36 suggested increased necroptosis in postmortem human AD brains, its authors used antibodies against non-phosphorylated RIP1/3 and did not show co-activation of RIP1/3 and MLKL. It is not non-phosphorylated RIP1/3 and non-phosphorylated MLKL but phosphorylated RIP1/3 and phosphorylated MLKL that form the signal-transducing necrosome complex executing necroptosis 41 , 42 , 43 , 44 . In our immunohistochemistry of pRIP1/3 and pMLKL with brain samples of AD model mice and human AD patients, we could not detect co-localization of phosphorylated RIP1/3 and phosphorylated MLKL, which is essential for the signal transduction of necroptosis, in any single neuron of 5xFAD or APP-KI mice at 3 months of age or of human MCI and AD patients (Supplementary Fig. 17 ). In parallel experiments on cerebral cortex tissues after ischemia as a positive control of necroptosis, co-localization of pRIP1/3 and pMLKL was confirmed in almost all neurons (Supplementary Fig. 17 ).
In our human AD-iPSC–derived neurons, morphological classification according to a previous report 19 revealed that <20% of neurons shrank without cytoplasmic ballooning, mimicking apoptosis (Fig. 10m ). However, the percentage of such apoptotic shrinkage or necrotic rupture was not significantly different among normal, heterozygous and homozygous APP-mutant neurons (Fig. 10n ). Moreover, YAP-siRNA increased the shrinkage type of cell death, but the extent of the increase was no different from that with scrambled control siRNA (Fig. 10o ). Collectively, these data supported that YAP-dependent TRIAD necrosis is a dominant form of cell death at the early stage of AD pathology and could be a therapeutic target to halt progression. Discussion Morphological detection of dying neurons is difficult because such cells lose both chemical and immunohistological staining. However, a sensitive marker (pSer46-MARCKS) of the degenerative neurites surrounding dying neurons enabled us to detect necrosis efficiently. This technique revealed that the frequency of necrosis reaches a peak during the preclinical stage of AD pathology in two types of AD mouse models. Moreover, the technique revealed that active necrosis is more abundant at the prodromal stage of MCI than at the clinical stage of AD in human patients (Fig. 2d, e ). To the best of our knowledge, only the Herrup group has investigated cell death in MCI by using cell cycle markers, although their focus was not the chronological change of cell death 45 . Regarding the dynamics of active necrosis, we generated a formula based on the hypothesis that cell death occurs at a constant rate in the residual neurons and at regular time intervals. The predicted number of active necrosis events declined immediately after the onset of cell death and explained the actual chronological change well (Fig. 2f, g ). Moreover, multiple expected parameters also matched the observed data very well (Fig. 2h ), verifying the formula. In addition, we determined that neuronal cell death in the early stage of AD is Hippo pathway-dependent necrosis, similar to that induced by an RNA polymerase II inhibitor 17 , 18 or YAP sequestration by mutant Htt 19 . In the case of AD, YAP is sequestered to cytoplasmic Aβ, eventually impairing the function of YAP in the nucleus (Figs. 5 – 8 ). It remains unclear why YAP interacts with multiple causative proteins of neurodegenerative diseases. However, YAP is a member of the IDPs 46 , a family that includes TDP43, FUS, tau, α-synuclein and others, which mutually interact and are involved in neurodegenerative diseases 47 . We found that low-complexity sequences are distributed throughout mouse and human YAP (Supplementary Fig. 18 ), supporting that full-length YAP and YAPdeltaC could interact with Aβ via intrinsically denatured regions. Interestingly, a recent study implicated YAP as a hub molecule in AD pathology 10 . Xu and colleagues performed a meta-analysis of functional genomic data of AD and concluded that YAP is the most important hub molecule in the molecular network of AD 10 . Their subsequent experiments showed that YAP-KD increased the levels of Aβ 10 , consistent with our results. Thus, the increase in the YAP mRNA level that they observed 10 could represent a protective transcriptional response aimed at compensating for the reduced level of YAP protein. Although their results did not reveal the direct relevance of YAP to neuronal cell death in AD, their findings match our observations and hypothesis very closely.
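As a brief illustration of the low-complexity sequence analysis mentioned above (Supplementary Fig. 18): compositionally biased stretches of a protein can be flagged with a sliding-window Shannon-entropy scan, as in the minimal sketch below. The window size, entropy threshold and toy sequence are arbitrary illustrative choices, not the tool or parameters used in the study (dedicated low-complexity finders such as SEG use different statistics).

```python
import math
from collections import Counter

def window_entropy(seq):
    """Shannon entropy (bits) of the amino-acid composition of seq;
    low values indicate compositionally biased, low-complexity stretches."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def low_complexity_regions(protein, window=12, threshold=2.5):
    """Return (start, end) spans whose windowed entropy falls below threshold.
    Both parameters are illustrative assumptions."""
    hits = [i for i in range(len(protein) - window + 1)
            if window_entropy(protein[i:i + window]) < threshold]
    # Merge overlapping low-entropy windows into contiguous spans.
    spans, cur = [], None
    for i in hits:
        if cur and i <= cur[1]:
            cur = (cur[0], i + window)
        else:
            if cur:
                spans.append(cur)
            cur = (i, i + window)
    if cur:
        spans.append(cur)
    return spans

# Toy sequence with an artificial proline/serine-rich low-complexity stretch.
toy = "MDPGQQPPPQPAPQ" + "SPSPSPSPSPSP" + "GKLFVEMNARDEWHIT"
print(low_complexity_regions(toy))
```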
Although cell death has generally been suspected to be a terminal-stage pathology in AD, the evidence in support of this idea remains weak. Our experimental results suggest an alternative view regarding the timing and roles of cell death in AD (Supplementary Fig. 19 ). Intriguingly, Hippo pathway-dependent TRIAD necrosis occurs at an early stage and plays several critical roles in the progression of AD pathology. First, as a cell-autonomous process of degeneration, intracellular Aβ-induced necrosis decreases the number of cerebral neurons via sequestration of YAP. Second, as a non-cell-autonomous process, necrotic neurons release alarmins/DAMPs that trigger secondary cell damage in surrounding neurons. This process could expand degeneration to bystander neurons that contain only a low level of intracellular Aβ. Third, after necrosis, intracellular Aβ becomes the seed for extracellular Aβ aggregation, representing another non-cell-autonomous means of expanding degeneration. Fourth, prionoid transmission of Aβ and tau proteins could also be promoted by TRIAD necrosis, as shown by live images of AD-iPSC-derived neurons (Supplementary Figs. 15 , 16 ). Restoration of the YAP protein level using an AAV vector successfully inhibited necrosis during the early stage of AD. More importantly, the treatment efficiently prevented cognitive impairment and extracellular Aβ aggregation in AD model mice. Paired experiments using AD-iPSC-derived neurons further supported the therapeutic effects of YAPdeltaC. Given that no extracellular Aβ aggregates existed under our culture conditions, this experiment directly indicated that the necrosis was derived not from extracellular Aβ aggregation but from intracellular Aβ accumulation. Early-stage intervention in the molecules regulating Hippo pathway-dependent necrosis, or in the triggering of necrosis by intracellular Aβ, could suppress progression to the late-stage pathological changes, possibly including extracellular Aβ aggregation. Long-term follow-up of AAV-YAPdeltaC-treated mice for up to 6 months did not reveal tumors in systemic organs or the brain. However, further investigation with a GMP-grade vector would be necessary to finally confirm the safety of AAV-YAPdeltaC as a human therapeutic vector. Type III cell death with cytoplasmic changes homologous to TRIAD has been described repeatedly in historical neuropathology papers on human AD brains. For instance, Hirano and colleagues 48 reported the granulovacuolar body, a homologous large vacuole found in pyramidal neurons of Sommer’s sector in senile dementia, AD and Pick’s disease (now a form of FTLD). Another example is the paper by Dickson and colleagues 49 , which described ballooned neurons in AD, Pick’s disease, cortico-nigral degeneration, and pigment-spheroid degeneration. Since our previous work revealed TRIAD in Huntington’s disease pathology 50 , a number of papers have reported cell death morphologically homologous to TRIAD 51 , 52 , 53 , 54 , 55 , 56 . The current definition of MCI is largely based on subjective complaints by patients who have insufficient cognitive decline to be diagnosed with dementia and who remain adequately socially adjusted. No objective markers are available to support the subjective diagnosis or to evaluate the pathological state during the MCI stage.
Therefore, in combination with amyloid PET to quantify the extracellular Aβ burden, the use of CSF-HMGB1 to detect the amount of ongoing cell death could serve as a sensitive quantitative marker for evaluating disease progression as well as the effect of candidate drugs. In conclusion, we have provided evidence that neuronal necrosis induced by YAP deprivation occurs most actively in the early stages of AD, including preclinical AD, MCI, and the ultra-early stage of AD before extracellular Aβ aggregation. In addition, we showed that CSF-HMGB1 is a powerful tool for evaluating the activity of cell death at these stages. We also proposed therapeutic approaches targeting the change in the level of nuclear YAP in neurons, i.e., targeting Hippo pathway-dependent necrosis. Methods Patient cohort A summary of all patient information is provided in Supplementary Table 1. Cohort 1 consisted of four normal controls, one patient without dementia but with another neurological disease (disease control), 19 patients with MCI, and 18 patients with AD. Cohort 2 comprised 13 disease controls, seven MCI patients, and 17 AD patients. Cohort 3 comprised 30 normal controls and 30 AD patients. Cohort 4 comprised eight AD patients. Informed consent for the use of all human CSF samples was obtained, and the study was approved by the appropriate ethics committee at each institution and by Tokyo Medical and Dental University. Mini-Mental State Examination The Japanese version of the Mini-Mental State Examination (MMSE) was administered to each patient by the corresponding physician. CSF sampling All CSF samples were obtained by lumbar puncture before meal times and collected into polypropylene tubes. The CSF samples were centrifuged (1000 × g for 10 min at 4 °C) to remove any debris, and then stored in small aliquots at −80 °C. Aβ and tau measurement CSF-Aβ1–40 and -Aβ1–42 were measured by enzyme-linked immunosorbent assay (ELISA) using a human β-amyloid (1–40) ELISA kit (292-62301, Wako Chemical Co., Saitama, Japan) and a human β-amyloid (1–42) ELISA kit (298-62401, Wako Chemical Co., Saitama, Japan). CSF-pTau proteins were measured using INNOTEST Phospho-tau (181P, Innogenetics, Ghent, Belgium). High-sensitivity HMGB1 ELISA Polystyrene microtiter plates (152038, Nunc, Roskilde, Denmark) were coated with 100 μL of anti-human HMGB1 monoclonal antibody (1 mg/L, Shino-Test, Tokyo, Japan) in PBS and incubated overnight at 2–8 °C. The plates were washed three times with PBS containing 0.05% Tween 20, and then incubated for 2 h with 400 μL/well PBS containing 1% BSA to block the remaining binding sites. After the plates were washed again, 100 μL of each dilution of the calibrator and of the CSF samples (1:1 dilutions in 0.2 M Tris pH 8.5, 0.15 M NaCl containing 1% BSA) was added to the wells. The plates were then incubated for 24 h at 37 °C. The plates were washed again, and then incubated with 100 μL/well peroxidase-conjugated anti-human HMGB1,2 monoclonal antibody (Shino-Test, Tokyo, Japan) for 2 h at room temperature. After another washing step, the chromogenic substrate 3,3′,5,5′-tetramethylbenzidine (T022, Dojindo Laboratories, Kumamoto, Japan) was added to each well. The reaction was terminated with 0.35 M Na₂SO₄, and absorbance at 450 nm was read on a microplate reader (Model 680, Bio-Rad Laboratories, Hercules, CA, USA). A standard curve was obtained using purified pig thymus HMGB1 (Shino-Test, Tokyo, Japan).
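Reading unknown CSF samples off such a standard curve is typically done with a four-parameter logistic (4PL) fit. The snippet below sketches that generic procedure; the paper does not specify its curve-fitting method, and the calibrator absorbances shown are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # 4PL model commonly used for ELISA calibration:
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection point (EC50), b = slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibrator series (pg/mL) and measured A450 values.
conc = np.array([100, 250, 500, 1000, 2500, 5000], dtype=float)
a450 = np.array([0.08, 0.18, 0.33, 0.58, 1.10, 1.65])

params, _ = curve_fit(four_pl, conc, a450,
                      p0=[0.05, 1.0, 800.0, 2.0], maxfev=10000)

def invert_4pl(y, a, b, c, d):
    # Solve the 4PL model for concentration given an absorbance y.
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

sample_a450 = 0.45   # absorbance of a hypothetical unknown CSF sample
print(f"estimated HMGB1: {invert_4pl(sample_a450, *params):.0f} pg/mL")
```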
CSF samples with HMGB1 concentrations of 300 and 1000 pg/mL were analyzed to assess intra-assay (n = 10) and inter-assay (n = 10) precision. The intra- and inter-assay coefficients of variation were 4.8–6.1% and 4.1–9.1%, respectively. The working range of the assay was 100–5000 pg/mL. Recovery of purified pig thymus HMGB1 added to pooled CSF was 80–105% (n = 10). Human tissue samples Paraffin sections and frozen brain tissues were prepared from human MCI/AD brains and from control brains without dementia (non-neurological disease controls). Informed consent for the use of human tissue samples was obtained after approval by the ethics committee at each institution and by Tokyo Medical and Dental University. AD model mice 5xFAD transgenic mice overexpressing mutant human APP (770) with the Swedish (KM670/671NL), Florida (I716V), and London (V717I) familial Alzheimer's disease (FAD) mutations and human PS1 with FAD mutations (M146L and L285V) were purchased from The Jackson Laboratory (34840-JAX, Bar Harbor, ME, USA). Both the APP and PS1 transgenes were under the control of the mouse Thy1 promoter 33. The background of the mice was C57BL/SJL, produced by crossbreeding C57BL/6J females with SJL/J males. APP-KI mice possess a single human APP gene with the Swedish (KM670/671NL), Arctic (E693G), and Beyreuther/Iberian (I716F) mutations 34. Behavioral analysis Exploratory behavior was assessed using a Y-shaped maze consisting of three identical arms with equal angles between each arm (YM-3002, O'HARA & Co., Ltd., Tokyo, Japan). Mice at the age of 2 months were placed at the end of one arm and allowed to move freely through the maze during an 8-min session. The percentage of spontaneous alternations (indicated as an alternation score) was calculated by dividing the number of entries into an arm different from the previous one by the total number of transfers from one arm to another. Re-evaluation of anti-pSer46-MARCKS antibody Purification protocol, ELISA, and immunohistochemistry: [ENGHVKVNGDA(pS)PA] and [ENGHVKVNGDASPA] peptides were synthesized with a cysteine added at the N terminus and conjugated to KLH (keyhole-limpet hemocyanin). Two rabbits were immunized eight times over nine weeks with the phospho-peptide, and the serum collected one week after the final immunization was loaded onto a non-phospho-peptide column prepared with the [ENGHVKVNGDASPA] peptide and then onto a phospho-peptide column prepared with the [ENGHVKVNGDA(pS)PA] peptide. Antibodies bound to each column were eluted with 0.1 M glycine-HCl buffer (pH 2.5). ELISA was employed to examine the specific reactivity of the anti-pSer46-MARCKS antibody to pSer46-MARCKS as follows. Fifty microliters of 20 μg/mL phosphorylated peptide [ENGHVKVNGDA(pS)PA] or non-phosphorylated peptide [ENGHVKVNGDASPA] in PBS was added to each well of the plates (ELISA Plate strips, #FEP-100-008-1, Guangzhou JET Bio-Filtration Products, Co., Ltd, Guangzhou, China) and left for 2 h at room temperature. After washing three times with PBS, 200 μL of 2% BSA in PBS was added to each well and incubated for 2 h at room temperature. After washing the plate twice with PBS, 100 μL of diluted anti-pSer46-MARCKS antibody in PBS was added to each well and incubated for 2 h at room temperature. After washing the wells four times with PBS, 100 μL of HRP-conjugated secondary antibody (ab150077, Abcam, Cambridge, UK) diluted in PBS was added to each well and incubated for 1 h at room temperature.
After washing the plate four times with PBS, 100 μL of TMB (EL-TMB Chromogenic Reagent kit, C520026, Sangon Biotech, Shanghai, China) was added to each well to start the reaction; after sufficient color development, the reaction was stopped with 100 μL of 2 M H₂SO₄. Absorbance was measured using a plate reader (iMark, 681130J1, Bio-Rad Laboratories, Hercules, CA, USA) at 450 nm. Immunohistochemistry was performed as follows: brain tissue sections from 6-month-old 5xFAD mice were incubated with the antibodies against Ser46-phosphorylated or non-phosphorylated MARCKS at a dilution of 1:2000 overnight at room temperature. Immunohistochemistry For immunohistochemistry, mouse or human brains were fixed with 4% paraformaldehyde and embedded in paraffin. Sagittal or coronal sections (5 μm thickness) were obtained using a microtome (REM-710, Yamato Kohki Industrial Co., Ltd., Saitama, Japan). Immunohistochemistry was performed using the following primary antibodies: rabbit anti-pSer46-MARCKS (1:2000, ordered from GL Biochem Ltd., Shanghai, China); mouse anti-amyloid β (1:5000, clone 82E1, #10323, IBL, Gunma, Japan); rabbit anti-calnexin (1:200, ab58503, Abcam, Cambridge, UK); mouse anti-KDEL (1:100, ADI-SPA-827, Enzo Life Sciences, NY, USA); rabbit anti-MAP2 (1:200, ab32454, Abcam, Cambridge, UK); rabbit anti-pSer909-LATS1 (1:100, #9157, Cell Signaling Technology, Danvers, MA, USA); rabbit anti-pThr210-PLK1 (1:5000, ab155095, Abcam, Cambridge, UK); rabbit anti-YAP (1:20, sc-15407, Santa Cruz Biotechnology, Dallas, TX, USA); mouse anti-RIP1 (1:200, #610459, BD Biosciences, CA, USA); rabbit anti-RIP3 (1:250, ab56164, Abcam, Cambridge, UK); rabbit anti-pSer166-RIP1 (1:400, #44590, Cell Signaling Technology, Danvers, MA, USA); rabbit anti-pSer232-RIP3 (1:100, ab195117, Abcam, Cambridge, UK); and rabbit anti-pSer345-MLKL (1:2000, ab196436, Abcam, Cambridge, UK). Secondary antibodies were as follows: donkey anti-mouse IgG Alexa488 (1:1000, A-21202, Molecular Probes, Eugene, OR, USA) and donkey anti-rabbit IgG Alexa568 (1:1000, A-10042, Molecular Probes, Eugene, OR, USA). Nuclei were stained with DAPI (0.2 μg/mL in PBS, D523, Dojindo Laboratories, Kumamoto, Japan). For multi-labeling, antibodies were labeled with Zenon Secondary Detection-Based Antibody Labeling Kits as follows: anti-calnexin, anti-RIP3, anti-pSer166-RIP1 and anti-pSer232-RIP3 (Zenon™ Alexa Fluor™ 555 Rabbit IgG Labeling Kit, Z-25305, Thermo Fisher Scientific, Waltham, MA, USA); anti-MAP2 (Zenon™ Alexa Fluor™ 647 Rabbit IgG Labeling Kit, Z-25308, and Zenon™ Alexa Fluor™ 488 Rabbit IgG Labeling Kit, Z-25302, Thermo Fisher Scientific, Waltham, MA, USA); anti-pSer345-MLKL (Zenon™ Alexa Fluor™ 647 Rabbit IgG Labeling Kit, Z-25308, Thermo Fisher Scientific, Waltham, MA, USA); anti-amyloid β (Zenon™ Alexa Fluor™ 488 Mouse IgG 1 Labeling Kit, Z-25002, Thermo Fisher Scientific, Waltham, MA, USA). All images were acquired by fluorescence microscopy (Olympus IX70, Olympus, Tokyo, Japan) or confocal microscopy (FV1200IXGP44, Olympus, Tokyo, Japan). ELISA evaluation of Aβ levels in mouse brains We performed sandwich ELISA using the Human βAmyloid (1–42) ELISA Kit or the Human βAmyloid (1–40) ELISA Kit (298-62401 and 292-62301, FUJIFILM Wako Pure Chemical Corp., Osaka, Japan). Total proteins extracted from 20 mg of mouse cortex tissue with 1 mL of RIPA buffer (10 mM Tris-HCl pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% SDS, 0.1% sodium deoxycholate) were ultracentrifuged at 100,000 × g at 4 °C for 1 h.
The supernatants were diluted to 100 µL, applied to Aβ1–42 (extract from 100 µg cortex/well) and Aβ1–40 (extract from 500 µg cortex/well) ELISA plates, and measured following the manufacturer's instructions. The ELISA plates were incubated at 4 °C overnight and washed five times with 1× wash solution; 100 µL of HRP-conjugated antibody solution was then added, and the plates were incubated at 4 °C for 1 h (Aβ1–42 kit) or 2 h (Aβ1–40 kit). After five further washes with 1× wash solution, 100 µL of TMB solution was added to the plates, which were incubated at room temperature for 30 min before the addition of 100 µL of stop solution. Absorbance was measured at 450 nm using a plate reader (SPARK 10 M, TECAN, Grodig, Austria). Immuno-electron microscopy The tissues were fixed with 4% paraformaldehyde for 12 h, followed by cryo-protective treatment with 30% sucrose. Frozen tissue blocks in cryo-compound were sliced at 20 μm thickness with a cryostat. Sections were incubated in 5% Block Ace (UKB80, DS Pharma Biomedical, Osaka, Japan) solution with 0.01% saponin in 0.1 M PB for 1 h, and stained with primary rabbit anti-pSer46-MARCKS (1:1000, ordered from GL Biochem Ltd., Shanghai, China) for 72 h at 4 °C, followed by incubation with a nanogold-conjugated goat anti-rabbit secondary antibody (1:100, N-24916, Thermo Fisher Scientific, Waltham, MA, USA) for 24 h at 4 °C. After 2.5% glutaraldehyde fixation in PB, nanogold signals were enhanced with R-Gent SE-EM Silver Enhancement Reagents (500.033, Aurion, Wageningen, Netherlands) for 40 min at 25 °C. Stained sections were post-fixed with 1.0% OsO4 for 90 min at 25 °C, dehydrated through a graded ethanol series, and embedded in Epon. Ultrathin sections (70 nm) were prepared with an ultramicrotome (UC7, Leica, Wetzlar, Germany) and stained with uranyl acetate and lead citrate. The sections were observed under a transmission electron microscope (JEOL model 1400 plus, JEOL Ltd., Tokyo, Japan). Immunoprecipitation Mouse cerebral cortex was lysed in a homogenizer with RIPA buffer (10 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% SDS, 0.1% DOC, 0.5% protease inhibitor cocktail (539134, Calbiochem, San Diego, CA, USA)). Lysates were rotated for 60 min at 4 °C and then centrifuged (12,000 × g for 1 min at 4 °C). Supernatant (250 μg) was incubated with a 50% slurry of protein G-Sepharose beads (100 μL, 17061801, GE Healthcare, Chicago, IL, USA), followed by centrifugation (2,000 × g for 3 min at 4 °C). Supernatants were incubated with 1 μg of antibody for 16 h at 4 °C with rotation. Antibodies were as follows: rabbit anti-YAP (1:100, #14074, Cell Signaling Technology, Danvers, MA, USA); mouse anti-amyloid β (1:5000, clone 82E1, #10323, IBL, Gunma, Japan). Protein G-Sepharose was added to the samples and rotated for 4 h at 4 °C, and the beads were then washed three times with RIPA buffer. An equal volume of sample buffer (125 mM Tris-HCl, pH 6.8, 4% SDS, 10% glycerol, 0.005% BPB, 5% 2-mercaptoethanol) was added, and the samples were boiled at 100 °C for 10 min before SDS-PAGE. Dot blot Protein concentrations of the samples were measured using the BCA Protein Assay Reagent (Micro BCA Protein Assay Reagent kit, 23225, Thermo Fisher Scientific, Waltham, MA, USA). After the membranes (Immobilon-P, IPVH00010, Merck Millipore, Burlington, MA, USA) were washed with TBS (20 mM Tris-HCl/pH 7.5, 500 mM NaCl), samples of 2.5 μg/25 μL were dropped onto the membranes using a Bio-Dot Apparatus (1706545, Bio-Rad Laboratories, Hercules, CA, USA) and left to stand overnight.
Next, the membranes were blocked with 5% skim milk in TBST (10 mM Tris/HCl pH 8.0, 150 mM NaCl, 0.05% Tween-20) and reacted with the primary and secondary antibodies listed below, diluted in Can Get Signal solution (NKB-101, Toyobo, Osaka, Japan). ECL prime (RPN2232, GE Healthcare, Chicago, IL, USA) or ECL select (RPN2235, GE Healthcare, Chicago, IL, USA) was used to detect the signals with a LAS500 imager (29005063, GE Healthcare, Chicago, IL, USA). Primary and secondary antibodies were diluted as follows: mouse anti-amyloid β (1:5000, clone 82E1, #10323, IBL, Gunma, Japan); HRP-conjugated anti-mouse IgG (1:5000, NA931VA, GE Healthcare, Chicago, IL, USA). Western blot Protein concentrations of the samples were measured using the BCA Protein Assay Reagent (Micro BCA Protein Assay Reagent kit, 23225, Thermo Fisher Scientific, Waltham, MA, USA). After samples were separated by SDS-PAGE, they were transferred onto polyvinylidene difluoride membranes (Immobilon-P, IPVH00010, Merck Millipore, Burlington, MA, USA) using the semi-dry method. Next, the membranes were blocked with 5% skim milk in TBST (10 mM Tris/HCl pH 8.0, 150 mM NaCl, 0.05% Tween-20) and reacted with the primary and secondary antibodies listed below, diluted in Can Get Signal solution (NKB-101, Toyobo, Osaka, Japan). Bands were visualized using ECL prime (RPN2232, GE Healthcare, Chicago, IL, USA) or ECL select (RPN2235, GE Healthcare, Chicago, IL, USA). Primary and secondary antibodies were diluted as follows: rabbit anti-YAP (H-125) (1:3000, sc-15407, Santa Cruz Biotechnology, Dallas, TX, USA); β-actin (C-4) (1:3000, sc-47778, Santa Cruz Biotechnology, Dallas, TX, USA); mouse anti-amyloid β (1:1000, clone 82E1, #10323, IBL, Gunma, Japan); rabbit anti-YAPdeltaC (1:9000) 17; mouse anti-RIP1 (1:1000, 610459, BD Biosciences, CA, USA); rabbit anti-RIP3 (1:1000, ab56164, Abcam, Cambridge, UK); anti-pSer166-RIP1 (1:1000, #44590, Cell Signaling Technology, Danvers, MA, USA); anti-pSer232-RIP3 (1:1000, ab195117, Abcam, Cambridge, UK); anti-pSer46-MARCKS antibody (1:100000, ordered from GL Biochem Ltd., Shanghai, China); anti-histone H3 antibody (1:1000, 630767, Merck, Darmstadt, Germany); HRP-conjugated anti-mouse IgG (1:3000, NA931VA, GE Healthcare, Chicago, IL, USA); and HRP-conjugated anti-rabbit IgG (1:3000, NA934VS, GE Healthcare, Chicago, IL, USA). Immunocytochemistry iPS-derived neurons were fixed in 4% PFA and then permeabilized by incubation with 0.1% Triton X-100 in PBS for 10 min at room temperature (RT). After blocking with blocking buffer (50 mM Tris-HCl pH 6.8, 150 mM NaCl, and 0.1% Triton X-100) containing 5 mg/mL BSA for 60 min at RT, cells were incubated with primary antibodies for 60 min, or for 180 min in the case of 6E10, and finally with secondary antibodies for 60 min at RT. The antibodies used for immunocytochemistry were diluted as follows: rabbit anti-YAP (1:100, #14074S, Cell Signaling Technology, Danvers, MA, USA), raised against amino acids around Pro435 of human YAP isoform 1; rabbit anti-YAP (1:200, sc-15407, Santa Cruz Biotechnology, Dallas, TX, USA), raised against amino acids 206–330 of human YAP; rabbit anti-YAPdeltaC (1:2000) 17; mouse anti-Aβ (1:250, clone 6E10, SIG-39300, Covance, NJ, USA); anti-pSer46-MARCKS antibody (1:2000, ordered from GL Biochem Ltd., Shanghai, China); Cy3-conjugated anti-mouse IgG (1:500, 715-165-150, Jackson Laboratory, Bar Harbor, ME, USA); and Alexa Fluor 488-conjugated anti-rabbit IgG (1:1000, A11008, Molecular Probes, Eugene, OR, USA).
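Returning briefly to the Behavioral analysis subsection above, the alternation score it defines maps directly onto a short computation. The sketch below implements exactly that definition (entries into an arm different from the previous one, divided by total transfers); the arm-entry sequence shown is hypothetical.

```python
def alternation_score(entries):
    """Percentage of transfers that enter an arm different from the
    previous one, following the Y-maze definition given above
    (number of alternations / total number of transfers x 100)."""
    transfers = list(zip(entries[:-1], entries[1:]))
    if not transfers:
        return 0.0
    alternations = sum(1 for prev, cur in transfers if cur != prev)
    return 100.0 * alternations / len(transfers)

# Hypothetical 8-min session: sequence of visited arms A, B, C.
session = ["A", "B", "C", "B", "B", "A", "C", "A"]
print(f"alternation score: {alternation_score(session):.1f}%")
```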
TEAD-YAP transcriptional activity Neurospheres differentiated from human iPS cells (with or without APP mutations) 37 were dissociated in TrypLE™ Select (12563-011, Thermo Fisher Scientific, Waltham, MA, USA) containing 10 μM Y27632 (253-00513, Wako, Osaka, Japan). In total, 4 × 10⁵ cells were collected by centrifugation and suspended in 20 μL of nucleofector solution (P3 Primary Cell 4D-Nucleofector™ X Kit, V4XP-3012, LONZA, NJ, USA). In total, 1 μg of pLL3.7-ires-GFP-TEAD-responsive-H2B-mCherry plasmid 38 (a generous gift from Prof. Yutaka Hata, Tokyo Medical and Dental University) was added to the cell suspension and electroporated into the cells with a 4D-Nucleofector (pulse program: CV-110) (4D-Nucleofector Core Unit, #AAF-1002B, LONZA, NJ, USA). The electroporated cells were cultured on Lab-Tek II chambered coverglass coated with poly-L-ornithine (P3655, Sigma-Aldrich, St. Louis, MO, USA) and laminin (23016015, Thermo Fisher Scientific, Waltham, MA, USA) in DMEM/F12 (D6421, Sigma-Aldrich, St. Louis, MO, USA) supplemented with B27 (17504044, Thermo Fisher Scientific, Waltham, MA, USA), Glutamax (35050061, Thermo Fisher Scientific, Waltham, MA, USA), and penicillin/streptomycin (15140-122, Thermo Fisher Scientific, Waltham, MA, USA). After differentiation to neurons for seven days, time-lapse images of pTEAD-driven mCherry and pCMV-driven EGFP were acquired with an Olympus FV10i-W (Olympus, Tokyo, Japan) at 30-min intervals for 36 h. Cells were co-stained with ER-Tracker™ Blue-White DPX (#E12537, Thermo Fisher Scientific, MA, USA), and ER signals were acquired in parallel. Signal intensity was measured using Fluoview software (Olympus, Tokyo, Japan). The chamber was kept at 37 °C with 5% CO₂ during the experiment. siRNA-mediated knockdown of YAP in iPS-derived neurons in vitro In total, 5 pmol (67.6 ng) of human YAP-siRNA (sc-38637, Santa Cruz Biotechnology, Dallas, TX, USA) or 5 pmol of Trilencer-27 universal scrambled negative control siRNA duplex (#SR30004, OriGene, Rockville, MD, USA) was transfected into human neurons differentiated from normal iPS cells (ASE-9203, Applied StemCell, Milpitas, CA, USA) using Viromer® BLUE (TT100300, OriGene, Rockville, MD, USA). Before transfection, the siRNA was labeled with the Label IT® siRNA Tracker™ Fluorescein Kit without Transfection Reagent (MIR7216, Mirus, WI, USA) according to the manufacturer's procedures. Eighteen hours later, cells were stained with ER-Tracker™ Red (BODIPY™ TR Glibenclamide) (E34250, Thermo Fisher Scientific, MA, USA) and NucRed™ Live 647 ReadyProbes™ Reagent (R37106, Thermo Fisher Scientific, MA, USA) for 60 min at 37 °C. Time-lapse images of iPS-derived neurons were acquired at ×60 magnification on an Olympus FV10i-W (Olympus, Tokyo, Japan) at 15-min intervals for 24 h. The chamber was kept at 37 °C with 5% CO₂. The ratio of cell death patterns was counted in cells transfected with the labeled siRNA. To validate the knockdown efficiency of the siRNA, cells were fixed and collected, and each sample was used for immunocytochemistry and western blotting. siRNA-mediated knockdown of YAP in mice in vivo Before siRNA injection, the siRNA was labeled with the Label IT® siRNA Tracker™ Fluorescein Kit without Transfection Reagent (MIR7216, Mirus, WI, USA) according to the manufacturer's procedures.
Under anesthesia with 1% isoflurane, we injected 300 ng of fluorescein-labeled siRNA (mouse YAP-siRNA, sc-38638, Santa Cruz Biotechnology, Dallas, TX, USA) or Trilencer-27 universal scrambled negative control siRNA duplex (sc-38637, Santa Cruz Biotechnology, Dallas, TX, USA) into the retrosplenial cortex (anteroposterior, −2.0 mm from bregma; lateral, 0.6 mm; depth, 1 mm) in a volume of 1 μL using in vivo jetPEI (201-10G, Polyplus-transfection, Illkirch, France) at 5 months of age (21 weeks). After 16 h, live-cell imaging of the ER was performed by two-photon microscopy following the method described in the section "In vivo imaging of neuronal ER and Aβ". After dissection, mouse cerebral cortices were fixed in 4% PFA overnight at 4 °C. Tissue sections were prepared at 30 μm thickness using a microtome (MICROM HM650V, Thermo Fisher Scientific, Waltham, MA, USA) and stained in floating condition. In brief, sections were incubated with blocking solution (10% FBS and 0.3% Triton X-100 in PBS) for 30 min at RT, and with a primary antibody (anti-YAP antibody (1:100, sc-15407, Santa Cruz Biotechnology, Dallas, TX, USA); anti-pSer46-MARCKS (1:2000, ordered from GL Biochem Ltd., Shanghai, China)) overnight at 4 °C. After three 5-min washes with PBS, the sections were incubated with goat anti-rabbit IgG Alexa568 for 1 h at room temperature, and with DAPI for 5 min. Pathological cascade analysis of single neurons pEGFP-YAPdeltaC was generated by subcloning the EcoRI-SalI fragment digested from pBS-YAPdeltaC 17 into pEGFP-C1 (#6084-1, Clontech, Mountain View, CA, USA). Neurospheres differentiated from human iPS cells (with or without APP mutations) 37 were dissociated in TrypLE™ Select (12563-011, Thermo Fisher Scientific, Waltham, MA, USA) containing 10 μM Y27632. In total, 4 × 10⁵ cells were collected by centrifugation and resuspended in 20 μL of nucleofector solution (P3 Primary Cell 4D-Nucleofector™ X Kit, V4XP-3012, LONZA, NJ, USA). One microgram of pEGFP-YAPdeltaC was added to the cell suspension and electroporated into the cells with a 4D-Nucleofector (pulse program: CV-110) (4D-Nucleofector Core Unit, AAF-1002B, LONZA, NJ, USA). The electroporated cells were cultured on Lab-Tek II chambered coverglass coated with poly-L-ornithine (P3655, Sigma-Aldrich, St. Louis, MO, USA) and laminin (23016015, Thermo Fisher Scientific, Waltham, MA, USA) in DMEM/F12 (D6421, Sigma-Aldrich, St. Louis, MO, USA) supplemented with B27 (17504044, Thermo Fisher Scientific, Waltham, MA, USA), Glutamax (35050061, Thermo Fisher Scientific, Waltham, MA, USA), and penicillin/streptomycin (15140-122, Thermo Fisher Scientific, Waltham, MA, USA). After differentiation to neurons for 6 days, time-lapse images of EGFP-YAPdeltaC were acquired with an Olympus FV10i-W (Olympus, Tokyo, Japan) at 30-min intervals for 36 h. Cells were co-stained with NucRed™ Live 647 ReadyProbes™ Reagent (R37106, Thermo Fisher Scientific, MA, USA) and 100 nM 2-(4′-methylaminophenyl)benzothiazole (BTA1, B9934, Sigma-Aldrich, MO, USA). The chamber was kept at 37 °C with 5% CO₂ during the experiment. Signal intensity was measured using ImageJ software (ver. 1.45s). Generation of AAV-YAPdeltaC The AAV vector plasmid carries an expression cassette consisting of a human CMV promoter, the first intron of human growth hormone 1, cDNA encoding rat YAPdeltaC (ins61; accession no. DQ186898.2), the woodchuck hepatitis virus post-transcriptional regulatory element (WPRE), and a simian virus 40 polyadenylation signal sequence (SV40 poly[A]) between the inverted terminal repeats of the AAV1 genome.
To generate the vectors, AAV2 rep and AAV1 vp expression plasmids and an adenoviral helper plasmid (pHelper, 240071-54, Agilent Technologies, CA, USA) were co-transfected into HEK293 cells by the calcium phosphate co-precipitation method. The recombinant viruses were purified by isolation from two sequential continuous CsCl gradients. Viral titers were determined by qPCR. Intrathecal injection of AAV and S1P In pharmacological rescue experiments, sphingosine-1-phosphate (S1P, 40 nM; Sigma-Aldrich, St. Louis, MO, USA) or artificial cerebrospinal fluid (aCSF; 3525, R&D Systems, MN, USA) was administered to mice at 4 weeks of age into the subarachnoid space via an osmotic pump (0.15 μL/h, model 2006, ALZET, Cupertino, CA, USA) for 28 days. For the YAP rescue experiment, AAV1-YAPdeltaC or AAV-CMV-NINS (titer: 1 × 10¹⁰ vector genomes/mL, 1 μL) was injected into the retrosplenial dysgranular cortex at −2.0 mm from bregma, mediolateral 0.6 mm, depth 1 mm. Two days after injection, imaging was performed 10 times at 20-min intervals using the following method. In vivo imaging of neuronal ER and Aβ The skull was thinned with a high-speed micro-drill over the surface of the mouse retrosplenial cortex 57. The head of each mouse was immobilized by attaching the head plate to a custom machine stage mounted on the microscope table. Two-photon imaging was performed using an FV1000MPE2 (Olympus, Tokyo, Japan) equipped with an upright microscope (BX61WI, Olympus, Tokyo, Japan), a water-immersion objective lens (XLPlanN25xW; numerical aperture, 1.05, Olympus, Tokyo, Japan), and a pulsed laser (MaiTaiHP DeepSee, Spectra Physics, Santa Clara, CA, USA). Four hours before imaging, BTA1 (100 nM, B-9934, Sigma-Aldrich, MO, USA) and ER-Tracker™ Red (100 nM, E34250, Thermo Fisher Scientific, MA, USA) were injected in a volume of 1 μL into the RSD at −2.0 mm from bregma, mediolateral ±0.6 mm, depth 1 mm, under anesthesia with 1% isoflurane. Both BTA1 and ER-Tracker™ Red were excited at 750 nm, and their emissions were collected at 495–540 nm and 575–630 nm, respectively. High-magnification imaging (101.28 × 101.28 μm; 1024 × 1024 pixels; 1 μm Z-step; 60–80 slices along the Z-axis) of cortical layer I in the RSD was performed with a 2× digital zoom through the thinned-skull window in the retrosplenial cortex 57. Images of BTA1 and ER-Tracker™ Red were analyzed according to three parameters: ER and BTA1 signal intensity, ER or BTA1 puncta volume, and the number of ER-positive or BTA1-positive cells per imaging volume. ER signal intensity was measured using ImageJ software (ver. 1.45s). Total ER puncta volumes belonging to a single neuron were quantified with IMARIS (Bitplane, Zurich, Switzerland). Neurons derived from genome-edited human iPS cells Normal human iPS cells (ASE-9203, Applied StemCell, Milpitas, CA, USA) were transfected with a mixture of plasmids expressing the gRNA (5′-GGAGATCTCTGAAGTGAAGATGG-3′) and the Cas9 gene, along with single-stranded oligodeoxynucleotides (for human APP KM670/671NL, 5′-TTGGTTGTCCTGCATACTTTAATTATGATGTAATACAGGTTCTGGGTTGACAAATATCAAGACGGAGGAGATCTCTGAAGTGAATCTGGATGCAGAATTCCGACATGACTCAGGATATGAAGTTCATCATCAAAAATTGGTACGTAAAATAATTTACCTCTTTC-3′) as donor DNA. The Cas9 gene was fused to the 2A peptide and the GFP gene. Cells were electroporated using a Neon system (MPK5000, Thermo Fisher Scientific Inc., MA, USA) with the following conditions: 1200 V, 30 ms, one pulse.
Cells were selected with 0.4 μg/mL puromycin for 24–48 h after transfection, and then subjected to a colony cloning process by picking and seeding each visible GFP-positive colony into a well of a 96-well plate. The cells were allowed to grow for 7–10 days, or until a conveniently sized colony had formed. A portion of the cells from each colony was subjected to genotype analysis. Briefly, genomic DNA from single-cell colonies was isolated and used to amplify a 308 bp DNA fragment using the primers 5′-GCATGTATTTAAAGGCAGCAGAAGC-3′ and 5′-CAATGCTTGCCTATAGGATTACCATGAAAACATG-3′. PCR fragments were subjected to Sanger sequencing. Positive clones were expanded, and a portion of the cells was resubmitted for sequencing to confirm the desired genotype. Primers are listed in Supplementary Table 3. Live imaging of iPSC-derived neurons Normal and mutant human iPS cells were plated on a 6 cm dish with 3 μM SB431542, 3 μM CHIR99021, and 3 μM dorsomorphin, and cultured for 6 days. Next, the iPS cells were dissociated into single cells using TrypLE™ Select (12563-011, Thermo Fisher Scientific, MA, USA) containing 10 μM Y27632 (253-00513, Wako, Osaka, Japan). To form neurospheres, the dissociated cells were cultured in KBM medium (16050100, KHOJIN BIO, Saitama, Japan) with 20 ng/mL human FGF-basic (100-18B, Peprotech, London, UK), 10 ng/mL recombinant human LIF (NU0013-1, Nacalai, Kyoto, Japan), 10 μM Y27632 (253-00513, Wako, Osaka, Japan), 3 μM CHIR99021 (13122, Cayman Chemical, Ann Arbor, MI, USA), and 2 μM SB431542 (13031, Cayman Chemical, Ann Arbor, MI, USA) under suspension culture conditions in a 10 cm cell-repellent dish. Neurospheres were passaged twice at 7-day intervals, and then dissociated in TrypLE™ Select containing 10 μM Y27632 (253-00513, Wako, Osaka, Japan). Dissociated cells were re-seeded onto coverslips coated with poly-L-ornithine (P3655, Sigma-Aldrich, St. Louis, MO, USA) and laminin (23016015, Thermo Fisher Scientific, Waltham, MA, USA) in 8-well chambers or 6-well plates with DMEM/F12 (D6421, Sigma-Aldrich, St. Louis, MO, USA) supplemented with B27 (17504044, Thermo Fisher Scientific, Waltham, MA, USA), Glutamax (35050061, Thermo Fisher Scientific, Waltham, MA, USA), and penicillin/streptomycin (15140-122, Thermo Fisher Scientific, Waltham, MA, USA). Two days later, cells were infected with AAV-CMV-YAPdeltaC-ins61 or AAV-CMV-NINS (MOI: 5000) or treated with 20 nM S1P (S9666, Sigma-Aldrich, MO, USA). Six days after viral or drug application, cells were stained with BTA1 (B-9934, Sigma-Aldrich, St. Louis, MO, USA) and ER-Tracker™ Red (BODIPY™ TR Glibenclamide; E34250, Thermo Fisher Scientific, Waltham, MA, USA) for 60 min at 37 °C, and then subjected to live-cell imaging. Time-lapse images of iPS-derived neurons were acquired at ×60 magnification on an Olympus FV10i-W (Olympus, Tokyo, Japan) at 30-min intervals for 48 h. The chamber was kept at 37 °C with 5% CO₂. The ratio of cell death patterns was determined 12 h after the start of time-lapse image acquisition. Luciferase assay with iPSC-derived neurons iPSC-derived neurons (2 × 10⁴ cells) were transfected with 10 μg of the 8xGTIIC-luciferase reporter (34615, Addgene, Watertown, MA, USA) and 10 μg of the pGL4.74[hRluc/TK] vector (E6921, Promega, Madison, WI, USA) using Lipofectamine LTX with Plus Reagent (14338100, Thermo Fisher Scientific, Waltham, MA, USA).
The 8xGTIIC-luciferase reporter plasmid possesses eight synthetic TEAD-binding sites upstream of the luciferase gene, making it YAP/TAZ-responsive; this construct was generated by adding four more TEAD-binding sites to 4XGTIIC-Lux, originally created by Ian Farrance 58. pGL4.74[hRluc/TK] encodes the luciferase reporter gene hRluc (Renilla reniformis). Forty-eight hours after transfection, an equal volume of Dual-Glo Luciferase Reagent (E2920, Promega, Madison, WI, USA) was added to each well. Firefly luminescence was measured on a Spark 10 M multimode microplate reader (TECAN, Männedorf, Switzerland) after incubation for 20 min. For Renilla luminescence, an equal volume of Dual-Glo Stop & Glo Reagent (E2920, Promega, Madison, WI, USA) was then added to each well, mixed, and measured on the Spark 10 M. For the recovery experiments, AAV-CMV-YAPdeltaC-ins61 or AAV-CMV-NINS (MOI: 5000) or 20 nM S1P (S9666, Sigma-Aldrich, St. Louis, MO, USA) was added to the culture medium 4 days before plasmid transfection. Electron microscopy of ballooning neurons in mice and humans Mouse and human brain samples were fixed with 2.5% glutaraldehyde in 0.1 M phosphate-buffered saline (PBS) for 2 h, incubated with 1% OsO4 buffered with 0.1 M PBS for 2 h, dehydrated in a graded ethanol series (50, 70, 80, 90, 100, 100, 100, and 100%), and embedded in Epon812 (E14120, Science Services, München, Germany). Semi-thin (1 μm) sections for light microscopy were collected on glass slides and stained for 30 s with toluidine blue. Ultrathin (90 nm) sections were collected on copper grids and double-stained with uranyl acetate and lead citrate. Images were obtained by transmission electron microscopy (H-7100, Hitachi, Hitachinaka, Ibaraki, Japan). Human samples (cerebral neocortex) were collected at autopsy and fixed directly in 4% paraformaldehyde for two nights. The sliced brain samples were preserved in phosphate buffer containing 20% sucrose at 4 °C until processing for ultrastructural observation. The subsequent procedures were the same as for the mouse tissue samples. Induction of apoptosis and necrosis of primary cortical neurons Mouse primary cortical neurons were prepared from E17 ICR mouse embryos. Cerebral cortex tissues were rinsed with PBS and incubated with 0.05% trypsin in PBS at 37 °C for 15 min. The cells were passed through a 70 μm cell strainer (22-363-548, Thermo Fisher Scientific, MA, USA), collected by centrifugation, and cultured in neurobasal medium (21103049, Thermo Fisher Scientific, Waltham, MA, USA) containing 2% B27 (17504044, Thermo Fisher Scientific, Waltham, MA, USA), 0.5 mM L-glutamine, and 1% penicillin/streptomycin (15140-122, Thermo Fisher Scientific, Waltham, MA, USA). Forty-eight hours later, the medium was changed to medium containing 0.5 μM AraC (C3631, Sigma-Aldrich, St. Louis, MO, USA). On Day 4 of primary culture, we added 50 mM glutamate to the culture medium for 3 h to induce apoptosis, or 25 μg/mL α-amanitin (A2263, Sigma-Aldrich, St. Louis, MO, USA) for 48 h to induce TRIAD necrosis. Neurons were then fixed with 4% FA and subjected to immunohistochemistry. For western blotting, we removed the culture medium, added 100 μL of sample buffer (62.5 mM Tris-HCl, pH 6.8, 2% SDS, 10% glycerol, 0.0025% BPB, 2.5% 2-mercaptoethanol) to each well of a 12-well plate, and recovered the samples. The samples were boiled for 10 min and subjected to SDS-PAGE.
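As a side note to the dual-luciferase assay described earlier in this section, reporter activity in such experiments is conventionally expressed as the ratio of firefly to Renilla luminescence, which corrects for transfection efficiency. The short sketch below illustrates that normalization; the counts and condition names are hypothetical, and the paper does not detail its exact normalization procedure.

```python
# Hypothetical raw luminescence counts (arbitrary units) for the
# 8xGTIIC firefly reporter and the pGL4.74 Renilla control.
readings = {
    "control":            {"firefly": 52000.0, "renilla": 41000.0},
    "APP_mutant":         {"firefly": 18000.0, "renilla": 39000.0},
    "APP_mutant_YAPdC":   {"firefly": 43000.0, "renilla": 40000.0},
}

# Normalize firefly by Renilla to correct for transfection efficiency,
# then express each condition relative to the control.
ratios = {k: v["firefly"] / v["renilla"] for k, v in readings.items()}
baseline = ratios["control"]
for name, r in ratios.items():
    print(f"{name:18s} relative TEAD-YAP activity: {r / baseline:.2f}")
```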
Ischemia induction of cerebral cortex tissues C57BL/6J mice were anaesthetized with 1.0% Isoflurane® (099-06571, FUJIFILM, Osaka, Japan). Body temperature was maintained at 36.5 ± 0.5 °C during surgery with a heating plate. Skin and hair were disinfected with 70% ethyl alcohol, and a midline neck incision was made. The common carotid arteries were carefully dissected free from fat tissue and the surrounding nerves, taking care not to injure the vagal nerves, and were isolated with a surgical thread. After obtaining a good view of the surgical field, the bilateral common carotid arteries were clipped for 10 min using microvascular clips (Dieffenbach Vessel Clip, straight 35 mm, Harvard Apparatus, Holliston, MA, USA). After the surgery, the mice were gently returned to their cages and monitored carefully until recovery. Statistics Box plots are used to depict the distribution of observed data, with individual values also plotted as dots. Each box plot shows the median and quartiles, with whiskers representing 1.5× the interquartile range. In the other types of plots, values in each group are summarized as mean ± S.E.M. Statistical differences between disease and control groups were evaluated by the Wilcoxon rank-sum test with post-hoc Bonferroni correction. Correlations between the HMGB1 concentration and other markers in each individual subject were calculated using Pearson's correlation coefficient. Ethics This study was performed in strict accordance with the Guidelines for Proper Conduct of Animal Experiments of the Science Council of Japan. This study was approved by the Committees on Gene Recombination Experiments, Human Ethics, and Animal Experiments of Tokyo Medical and Dental University (2016-007C6, O2014-005-09, and A2018-153A, respectively). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The authors declare that the data supporting the findings of this study are available within the article and its supplementary information. Full anonymized data will be shared on request from any qualified investigator. The Source Data underlying Figs. 1a, b, 2b, c, e, g, h, 3b, c, 4b, 5b, c, d, e, 6b, c, e, g, i, j, 7b, d, e, f, 8c, 9b, c, d, e, f, g, h, 10c, d, e, h, i, j, k, l, n and o, and Supplementary Figs. 1, 2, 3, 4a, d, 6a, b, 9b, d, 10b, 11b, d, 12b, 14, 15a, b, c, d, e and f are provided as a Source Data file. Code availability The original program code used to simulate the number of cells undergoing active cell death is available from our website. | Alzheimer's remains the leading cause of dementia in Western societies, with some estimates suggesting that as many as 24 million people worldwide are living with the disease. Alzheimer's is characterized by a progressive decline in cognitive ability that eventually affects even basic functions such as walking and swallowing. The exact cause of Alzheimer's is unknown, but pathological changes in the brain, including neuron loss and an accumulation of protein aggregates called beta-amyloid plaques, are a diagnostic hallmark of Alzheimer's disease. Mild cognitive impairment (MCI) describes the slight but measurable changes in cognitive function that are often a precursor to Alzheimer's disease. However, despite the importance of MCI, very little is known about the changes that occur in the brain during the progression from MCI to Alzheimer's.
In a recent study published in Nature Communications, researchers led by Tokyo Medical and Dental University have now discovered that preventing pathological changes in the brain at the MCI stage could eliminate Alzheimer's disease altogether. "Neuronal death is obviously very important in the development of Alzheimer's, but is notoriously difficult to detect in real time because dying cells cannot be stained using chemical or immunohistological methods," says lead author of the study Hikari Tanaka. "Because of this, we used a new biomarker called pSer46-MARCKS to detect degenerative neurites surrounding dying neurons, allowing us to quantify levels of necrosis, a prototype of neuronal death, at different stages of disease." Surprisingly, the researchers found that neuronal death occurred much earlier than originally thought, with higher levels of necrosis seen in patients with MCI than in patients with full-blown Alzheimer's disease. The researchers also observed a significant decrease in the levels of a protein known as YAP in Alzheimer's disease model mice and in human patients with MCI. YAP positively affects the activity of a second protein called TEAD, a deficiency of which leads to neuronal necrosis. Microscopic examination revealed that the missing YAP was sequestered within beta-amyloid plaques, which have also been linked to neuronal toxicity. By directly injecting a gene therapy vector expressing a YAP analog into the cerebrospinal fluid of mice that were genetically engineered to provide a model of Alzheimer's, the researchers were able to prevent early-stage neuron loss, restore cognitive function, and prevent the development of beta-amyloid plaques. "Confirming that neuronal necrosis was dependent on YAP was really the pivotal moment for us, but observing the almost transformative effects of YAP supplementation was hugely exciting," says senior author of the study Hitoshi Okazawa. "By showing that neuronal necrosis is YAP-dependent and begins prior to the onset of most symptoms, we predict that novel Alzheimer's disease therapies will be developed to prevent the initiation of Alzheimer's disease." "Another important point is that the necrosis of neurons accumulating intracellular beta-amyloid occurs before the formation of beta-amyloid plaques," continues Professor Okazawa. "Residual beta-amyloid left after neuronal necrosis seems to be the seed for beta-amyloid plaques outside of neurons. This discovery might change the amyloid hypothesis, which places extracellular beta-amyloid plaques at the top of the pathological cascade of Alzheimer's disease." | 10.1038/s41467-020-14353-6
Physics | When AI and optoelectronics meet: Researchers take control of light properties | Benjamin Wetzel et al, Customizing supercontinuum generation via on-chip adaptive temporal pulse-splitting, Nature Communications (2018). DOI: 10.1038/s41467-018-07141-w Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-018-07141-w | https://phys.org/news/2018-11-ai-optoelectronics-properties.html | Abstract Modern optical systems increasingly rely on complex physical processes that require accessible control to meet target performance characteristics. In particular, advanced light sources, sought, for example, for imaging and metrology, are based on nonlinear optical dynamics whose output properties must often finely match application requirements. However, in these systems, the availability of control parameters (e.g., the optical field shape, as well as propagation medium properties) and the means to adjust them in a versatile manner are usually limited. Moreover, numerically finding the optimal parameter set for such complex dynamics is typically computationally intractable. Here, we use an actively controlled photonic chip to prepare and manipulate patterns of femtosecond optical pulses that give access to an enhanced parameter space in the framework of supercontinuum generation. Taking advantage of machine learning concepts, we exploit this tunable access and experimentally demonstrate the customization of nonlinear interactions for tailoring supercontinuum properties. Introduction Complexity is a key characteristic of numerous physical systems, ranging from self-organisation to network access. Based on nonlinear or chaotic dynamics, and relying on a large number of parameters that are usually difficult to access, these complex systems have an ever-growing impact on our everyday life 1. In photonics, numerous systems fall within this category where, for instance, the development of advanced optical sources 2, 3, 4 is of tremendous importance for applications ranging from imaging to metrology 5, 6, 7. A relevant example of this problem is the generation of a supercontinuum (SC) 8, a broadband spectrum produced by an optical pulse propagating in a medium under the combined actions of dispersion, nonlinearities, and scattering effects 9, 10. In fibre-based systems, its underlying formation mechanisms are now well-understood and described within the framework of a modified nonlinear Schrödinger equation 3. Significant work recently focused on studying broadening effects triggered by modulation instability processes 11, initiated by long (i.e. supra-picosecond, >1 ps) pulses. In this case, the propagation dynamics are widely influenced by noise effects, ultimately resulting in incoherent output spectra 12, 13. However, many applications, including advanced metrology and imaging, today rely on coherent supercontinua 8, where reproducible and controllable features are particularly required 6, 7. Specifically, for, e.g., fluorescence imaging, pump-probe measurement techniques, spectroscopy, as well as coherence tomography, versatile control of both spectral and temporal SC properties is essential 7, 14, 15, 16, 17. Yet, reproducible SC generation typically requires ultrashort (i.e. sub-picosecond, <1 ps) pulses 8, where the means for controlling the propagation dynamics in a reconfigurable manner are limited 10, 18 (i.e. control is constrained by the design of the initial pulse condition and the properties of the propagating medium 19, 20, 21).
Therefore, despite numerous rigorous studies of this regime, the optimization of coherent SC remains an experimental challenge 14, 15, 22, 23. Specifically, the properties of single pulses deterministically seeding the generation of coherent supercontinua can be tuned over different degrees of freedom using state-of-the-art techniques (via, e.g., pulse shaping, polarization control, or acousto/electro-optic modulation) 10, 17, 18, 23, 24, 25, 26, 27, 28, 29. However, these approaches typically rely on external devices that present fundamental limitations in the sub-picosecond regime, in addition to suffering from complexity, bulkiness, and cost. More importantly, these techniques only allow for the control of very few single degrees of freedom (e.g. a single pulse's duration, shape, or power), restricting the available nonlinear dynamics to intra-pulse effects such as deterministic soliton ejection and frequency shifts 3, 30, thus ultimately hampering the ability to optimize the overall system output. In contrast, using more than one pulse to seed SC generation not only allows the excitation of multiple independent (intra-pulse) dynamics, but also leads to a richer variety of effects arising from the interaction of different components, i.e. inter-pulse effects such as multi-pulse soliton ejections, dispersive wave radiation, spectral superposition, steering, and collisions 3, 30, 31, 32, 33, 34. Yet, initial efforts in this direction 32, 33, 35, 36, 37, 38 have only partially unleashed the potential of this concept. Recent approaches have been limited to the use of long (supra-picosecond) incoherent seeds, or to mutually-coherent pulses with very restricted control over their number and properties. Consequently, the full extent of SC control via multi-pulse seeding has yet to be achieved. In this article, we demonstrate a method to drastically enhance the control parameter space for tailoring nonlinear interactions in guided fibre propagation and achieve SC generation with highly controllable properties. Using the length scales and stability available with integrated photonics 39, custom sets of multiple, mutually-coherent ultrashort optical pulses (with a minimal separation as low as 1 ps) are prepared using optical pulse-splitting on a photonic chip, which also enables the adjustment of their individual properties (e.g. power, shape, chirp). The size of the control parameter space (i.e. over 10³⁶ unique parameter combinations for 256 pulses) makes traditional experimental approaches, based on trial-and-error or exhaustive parameter sweeps, impossible. However, using machine learning concepts, in a similar fashion to approaches demonstrated in a variety of adaptive control scenarios 24, 40, 41, 42, 43, we are able to optimize different pulse patterns and experimentally obtain the desired SC outputs. Specifically, we measure the spectral output and employ a genetic algorithm (GA) 44, 45 to modify the integrated pulse-splitter settings in order to optimize the nonlinear fibre propagation dynamics towards a selected SC criterion (for instance, maximizing the spectral intensity at one or more targeted wavelengths). The results of this proof-of-concept demonstration exhibit versatile control of the output spectra, allowing us to experimentally achieve a seven-fold increase in the targeted SC spectral density compared to a single pulse excitation with the same power budget.
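To make the optimization loop concrete before turning to the results, the sketch below shows a bare-bones genetic algorithm of the general kind described: a population of candidate phase settings for the on-chip interferometers is iteratively selected, recombined, and mutated according to a measured fitness. In the real experiment, the fitness would be the OSA-measured spectral intensity at the target wavelength(s); here it is replaced by a synthetic stand-in with a known optimum, and the number of phases, population size, and mutation parameters are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PHASES = 9          # e.g. 8 splitting MZIs + 1 output MZI (assumed)
POP, GENS = 30, 60    # population size and generations (assumed)

def measure_fitness(phases):
    """Stand-in for the experiment: in the real loop this would drive
    the chip electrodes with `phases`, record the SC spectrum on the
    OSA, and return the intensity at the target wavelength. Here we use
    a smooth synthetic function with a known optimum for demo only."""
    target = np.linspace(0, np.pi, N_PHASES)
    return -np.sum((phases - target) ** 2)

population = rng.uniform(0, 2 * np.pi, size=(POP, N_PHASES))
for gen in range(GENS):
    fitness = np.array([measure_fitness(p) for p in population])
    elite = population[np.argsort(fitness)[-POP // 2:]]      # selection
    parents_a = elite[rng.integers(0, len(elite), POP)]
    parents_b = elite[rng.integers(0, len(elite), POP)]
    mask = rng.random((POP, N_PHASES)) < 0.5                 # crossover
    children = np.where(mask, parents_a, parents_b)
    mutate = rng.random((POP, N_PHASES)) < 0.1               # mutation
    children[mutate] += rng.normal(0, 0.3, mutate.sum())
    population = np.mod(children, 2 * np.pi)

best = population[np.argmax([measure_fitness(p) for p in population])]
print("best phase settings (rad):", np.round(best, 2))
```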
Additionally, we numerically show the potential of this technique not only for spectral shaping, but also towards full temporal control of SC generation. Results Experimental setup The approach proposed for the customization of nonlinear interactions via multiple pulse seeding is illustrated in Fig. 1. Our experimental setup (see Fig. 1c) comprised custom pulse train preparation via an integrated pulse-splitter and subsequent optical amplification, after which the multiple ultrashort pulses (~200 fs duration) were sent through 1 km of highly-nonlinear fibre (HNLF) to generate a SC. The spectrum of the pulse train measured at the HNLF input was centred around 1550 nm and had a 25 nm bandwidth (full-width at half maximum), corresponding to a 59 nm bandwidth at −10 dB. The input spectrum exhibited a small asymmetry (see Supplementary Figs. 1, 3 and Supplementary Discussion for details on the initial conditions) as well as an envelope modulation, whose properties depended on the pattern (i.e. number and separation) of the pulses generated by our integrated system. Following fibre propagation and spectral broadening, the SC output was measured using an optical spectrum analyser and assessed with respect to the target criteria (see Methods). The integrated device consists of a concatenation of balanced and unbalanced Mach-Zehnder interferometers (MZIs), as illustrated in Fig. 2 (see Methods for details). The interferometers are electronically controlled via integrated electrodes, which thermally induce an optical phase difference between the two arms of each interferometer. By adjusting the interferometers' splitting ratios, an input femtosecond pulse is divided into multiple fractions. These fractions follow different path combinations within the waveguide structure of the CMOS-compatible photonic chip, allowing the preparation of a coherent train of multiple pulses with adjustable peak powers and relative delays (with pulse separations as low as 1 ps, see Fig. 2b–e). The device exhibits low losses (~3 dB overall, see Methods), and the interferometers' integrated push-pull configuration provides excellent repeatability and stability against environmental perturbations. More importantly, the photonic chip enables versatile control of the pulse train (i.e. power, delay, pulse duration, chirp, etc.): specifically, the large Kerr nonlinearity and weak anomalous dispersion of the device waveguides 46 can bring about path- and power-dependent nonlinear phase shifts and temporal broadening. These control properties are very important for the optimization of coherent SC features: the ability to adjust the shapes, chirps, and powers of multiple pulses, as well as their relative delays and phases, constitutes the key ingredient required for efficient control of the independent and variable deterministic soliton radiation processes (i.e. intra-pulse dynamics) at the basis of spectral broadening in the current propagation regime. This control of the initial parameter space (and of the corresponding intra-pulse dynamics leading to subsequent soliton radiation) is also expected to condition inter-pulse dynamics during further fibre propagation, including the tuning of multiple soliton interactions such as repulsion, collision, or spectral superposition 8, 9, 30, 31, 32, 33.
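The splitting rules quoted in the Fig. 2 caption below (Δφ = 0, π, or ±π/2) map onto a compact toy model of the cascade. The sketch that follows treats each stage as an ideal, lossless splitter between a short arm and a delayed long arm, with stage delays doubling so that N stages can interleave up to 2^N pulses; the specific delay geometry, the cos/sin splitting convention, and the neglect of dispersion and nonlinear phase are simplifying assumptions made only to illustrate how phase settings map onto output pulse patterns.

```python
import numpy as np

def split_stage(pulses, dphi, delay_ps):
    """One unbalanced MZI stage: an input list of (time_ps, amplitude)
    pulses is split between a short arm (weight cos(dphi/2)) and a long
    arm (weight sin(dphi/2)) delayed by `delay_ps`. dphi = 0 keeps all
    light in the short arm, pi sends it to the long arm, and pi/2
    splits it equally, matching the settings quoted for Fig. 2b-e."""
    short = [(t, a * np.cos(dphi / 2)) for t, a in pulses]
    long_ = [(t + delay_ps, a * np.sin(dphi / 2)) for t, a in pulses]
    return [(t, a) for t, a in short + long_ if abs(a) > 1e-9]

def pulse_train(phases, base_delay_ps=1.0):
    """Cascade the stages; delays double each stage (assumed geometry)
    so that N stages can generate up to 2**N distinct pulses."""
    pulses = [(0.0, 1.0)]
    for n, dphi in enumerate(phases):
        pulses = split_stage(pulses, dphi, base_delay_ps * 2 ** n)
    return sorted(pulses)

# Four stages set to equal splitting -> 16 pulses separated by 1 ps,
# analogous to the pattern of Fig. 2e.
for t, a in pulse_train([np.pi / 2] * 4):
    print(f"t = {t:5.1f} ps, relative power = {a**2:.4f}")
```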
The use of multiple yet coherent pulse excitations is thus foreseen as a simple way to customize a wide variety of nonlinear interactions that are otherwise hard to tune using conventional pulse shaping techniques. Remarkably, these are here made accessible in a simple yet efficient integrated platform. Fig. 1 Concept of supercontinuum spectral customization via multiple pulse seeding. a Example of the spectro-temporal properties (spectrogram 53) of a single sub-picosecond pulse after propagation in a nonlinear optical fibre (see Methods). The newly-created spectral components experience progressive temporal walk-off 9, 30. At a given distance, only a few of the components overlap temporally, limiting nonlinear effects to intra-pulse interactions and restricting spectral shaping capabilities. b Using several pulses, more complex intra- and inter-pulse dynamics can be excited. Pulse-to-pulse separations on the order of the temporal walk-offs (~ps) enable the interplay of various spectral components generated from different pulses, thus providing enhanced nonlinear control over the spectral shaping. c Experimental setup: an integrated pulse-splitter is used to generate a custom train of pulses with ps-scale separation. After amplification with an erbium-doped fibre amplifier (EDFA), these are injected into a 1 km-long, highly-nonlinear fibre (HNLF) to form a supercontinuum, monitored using an optical spectrum analyser (OSA). A feedback loop is used to optimize the seed pulse train and tailor the supercontinuum output. Fig. 2 Operational principle of the on-chip optical pulse-splitter. a Schematic diagram: the sample comprises cascaded Mach-Zehnder interferometers (MZIs) and different delay lines made of high-refractive-index silica glass (white)—see Methods for details. Adjustable splitting and routing of the optical pulses into the various waveguide paths is controlled by tuning each MZI arm phase difference Δφ_N (where N is the interferometer number) via a thermally-induced refractive index change using electrodes (yellow). The diagram shows four interferometers used to adjust the relative splitting of pulses into the two arms of the subsequent unbalanced waveguide section. The last interferometer (Δφ_out) is used to recombine the train of delayed pulses and regulate the overall output power. b–e Examples of generated optical pulse patterns characterized using intensity autocorrelation (AC—top) and frequency-resolved optical gating (FROG—bottom) measurements 53. Multiple pulses (up to 16), featuring equal powers but different pulse-to-pulse separations (as low as 1 ps), were obtained by setting each of the N = 4 interferometers to predefined splitting conditions, in order to sequentially split and interleave pulse trains with variable relative delays. These simple pulse pattern examples were achieved by setting each MZI relative phase difference to one of three specific values only: Δφ_N = 0, so that the incoming signal (i.e. pulse or train of pulses) propagates entirely within the short arm of the unbalanced waveguide section; Δφ_N = π, so that the incoming signal propagates entirely within the long arm of the unbalanced waveguide section; Δφ_N = ±π/2, so that the incoming signal is equally split between the two arms of the unbalanced waveguide section (leading to a relative delay corresponding to the unbalance between these two arms).
Parameters: Δφ_1–4 = 0 for (b); Δφ_1–3 = 0 and Δφ_4 = π/2 for (c); Δφ_1–3 = π/2 and Δφ_4 = 0 for (d); Δφ_1–4 = π/2 for (e). Single-wavelength optimization In order to illustrate the versatility of our scheme for controlling nonlinear pulse propagation, we first demonstrated the enhancement of the power density at a single wavelength of the SC. For this, we limited our study to two cases. As a reference, we used the simplified case of a SC spectrum generated by a single pulse with adjustable power (Fig. 3a). Here, spectral broadening was mediated by the radiation of multiple solitons and dispersive waves, which subsequently experienced Raman self-frequency shifts 9, 30. For comparison, our second case used multiple excitation pulses (prepared using the integrated splitter), possessing the same power budget as in the single pulse study (50 mW), but with input parameters refined via genetic algorithm optimization (see Methods). Fig. 3 Supercontinuum spectral intensity optimization at selected wavelengths. a Spectral intensity map measured at the highly-nonlinear fibre output, generated by a single pulse seed as a function of its average power. b Maximal spectral intensity reached as a function of the selected optimization wavelength, considering either a single pulse seed (dashed black line—i.e. the maximal intensity retrieved from panel a), or the pulse-splitting optimization technique (with up to 16 pulses—red circles) for the same power budget—see Methods. c Spectral intensity enhancement (relative to the single pulse seed reference), for pulse-splitting performed with 16 (red dots) or 32 (blue diamonds) seed pulses. For reference, the input pump spectral location is shown as grey shadings in (b) and (c). d–f Examples of spectra obtained following intensity maximization at target wavelengths (blue shadings), using single pulse seeding (dashed black lines), or pulse-splitting optimization (red lines—with up to 16 pulses). The insets show the autocorrelation traces of the corresponding optimal input pulse trains and average powers P_in. We found that, for target wavelengths across the SC bandwidth, the use of multiple pulses enabled a 20–700% spectral density enhancement relative to the reference (see Fig. 3b, c). Examples of optimized spectra for three particular target wavelengths are shown in Fig. 3d–f and illustrate how the resulting spectra can vary significantly for a similar input power but drastically different pulse configurations (see insets). Note that, in this work, we restricted our study to a limited number of pulse seeds (either 16 or 32 pulses in the pattern, instead of the maximum of 256 achievable with the chip). Nevertheless, in the propagation regime studied (where SC spectral broadening above 2000 nm is intrinsically limited by fibre losses), the use of such a subset of the available parameter space is sufficient to provide notable spectral enhancement while ensuring fast convergence of the optimization process (see Methods and Supplementary Fig. 4 for discussion). As expected, the use of 32 pulse seeds was found to outperform the use of 16 pulse seeds (see Fig. 3c). In this regime, such behaviour can be explained by the potential of multiple pulse seeds to judiciously condition the spectral steering and, ultimately, the superposition of independently generated spectral components (see Supplementary Fig. 2).
It is foreseen that, for other applications and target SC outputs, a greater number of pulses, and consequently larger parameter spaces, will enable even better performance. Dual-wavelength optimization Remarkably, the ability to generate multiple pulses with specific delays and properties further enables the control and optimization of typically complex and interdependent dynamics. This feature is illustrated in Fig. 4, where simultaneous enhancement of the power density at two distinct SC wavelengths can be obtained with our scheme (see Methods). Here, we specifically targeted cases where both wavelength intensities were equivalent (see Methods), but further tunability is accessible depending on the exact optimization criteria used for the algorithm (see Supplementary Discussion and Supplementary Fig. 5). Indeed, arbitrary optimization criteria can be implemented 44, in stark contrast to what can be obtained with a single pulse seed. Spectral broadening mediated by soliton radiation is highly deterministic and typically leads to strong spectral correlation in the resulting SC 3, 31, 32, 47. In our case, however, multiple pulse excitation can seed both independent dynamics and customized nonlinear interactions. This alone provides a powerful example of how our integrated system, along with the implementation of machine learning concepts, can be efficiently used to tailor complex nonlinear processes without extensive system design. Fig. 4 Supercontinuum spectral intensity optimization for different wavelength pair combinations. a Examples of spectra obtained following simultaneous intensity maximization at two target wavelengths (blue shadings), using single pulse seeding (dashed black lines), or pulse-splitting optimizations (red lines—with up to 16 pulses). Note that we used here the same setup and power budget as in Fig. 3, and simply modified the algorithm optimization criteria (see Methods). b Optimization matrix showing the normalized intensity enhancement obtained for different wavelength pair combinations. This enhancement (see colour bar on the left axis) is calculated as the average intensity at both wavelengths and is normalized relative to the single pulse seeding case (see Methods). For clarity, we only report results where the intensity at one wavelength is less than twice as large as the intensity at the other wavelength (see Supplementary Fig. 5 for a complete analysis) Advanced spectro-temporal control Additionally, our approach has the potential to control the SC temporal properties, which we confirmed by numerical simulations using a shorter fibre propagation length (in order to ensure reliable and reasonably fast computation of the pulse propagation dynamics—see Methods for details). In particular, considering a 200 fs pulse (2 kW peak power) randomly split into a train of 64 pulses by our integrated device and propagated through a 50 m-long HNLF, we show how solitons radiating from the excitation pulses can individually experience different Raman self-frequency shifts and temporal walk-offs, leading to collisions, steering, and the generation of novel frequency components 14, 32, 33 (see Fig. 5a). Overall, such an enhanced parameter space can ultimately drive different propagation scenarios and thus provide a high degree of reconfigurability in terms of SC properties (see Supplementary Fig. 2 for different examples of propagation dynamics).
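The equal-intensity criterion targeted above can be phrased in standard multi-objective terms. The following minimal sketch, with made-up fitness values, shows how a Pareto front would be extracted from candidate (I_λ1, I_λ2) pairs and how a balanced trade-off point, analogous to the equal-intensity cases reported in Fig. 4, could be selected; Matlab's multi-objective GA performs a more elaborate version of this internally.

```python
def dominated_by(p, q):
    """q dominates p: at least as good at both wavelengths, better at one."""
    return all(qi >= pi for pi, qi in zip(p, q)) and any(qi > pi for pi, qi in zip(p, q))

def pareto_front(scores):
    """Non-dominated subset of (I_lambda1, I_lambda2) fitness pairs."""
    return [p for p in scores if not any(dominated_by(p, q) for q in scores)]

# made-up fitness pairs for five candidate pulse patterns
scores = [(0.9, 0.2), (0.5, 0.5), (0.4, 0.6), (0.3, 0.3), (0.8, 0.4)]
front = pareto_front(scores)      # drops the dominated (0.3, 0.3) point
balanced = max(front, key=min)    # equal-intensity trade-off, cf. Fig. 4 -> (0.5, 0.5)
```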
Indeed, depending on the initial conditions, highly variable yet coherent SC output spectra 3 can be obtained (Fig. 5b), with the additional possibility of selecting the delay and order with which specific spectral components emerge from the fibre (see Fig. 5c) 14. Fig. 5 Numerical simulations showing control of the supercontinuum spectral and temporal properties. a Example of supercontinuum (SC) temporal (left) and spectral (right) evolution in 50 m of highly-nonlinear fibre (HNLF). A train of 64 pulses, prepared using the integrated pulse-splitter (bottom), is injected into the HNLF to generate a broadband supercontinuum (top). b SC spectra obtained by simulating the propagation of 200 randomly prepared pulse patterns (grey). The average spectrum of these is plotted in black. Additional numerical analysis shows that, despite different evolution dynamics, the respective spectra individually retain a high average degree of coherence <g> thanks to the (coherent) optical splitting method employed (the average coherence of each individual spectrum illustrated, computed over a 20 dB bandwidth, was <g> = 0.973—see Methods) 3, 7, 39. c Examples of two different SC temporal profiles (top and bottom panels, obtained from two different input pulse patterns) after narrowband filtering at two specific wavelengths (i.e. 1700 and 1800 nm—see blue and red shadings in (b), respectively), showing that the differently-coloured pulses can exhibit diverse arrival times. d The relative delay between these filtered output pulses, computed for various integrated pulse-splitter configurations (brown dots), shows enhanced temporal tunability compared to SC generated from a single input pulse with randomly adjusted properties (grey squares)—see Methods Continuous control of the relative delay between two spectral components can thus be obtained over a large temporal window of ±40 ps (Fig. 5d—see Methods). This ability, obtained by exploiting the multiple pulses of the system, provides higher flexibility and a wider tuning range than conventional shaping techniques applied to a single pulse (see Fig. 5d and Methods for details) 10, 17, 18, 22, 26, 28. In particular, in the propagation regime studied here, single pulse seeding always leads to the formation of soliton(s) and associated dispersive waves, the temporal walk-off of which is intrinsically related to their wavelength (i.e. the most red-shifted solitons will typically exhibit a larger delay due to Raman scattering 8, 9). In this framework, even conventional shaping techniques are inherently limited to slightly adjusting the absolute value of the relative delays between different spectral components (see Fig. 5d). On the other hand, the use of multiple pulse seeding has shown that versatile temporal control between two (or potentially more) pulses at arbitrary wavelengths of the output spectrum can be obtained. This additional temporal tunability, typically required e.g. in advanced imaging systems 14, 16, 17, will complement the experimentally demonstrated spectral shaping, and is expected to be further enhanced (e.g. by activating additional interferometers and thus exponentially expanding the accessible parameter space). Discussion We have demonstrated how adjustable, integrated path-routing can be used to access a wide and controllable optical parameter space.
In combination with the use of genetic algorithms (GAs), we showed the generation of supercontinua with broadly reconfigurable characteristics. Most importantly, this was achieved with the same power budget, meaning that no additional amplification was used and that the benefits of pulse splitting far exceed the drawbacks of the additional optical loss of the integrated device. In particular, the improvements provided by the additional degrees of freedom in the multi-pulse excitation regime can condition the interleaving, superposition, and nonlinear interaction between multiple phase-locked pulses, which in turn significantly expands the controllable SC properties by allowing the customization of both their spectral and temporal power distribution. Besides this demonstration in the telecom range, our approach could be extended to other typical laser wavelengths (e.g. 800 or 1064 nm) and/or fibre designs. For instance, using an optical source to seed a fibre with a judiciously chosen dispersion profile (in order to circumvent the loss-induced spectral broadening limitations observed in the current HNLF) should readily yield coherent and reconfigurable octave-spanning SC generation with our proposed system. Similarly, the nonlinear fibre used in our experiments for SC generation could be shortened or readily integrated on a photonic chip 46, 48, 49, thus providing a compact and stable system for the deployment of advanced optical functionalities (such as on-chip f-2f interferometry based on coherent SC 50). In this context, we foresee this approach as an invaluable tool for the development of novel optical sources for, e.g., state-of-the-art imaging and metrology applications requiring both spectral and temporal tunability (e.g. pump-probe measurement techniques, hyperspectral imaging, or various schemes for coherent control) 4, 5, 6, 7, 8, 14, 16, 34. Additionally, it is worth underlining that we restricted our attention to sub-picosecond pulses (<1 ps) in order to avoid temporal overlap between adjacent excitation pulses during propagation in our integrated system. Yet, the design of a pulse-splitter with shorter relative delays or, equivalently, the use of longer pulses is also expected to unlock novel features in SC adaptive control (via e.g. coherent temporal synthesis 39 and tailored pulse superposition 31, 33), especially within the framework of fundamental studies associated with extreme event formation 11, 13, 35, 38. More generally, such an approach is expected to allow the optimal exploitation of complex optical systems without a priori knowledge of their dynamics. This may include applications related to advanced nonlinear signal processing 51, the control of frequency comb emission 4, 5, 22, as well as laser mode-locking 2. In turn, this can pave the way towards the next generation of self-adjusting lasers 40, 52 and 'smart' integrated optical systems. Methods Integrated pulse-splitter The on-chip photonic pulse-splitter was formed by cascading N = 9 balanced interferometers and M = 8 delay lines based on integrated optical waveguides (see Fig. 2), where the associated delays vary according to the relation Δτ M = 2Δτ M−1. The device was fabricated from a CMOS-compatible, high-refractive-index silica glass (Hydex) produced by chemical vapour deposition without the need for high-temperature annealing 46. Patterning was done using UV photolithography and reactive ion etching.
This material platform features a refractive index of n = 1.7, along with very low linear losses (0.06 dB cm⁻¹) and negligible nonlinear optical losses (none measured up to 25 GW cm⁻²). The waveguide dimensions allowed for single-mode TE and TM propagation at telecom wavelengths, where the dispersion and nonlinear properties were similar to those reported in ref. 46, as verified by means of optical vector analyser measurements (Luna OVA 5000). At the central wavelength of the pulses used in our experiments (i.e. λ = 1550 nm) and for the selected polarization, we estimated the waveguide dispersion to be extremely low (β2 = −2.87 ps² km⁻¹ and β3 = −0.0224 ps³ km⁻¹), while the effective nonlinearity is comparatively high (γ = 233 W⁻¹ km⁻¹). The input and output bus waveguides featured mode converters and were pigtailed to 1.2 m fibre patchcords (SMF-28) on each side, resulting in coupling losses of 1.4 dB per facet. The total propagation length in the sample varies between 5 and 9.5 cm, depending on the selected paths inside the structure. Overall, the total losses in our pulse-splitter were measured to be between 3.10 and 3.36 dB at 1550 nm, depending on the optical path (including the attenuation from the mode converters and pigtails). Gold electrodes were deposited on each arm of the nine balanced interferometers (the last is dedicated to output splitting and pulse recombination but does not introduce any delay), in order to induce a local and variable thermal modification of the adjacent optical waveguide. This modification was controlled electronically (see below) to produce a phase difference Δφ N between the two arms of the interferometer (where N is the interferometer number in the sample). For each of the eight delay interferometers, the waveguide thermal control allowed tunable switching of the optical output beam path between the short and the long arm of the following delay line (thus inducing a path difference equivalent to a delay Δτ M). The two output waveguides of such elements were then fed into the two input waveguides of the subsequent balanced interferometer. Via these cascaded blocks, it is possible to generate up to 2^M optical pulses with adjustable powers and temporal separation (multiples of Δτ 1). We note that in our integrated pulse-splitter the minimal delay was Δτ 1 = 1 ps and the maximal delay was Δτ 8 = 128 ps, so that we were able to generate up to 256 pulse replicas with adjustable individual powers and tunable temporal separation (multiples of 1 ps) over the range 1–255 ps. We also note that the photonic chip supports bidirectional optical propagation and can be used by inverting the input and output ports, as illustrated in Fig. 2a (thus leading to a propagation where the associated delays in the chip decrease according to the relation Δτ M = 0.5Δτ M−1). The splitting ratio of the interferometers was controlled via a push-pull architecture implemented by custom computer-controlled driving electronics, used to apply up to the 5.0 ± 0.2 V voltage swing required to completely switch the beam path from one interferometer output to the other. The splitting ratio characterization was carried out by measuring the optical power at each interferometer, and we found, for each element, an extinction ratio above 16 dB (i.e. a cross-talk below 2.5%).
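Because each of the M = 8 delay stages adds its delay only when the corresponding MZI routes the signal through the long arm, the achievable replica delays are simply all binary sums of the stage delays. The short enumeration below verifies the figures quoted above (256 replicas on a 1 ps grid spanning 0–255 ps); it is purely illustrative and ignores the replica amplitudes set by intermediate splitting ratios.

```python
from itertools import product

stage_delays_ps = [2 ** m for m in range(8)]   # 1, 2, 4, ..., 128 ps

# every replica corresponds to one binary routing choice through the 8 stages
replica_delays = sorted({sum(b * d for b, d in zip(bits, stage_delays_ps))
                         for bits in product((0, 1), repeat=8)})
print(len(replica_delays), min(replica_delays), max(replica_delays))   # 256 0 255
```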
From both electronic and optical characterizations, we estimated that the splitting ratio can be modified with a resolution of 0.01% (assuming a wavelength-independent response) via ~32,000 voltage control levels per interferometer over the voltage range required for complete path switching. For the overall sample, this corresponds to more than 10³⁶ different setting configurations (i.e. combinations) that can be employed for the versatile generation of multiple pulse replicas. The wavelength dependence of the interferometer splitting ratio was characterized using a tunable CW laser (Tunics-Plus). Over the range 1525–1575 nm (i.e. the bandwidth of the pulses used in our experiments), we found only negligible differences in the measured splitting ratio (±3% at most) and overall losses (~0.2 dB discrepancies in the worst-case scenario, when propagating along a single waveguide path). Finally, the response time of the system was estimated by switching one (or several) interferometer splitting ratios within the sample and temporally resolving the subsequent SC spectral modification induced after propagation in the HNLF. This was done using a fast spectrometer (see below), which was also used to verify the long-term stability and excellent repeatability of the pulse-splitter device. For one interferometer, the switching settling time (to reach thermal equilibrium) for the maximal voltage swing was estimated to be below 100 ms, including the overall lag time of the computer, driving electronics, and detection system (~30 ms overall). Using simultaneous switching of five interferometers, we observed slightly longer settling times, attributed to thermal cross-talk between adjacent interferometers as well as to the longer update times of the sequential commands sent to the electronic drivers (~160 ms). An overall 500 ms settling time was estimated to be sufficient for capturing the main spectral modifications in the experiments performed here while ensuring excellent repeatability. This was confirmed, in the case of the optimization routine, by choosing an extremely conservative 3 s settling time, which yielded results equivalent to those obtained with the 500 ms settling time. Experimental SC generation and control The fibre laser used in our experiments (Menlo C-Fiber) generated femtosecond pulses at a 250 MHz repetition rate. The initial spectrum, measured with an optical spectrum analyser (OSA), was centred at 1550 nm with a 52 nm bandwidth (full width at half maximum, FWHM). After suitable dispersion management and temporal recompression, the pulses were sent into our integrated pulse-splitter, allowing for the preparation of multiple pulses. The optical pulse had an estimated ~200 fs duration (FWHM) and a peak power of 300 W when entering the photonic sample. Subsequently, the optical output of the sample (i.e. the set of prepared sub-picosecond pulses) was amplified to the desired power level using a short-length (1.6 m) erbium-doped fibre amplifier (EDFA) before being injected into 1 km of HNLF. At this wavelength, the fibre operates in the anomalous dispersion regime, yet close to the zero-dispersion wavelength (ZDW = 1545 nm; see HNLF parameters below). After propagation in the HNLF, the broadband SC was characterized using a fast spectrum analyser (Avantes AvaSpec NIR512), allowing measurements over a 954–2580 nm window with a resolution of ~3.5 nm.
The spectral intensity(ies) at the wavelength(s) of interest were extracted from these measurements at each iteration and then used to implement the optimization criterion for the GA (see below and Supplementary Discussion). Complementary measurements were also performed over the range 600–1750 nm using the OSA (ANDO AQ6317B), allowing for consistency verification with improved resolution. Measurements of the initial conditions were carried out at the input of the HNLF: pulse spectral and temporal diagnostics were performed with the OSA as well as with a custom intensity autocorrelator and a frequency-resolved optical gating (FROG) setup 53. Temporal broadening of the sub-picosecond optical pulses was controlled via careful dispersion management throughout the setup until injection into the HNLF. Specifically, we used a combination of single-mode fibre (SMF-28) and dispersion-compensating fibre (DCF-38 from Thorlabs) with patchcords of specifically chosen lengths to (i) minimize the initial chirp of the pulse emitted from the fibre laser, (ii) temporally recompress the pulse (while limiting nonlinear effects) before injection into the integrated splitter, and (iii) keep the pulse temporal spreading to a minimum during and after propagation in the sample. This configuration, based on controlled dispersion management and subsequent amplification after pulse-splitting, allowed us to obtain ~200 fs pulses both at the HNLF input and throughout propagation in the photonic chip waveguides (thus avoiding temporal overlap between the adjacent prepared pulses), while limiting the overall distortions induced via nonlinear effects before injection into the HNLF. For the reference experiment using only a single pulse, the pulse-splitter was set with all splitting ratios at their minimal values (so that only one pulse follows the shortest possible waveguide path), while the EDFA current was tuned to linearly sweep the average power in 425 incremental steps over the range 0.1–50 mW. Note that we used an additional optical attenuator before the HNLF for measuring SC output spectra with input average powers below 2.5 mW (i.e. the EDFA amplification threshold). The best result within this ensemble of 425 output spectra, with respect to the desired optimization criterion, was used as the reference for comparing the results obtained via multiple pulse optimization. For these experiments, we set the EDFA current to the maximal value used previously (i.e. leading to 50 mW average output power after amplification). In this case, the relative powers of the pulses were directly controlled by modifying each delay interferometer splitting ratio (as well as the output interferometer, for regulating the overall power). GA parameters For the experiments presented, we made use of an optimization process based on a GA implemented directly with the dedicated Matlab function, which was employed to adjust the integrated pulse-splitter parameters via driving electronics controlled through a USB connection. Note that no other tuning parameters were used and all other experimental settings were kept constant. For assessing the optimization criteria, we used the spectral intensity measured at one (or two) wavelength(s) in the SC. For the reported experiments, the GA was configured to enhance the discrete spectral intensity at a selected wavelength (i.e. single-objective function, as seen in Fig. 3) or the spectral intensities at two selected wavelengths (i.e.
multiple-objective function, as seen in Fig. 4). In the latter case, the optimization process yields a so-called Pareto front (or frontier) 44, corresponding to the optimum set of parameters leading to the best trade-offs between the optimization criteria. Note that in Fig. 4, we only illustrate the case where both spectral intensities at the selected wavelengths of interest are similar. A detailed analysis of the Pareto front clearly indicates that more sophisticated optimization processes can readily be achieved (e.g. allowing one to select the ratio between the spectral intensities at the desired wavelengths—see Supplementary Fig. 5 for additional details). This, in turn, leads to more versatile output properties. For proof-of-principle demonstrations, we kept the parameters of the GA function at their default values and selected a crossover of 50%, i.e. the ratio of 'genes' (the voltage of a single interferometer) from each 'individual' (the set of all voltages for the interferometer array) carried from one 'generation' to the next 44, 45. The number of individuals populating each generation (i.e. each iteration step of the GA) was adjusted depending on the number of genes per individual (i.e. the number of actively-controlled interferometers). In addition, the maximal number of generations was limited in order to achieve a meaningful optimization towards the targeted SC output in a reasonable time frame (~1 h for each optimization process). Specifically, for five active interferometers (i.e. for generating 16 pulses), we constrained our algorithm to 15 generations with 500 individuals. Correspondingly, when 6 interferometers were actively adjusted (i.e. for 32 pulses), we expanded the population size to 1,000 individuals in order to obtain a better sampling of the initial parameter space (i.e. one additional gene per individual), while reducing the number of generations to 10. Note that, with such a limited number of generations, systematic convergence of the GA (in the strict mathematical sense) might not be fully reached. However, we obtained a consistent improvement in the desired SC properties even using this limited number of iterations. Such optimization behaviour was carefully verified for various sets of GA parameters (as well as for various settling times—i.e. the time between setting the system and taking the measurement, see Supplementary Fig. 4 and Supplementary Discussion), and was also observed in additional tests for more complex optimization objectives. Numerical simulations of nonlinear pulse propagation Our numerical simulations used a split-step Fourier method to solve the generalized nonlinear Schrödinger equation (GNLSE) 3, 8 for modelling the pulse evolution in both the on-chip photonic pulse-splitter and the HNLF: $$\frac{\partial A}{\partial z} + \frac{\alpha}{2}A - \sum_{k \ge 2} \frac{i^{k+1}}{k!}\,\beta_k \frac{\partial^k A}{\partial T^k} = i\gamma \left(1 + i\tau_{\mathrm{shock}} \frac{\partial}{\partial T}\right) \left(A(z,T) \int_{-\infty}^{+\infty} R(T')\, \left|A(z,T-T')\right|^2 \,\mathrm{d}T'\right)$$ (1) Here A(z,T) is the pulse envelope (in W^1/2), evolving in a comoving frame at the envelope group velocity β1⁻¹, so that T = t − β1 z.
The model includes higher-order dispersion (shown on the left-hand side of the equation) and nonlinearity (on the right-hand side), as well as the presence of loss α and initial broadband noise (i.e. one photon with random phase per spectral mode) 9, 10. The overall nonlinearity is represented by a nonlinear coefficient γ and includes the self-steepening effect (through a shock timescale τ shock = 1/ω0 = 0.823 fs). The nonlinear response function R(T′) = (1 − fR)δ(T′) + fR hR(T′) encompasses both an instantaneous Kerr effect and a delayed Raman contribution hR(T′), the weight of which is given by fR = 0.18. For the HNLF, we used the parameters retrieved from the manufacturer datasheet (OFS Fitel—HNLF ZDW1546): at a central wavelength of 1550 nm, the nonlinear parameter is γ = 11.3 W⁻¹ km⁻¹, the dispersion is slightly anomalous with β2 = −0.102 ps² km⁻¹, β3 = 0.0278 ps³ km⁻¹ and β4 = 4.0 × 10⁻⁵ ps⁴ km⁻¹, and the linear losses are 0.99 dB km⁻¹. For simulations of the on-chip photonic pulse-splitter, we modelled the pulse evolution through each waveguide element using a split-step method. This included the dual waveguide system with balanced and unbalanced interferometric structures and the associated beam splitter transfer functions. The splitting ratio of the MZI structures, allowing for tuning the pulse path and creating delayed pulse replicas, was modelled by adding a tunable phase offset on one arm of the respective balanced interferometer. The parameters of the waveguides at 1550 nm, assuming pure TM polarization, were taken as β2 = −2.87 ps² km⁻¹ and β3 = −0.0224 ps³ km⁻¹, with a nonlinear parameter γ = 233 W⁻¹ km⁻¹ and linear losses of 0.06 dB cm⁻¹ (see above and ref. 46). For the cases shown in Fig. 1, we simulated the evolution of a transform-limited Gaussian pulse of 200 fs duration (FWHM) with a peak power of 1 kW directly injected into 10 m of HNLF with the properties shown above (Fig. 1a), or split into four pulses with different peak powers and 1 ps separation (Fig. 1b). Note that the same individual pulse properties and overall input energy were used in both cases. The corresponding spectrograms were constructed using a 50 fs hyperbolic-secant gate function 10, 53. For the proof-of-concept simulations illustrated in Fig. 5, we considered a simple case where the integrated pulse-splitter was directly connected to 50 m of HNLF, with coupling losses of 1.4 dB per chip facet. At the input of the pulse-splitter, we injected a transform-limited Gaussian pulse of 200 fs duration (FWHM) with a peak power of 2 kW (i.e. typical values for a fibre laser producing such pulses at a 10 MHz repetition rate and 4 mW average output power). For this analysis, we carried out 10,000 simulations randomly varying the splitting ratios of 6 (+1 output) interferometers (so as to produce 64 pulses with 1 ps separation and adjustable powers). The extraction of the individual soliton/pulse properties at the HNLF output (see Fig. 5c) was carried out using a spectral filter with a 50 nm bandwidth (FWHM) around the wavelengths of interest. The subsequent analysis of the pulse relative delays at the HNLF output (see Fig. 5d) was obtained by discarding filtered pulses with a peak power below 20 W (i.e. below 1% of the initial pulse peak power). For this diagram, we post-selected cases where only one pulse was obtained at each wavelength (after filtering) within the ensemble of 10,000 simulations.
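For readers who wish to reproduce the qualitative pulse propagation behaviour, the following is a deliberately reduced split-step sketch of Eq. (1) using the HNLF parameters quoted above. It retains second- and third-order dispersion, loss, and the instantaneous Kerr term, but omits the Raman convolution, self-steepening, β4, and noise seeding of the full model, so it should be read as a starting point rather than as the authors' solver (sign conventions follow common GNLSE implementations).

```python
import numpy as np

# HNLF parameters quoted above, converted to per-metre units
beta2 = -0.102e-3    # ps^2/m
beta3 = 0.0278e-3    # ps^3/m
gamma = 11.3e-3      # 1/(W m)
alpha = 0.99e-3 * np.log(10.0) / 10.0   # 0.99 dB/km -> ~2.3e-4 1/m (power loss)

def ssfm(a0, t_ps, length_m, nz=4000):
    """Split-step integration of a reduced Eq. (1): 2nd/3rd-order dispersion,
    loss and instantaneous Kerr effect only (no Raman, shock, beta4, noise)."""
    w = 2.0 * np.pi * np.fft.fftfreq(len(t_ps), t_ps[1] - t_ps[0])  # rad/ps
    dz = length_m / nz
    lin = np.exp((1j * beta2 / 2.0 * w**2 + 1j * beta3 / 6.0 * w**3
                  - alpha / 2.0) * dz)
    a = a0.astype(complex)
    for _ in range(nz):
        a = np.fft.ifft(np.fft.fft(a) * lin)          # linear step: dispersion + loss
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz)   # nonlinear step: Kerr phase
    return a

t = np.linspace(-20.0, 20.0, 2**13)                               # time grid (ps)
a0 = np.sqrt(2000.0) * np.exp(-2.0 * np.log(2.0) * (t / 0.2)**2)  # 200 fs, 2 kW
out = ssfm(a0, t, 50.0)                                           # 50 m of HNLF
spectrum = np.abs(np.fft.fftshift(np.fft.fft(out)))**2
```

Replacing `a0` with a delayed superposition of pulse replicas, as produced by the splitter enumeration sketched earlier, mimics the multi-pulse seeding scenario of Fig. 5.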
The coherence analysis of the supercontinua was carried out by fixing the splitting ratios used in our model and performing 20 stochastic numerical simulations with random noise seeds (i.e. adding one photon with random phase per spectral mode). The degree of spectral coherence |g₁₂⁽¹⁾(λ, 0)| was retrieved as the mean value of the modulus of the degree of first-order coherence calculated at each wavelength λ over this ensemble of 20 simulations, which we found to be a sufficient number for a meaningful estimation of the SC coherence 36. <g> was computed as the mean spectral coherence over the 20 dB SC spectral bandwidth. Note that the coherence value mentioned in Fig. 5 was then obtained by repeating this procedure for 50 different pulse-splitter settings and averaging the results. Compared to a single input pulse with adjustable properties, the use of multiple and controllable pulses leads to enhanced tunability of the SC properties, as illustrated in Fig. 5d. For this comparison, we repeated the previous set of 10,000 stochastic simulations, replacing our integrated pulse-splitter with an arbitrary pulse shaper 25. In particular, we modelled the random variation of the pulse properties by first modifying the peak power P0 and the temporal profile asymmetry ε. The pulse envelope A(T) was constructed from two half-Gaussians with different widths, respectively \(T_0^\pm = (1 \pm \varepsilon)T_0\). These half-Gaussians were added so that their maxima overlap, forming a pulse of duration T0 = 200 fs (FWHM) with variable trailing/leading edge steepness. We then added a random quadratic and cubic spectral phase of the form \(\exp(i\eta\nu^2 + i\kappa\nu^3)\) on top of the pulse spectrum \(\tilde{A}(\nu)\), before finally implementing an additional nonlinear phase shift, i.e. a self-phase modulation with random and tunable nonlinearity of the form \(\exp(i\varphi_{\mathrm{NL}}|A|^2/P_0)\). Each of these parameters was randomly changed for 10,000 different realizations and uniformly distributed over the ranges P0 = [0, 2] kW; ε = [−0.5, 0.5]; η = [−0.5, 0.5] ps²; κ = [−0.05, 0.05] ps³; φ NL = [−3π, 3π]. Although not exhaustive, such adjustable properties are typical of common optical processing systems (e.g. extra fibre length, pulse spectral shaping, etc.) and allow for a direct comparison with our pulse-splitter-based simulations, as the typical SC output bandwidths and coherence degrees remained quantitatively similar for such pulse durations 3. Note that the SC filtering, processing, and post-selection described above remained otherwise unchanged. In order to verify the overall validity of the dynamics observed in our experiments, we also performed simulations of the evolution of a single pulse directly injected into 1 km of HNLF with variable input powers (see Supplementary Fig. 1). In this case, numerical simulations were carried out for propagation in the HNLF only, using the input conditions measured at the HNLF input from spectral (OSA) and temporal (autocorrelator/FROG) experimental characterization. Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request. | Using machine-learning and an integrated photonic chip, researchers from INRS (Canada) and the University of Sussex (UK) can now customize the properties of broadband light sources.
Also called "supercontinuum", these sources are at the core of new imaging technologies, and the approach proposed by the researchers will bring further insight into fundamental aspects of light-matter interactions and ultrafast nonlinear optics. The work was published in the journal Nature Communications on November 20, 2018. In Professor Roberto Morandotti's laboratory at INRS, researchers were able to create and manipulate intense ultrashort pulse patterns, which are used to generate a broadband optical spectrum. In recent years, the development of laser sources featuring intense and ultrashort laser pulses—which led to the Nobel Prize in Physics in 2018—along with ways to spatially confine and guide light propagation (optical fibres and waveguides) gave rise to optical architectures with immense power. With these new systems, an array of possibilities emerges, such as the generation of supercontinua, i.e. extended light spectra generated through intense light-matter interactions. Such powerful and complex optical systems, and their associated processes, currently form the building blocks of widespread applications spanning from laser science and metrology to advanced sensing and biomedical imaging techniques. To keep pushing the limits of these technologies, greater capability to tailor the properties of light is needed. With this work, the international research team unveils a practical and scalable solution to this issue. An ultrashort pulse is sent into an optical fibre and produces new frequency components via intense light-matter interactions. The progressive spectral broadening of the initial light pulse during propagation ultimately leads to the formation of a so-called supercontinuum. In the example here, this corresponds to a "white light" source which, similarly to a rainbow, is composed of all the colours seen in the visible region of the electromagnetic spectrum. Credit: Benjamin Wetzel Dr. Benjamin Wetzel (University of Sussex), principal investigator of this research led by Prof. Roberto Morandotti (INRS) and Prof. Marco Peccianti (University of Sussex), demonstrated that diverse patterns of femtosecond optical pulses can be prepared and judiciously manipulated. "We have taken advantage of the compactness, stability and sub-nanometer resolution offered by integrated photonic structures to generate reconfigurable bunches of ultrashort optical pulses," explains Dr. Wetzel. "The exponential scaling of the parameter space obtained yields over 10³⁶ different configurations of achievable pulse patterns, more than the estimated number of stars in the universe," he concludes. With such a large number of combinations to seed an optical system known to be highly sensitive to its initial conditions, the researchers turned to a machine-learning technique to explore the outcome of light manipulation. In particular, they have shown that the output light can be efficiently controlled and customized when their system is used in conjunction with a suitable algorithm that explores the multitude of available light pulse patterns to tailor complex physical dynamics. These exciting results will impact fundamental as well as applied research in a number of fields, as a large part of current optical systems rely on the same physical and nonlinear effects as those underlying supercontinuum generation.
The work by the international research team is thus expected to seed the development of other smart optical systems via self-optimization techniques, including the control of optical frequency combs (Nobel 2005) for metrology applications, self-adjusting lasers, pulse processing and amplification (Nobel 2018), as well as the implementation of more fundamental machine-learning approaches such as photonic neural network systems. | 10.1038/s41467-018-07141-w
Earth | Methane hydrate dissociation off Spitsbergen not caused by climate change | Klaus Wallmann et al, Gas hydrate dissociation off Svalbard induced by isostatic rebound rather than global warming, Nature Communications (2018). DOI: 10.1038/s41467-017-02550-9 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-017-02550-9 | https://phys.org/news/2018-01-methane-hydrate-dissociation-spitsbergen-climate.html | Abstract Methane seepage from the upper continental slopes of Western Svalbard has previously been attributed to gas hydrate dissociation induced by anthropogenic warming of ambient bottom waters. Here we show that sediment cores drilled off Prins Karls Foreland contain freshwater from dissociating hydrates. However, our modeling indicates that the observed pore water freshening began around 8 ka BP when the rate of isostatic uplift outpaced eustatic sea-level rise. The resultant local shallowing and lowering of hydrostatic pressure forced gas hydrate dissociation and dissolved chloride depletions consistent with our geochemical analysis. Hence, we propose that hydrate dissociation was triggered by postglacial isostatic rebound rather than anthropogenic warming. Furthermore, we show that methane fluxes from dissociating hydrates were considerably smaller than present methane seepage rates, implying that gas hydrates were not a major source of methane to the oceans, but rather acted as a dynamic seal, regulating methane release from deep geological reservoirs. Introduction Vast amounts of methane are bound in gas hydrates that accumulate in seafloor sediments across continental margins. These ice-like solids are stable under high-pressure/low-temperature conditions but dissociate under ocean warming or relative sea-level lowering. The global gas hydrate inventory totals some 1000 billion metric tons of carbon 1, the decomposition of which would affect carbon cycling and climate on the global scale 2, 3, 4. Hence, hydrate dissociation has been invoked to explain many observations, such as the Paleocene-Eocene Thermal Maximum 2 and the rapid postglacial increase in atmospheric methane 5. Whereas seafloor methane emissions and the associated formation of 13C-depleted carbonates have been ascribed to hydrate dissociation 6, 7, 8, direct evidence for the latter process is still conspicuously lacking. Nevertheless, it is argued that a positive feedback associated with methane release from widespread hydrate dissociation could amplify future global warming 4. Observed methane seepage from the upper continental slope of northwestern Svalbard at ~400 m water depth has been attributed to gas hydrate dissociation induced by warming of ambient bottom waters and postulated to mark the onset of this future trend 7. Numerical modeling studies support this hypothesis, since numerous seepage sites are located at the up-dip limit of the gas hydrate stability zone, where a moderate rise in ambient bottom water temperature would induce hydrate decomposition 9. However, gas hydrates have never been sampled from the upper slope margin off northwest Svalbard, and dating of authigenic carbonates associated with the methane seeps reveals that seepage has been active for at least 3000 years 8. Moreover, methane seepage is also known to prevail at depths shallower than the hydrate stability zone 10, 11, and a hydrate-bearing seep area south of Svalbard shows limited influence from short-term ocean warming 12.
Hence, methane seepage from the seafloor may not originate from dissociating gas hydrates but from free gas that migrates to the seafloor along high-permeability stratigraphic or structural conduits 10. Here, we present the first geochemical data that unequivocally confirm gas hydrate dissociation in sediments cored off Western Svalbard. We find remnant freshwater from hydrate dissociation that formed over the last 8000 years, when isostatic rebound induced by the deglaciation of the Barents Sea ice sheet outpaced eustatic sea-level rise. Furthermore, we find that seafloor methane seepage subsequently increased because the permeability of the sediments was enhanced by the decay of hydrates that previously clogged the pore space, thereby enhancing methane release from underlying geological reservoirs. Results Sampling During R/V MARIA S. MERIAN cruise MSM57 in August 2016, sediment cores were recovered using the MARUM-MeBo70 drill-rig and a conventional gravity corer (GC) at the upper slope off northwestern Svalbard, where numerous gas flares were previously identified (Fig. 1) 11. A micro-temperature logger (MTL, Antares type 1854) was modified to fit into the core pilot tube to measure in situ formation temperature during MeBo deployments. The cores were analyzed for porosity, while dissolved chloride and sulfate concentrations were determined in pore fluids separated from the bulk sediment (as outlined in the methods section). Fig. 1 Location of coring sites and gas flares. Gas flares (blue dots) were identified during a previous cruise 11. Locations are listed in Supplementary Table 1 Sediment and pore water composition Sediments in the recovered cores are mixed hemipelagic to glaciomarine deposits composed of a wide range of grain sizes, from clay to sand, with variable amounts of gravel- to pebble-sized rocks. They were deposited by ice-rafting and/or as glacial debris flows associated with nearby trough-mouth-fan deposition 13 and bottom current activity 14 on the upper slope during the Late Pleistocene. Our measurements indicate a down-core temperature increase corresponding to a geothermal gradient of 45–50 °C km⁻¹ and a general decline in porosity, dissolved chloride, and sulfate with sediment depth (Fig. 2). Porosity profiles reflect compaction and random grain size variations, with low-porosity sections dominated by sand/boulder intervals and high-porosity layers associated with significant clay/silt contents. Sulfate is removed from the pore water by microbial sulfate reduction and the anaerobic oxidation of methane (AOM). Elevated sulfate concentrations below 5 m sediment depth detected in cores GeoB21632-1 and GeoB21639-1 are probably artifacts caused by the intrusion of sulfate-bearing seawater that was employed as drilling fluid and penetrated into permeable sediment layers. Dissolved inorganic carbon (DIC) is strongly depleted in 13C at the base of the sulfate-bearing zone (Supplementary Fig. 1). These strongly negative δ13C values (−40‰) are driven by AOM 15 rather than by the degradation of organic matter 16. The down-core increase towards positive δ13C-DIC values (up to +17‰) may reflect active methanogenesis via CO2 reduction, leaving behind a residual DIC pool enriched in 13C 15. The isotopic composition of methane at the base of the cores (−53‰) is characteristic of biogenic gas containing a small but significant admixture of thermogenic methane from deeper sources 17.
It is similar to the isotopic composition of gas seeping from the seabed 11, 16 and of gas bound in methane hydrates sampled at 890 m water depth 18. Fig. 2 In situ temperature, porosity, and pore water composition. Symbols indicate observations and lines represent best-fitting model results. Chloride concentrations were corrected to account for seawater intrusion during the drilling process (methods section). a In situ temperature at 391 m water depth. b Porosity at 391 m water depth. c Dissolved sulfate in pore fluids at 391 m water depth. d Dissolved chloride in pore fluids at 391 m water depth. e In situ temperature at 404 m water depth. f Porosity at 404 m water depth. g Dissolved sulfate in pore fluids at 404 m water depth. h Dissolved chloride in pore fluids at 404 m water depth Dissolved chloride decreases significantly with sediment depth (Fig. 2). None of the drill cores contained gas hydrates, and measurements with an infrared camera conducted within 1 h of core retrieval showed no negative temperature anomaly indicative of endothermic gas hydrate dissociation 19. The in situ temperature measurements clearly show that methane hydrate was not stable in the cores taken at 391 m water depth, whereas at 404 m only the uppermost sediment section was located within the gas hydrate stability zone at the time of sampling (Fig. 3) 20. Hence, we conclude that the observed chloride depletion is not an artifact caused by gas hydrate dissociation upon core retrieval but rather indicates in situ admixture of freshwater. The isotopic composition (δ18O, δ2H) of the pore fluids and their lithium and boron contents (Supplementary Figs. 2 and 3) indicate that the freshwater indeed originates from gas hydrate dissociation 21 that occurred when temperatures increased to their present level and/or the pressure was reduced by a marine regression (Supplementary Discussion). Fig. 3 Phase boundary between free methane gas and methane hydrate. Phase boundaries are defined for structure type-I methane hydrate in sulfate-free pore water 20 for bottom water salinity (35 PSU, solid line) and the minimum salinity observed in the cores (32 PSU, broken line). In situ formation temperatures are plotted as solid squares (391 m water depth) and open circles (404 m water depth) Discussion Using a transport-reaction model (details in the methods section), we investigate potential scenarios of hydrate dissociation that are consistent with the geochemical variations observed within the boreholes. The 400 m deep seabed at the continental margin off northwestern Svalbard is primarily influenced by North Atlantic water 22. The temperature of this relatively warm bottom water is highly variable and affected by the strength of the Atlantic inflow via the European Nordic Seas into the Arctic Ocean 23. Temperature measurements conducted in the area over the last 30 years indicate mean summer values (May–October) of 2.7 °C at 400 m water depth with an interannual variability of ±1 °C 7. Summer temperatures have increased by 1 °C over the last 30 years 7. However, a 60-year record of bottom water summer temperatures off northwestern Svalbard at 360–400 m water depth reveals a cooling trend from 1950 to 1980 followed by a temperature rise until 2010 22. Hence, it is unclear whether the bottom water warming observed during the last decades is due to natural variability 23 or to anthropogenic forcing.
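The pressure-temperature argument behind Fig. 3 can be cross-checked with a few lines of code. The sketch below uses the empirical fit of Dickens and Quinby-Hunt (1994) for methane hydrate in seawater rather than the stability model of ref. 20, and assumes a nominal seawater density and gravitational acceleration, so the numbers are indicative only; they nevertheless reproduce the key observation that, at roughly 3 °C, the seafloor at 391 m lies at or just outside the stability field, while at 404 m only the shallowest, coolest sediments remain inside it.

```python
import numpy as np

def hydrate_eq_temp_C(depth_m, rho=1027.0, g=9.81):
    """Equilibrium temperature of structure-I methane hydrate in seawater at
    the hydrostatic pressure of the given water depth, using the empirical fit
    of Dickens & Quinby-Hunt (1994): 1/T[K] = 3.79e-3 - 2.83e-4 * log10(P[MPa]).
    The fit, rho and g are assumptions of this sketch, not the model of ref. 20."""
    p_mpa = rho * g * depth_m / 1.0e6
    return 1.0 / (3.79e-3 - 2.83e-4 * np.log10(p_mpa)) - 273.15

for depth in (391, 404):
    print(depth, round(hydrate_eq_temp_C(depth), 2))   # ~2.98 and ~3.29 degC
```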
Continuous temperature monitoring at 390 m water depth over a period of almost 2 years reveals strong seasonality, with minimum temperatures of 2–2.5 °C during May to June, maximum temperatures of 3.5–4 °C during November to December, and a mean annual temperature of 2.9 ± 0.5 °C for the year 2011 8. Considering these observations, we conducted a series of model experiments to investigate the response of hydrates at the seabed at 391 m water depth to ambient bottom water warming. Specifically, we model the evolution of a hydrate layer extending from 10 meters below the seafloor (mbsf) to the base of the hydrate stability zone located at 28 mbsf for an initial bottom water temperature of 2 °C and a geothermal gradient of 45 °C km⁻¹ (Fig. 4b). The model was forced by a linear ambient temperature increase from 2 °C in 1980 to 3 °C in 2010 superimposed on the observed seasonal cycle (Fig. 4a). Model results demonstrate that the conduction of heat through the sediment column (Fig. 4c) induced melting at the base of the hydrate stability zone, as shown by the chloride depletion at 28 mbsf (Fig. 4d). However, the modeled chloride depletion is much smaller than that observed because hydrate melting in the modeled scenario is limited by slow heat conduction and mitigated by the endothermic dissociation reaction 9. Additional model experiments conducted under alternative initial hydrate distributions also fail to reproduce freshening on the scale observed in our core data (Supplementary Fig. 4). Essentially, the modeling demonstrates that more time and energy than three decades of bottom water warming can supply are required to produce the observed down-core pore water freshening. Hence, we conclude that the observed chloride depletion was not produced by bottom water warming during the past three decades. Fig. 4 Model results for hydrate melting at 391 m water depth. a Bottom water temperatures (T BW) applied as model forcing. b Percent of pore space occupied by gas hydrate (Sat GH). c Bulk sediment temperature (T). Dots indicate temperatures measured in drill holes at 391 m water depth (Fig. 2). d Dissolved chloride concentration in pore fluids (Cl). Dots indicate concentrations in cores retrieved at 391 m water depth (Fig. 2) Upper water column temperatures (<200 m water depth) peaked during the early Holocene (8–11 ka) throughout the Nordic seas, including the area off northwestern Svalbard 24, 25, 26. This thermal optimum was followed by slow cooling, resulting in constantly low temperatures over the last few thousand years 26. It is not known whether these surface trends also apply to bottom waters in our study area. A sediment core taken at 327 m water depth yields a trend similar to that at the surface when benthic foraminiferal δ18O data are used to reconstruct ambient bottom water temperatures 26. However, a well-calibrated benthic transfer function applied to the same core does not show the early Holocene maximum but indicates that bottom water temperatures were constant over the entire Holocene 26. Nevertheless, we applied our model to investigate whether gas hydrate dissociation possibly induced by the early Holocene optimum might explain the observed chloride depletion (Supplementary Fig. 5). Subsurface temperatures (100–200 m) and bottom water temperatures (327 m) calculated from foraminiferal δ18O 26 were employed to define the model forcing. Bottom water temperatures were assumed to rise from an initial value of 2.15 °C at 13 ka to a maximum of 4.8 °C during the early Holocene.
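The rate-limiting role of conductive heat transport invoked above can be illustrated with a bare-bones 1-D diffusion calculation for the 1980-2010 warming scenario of Fig. 4. The thermal diffusivity (~1 × 10⁻⁶ m² s⁻¹, a typical value for marine sediments) and the grid are assumptions of this sketch, and the latent heat consumed by dissociation, which further slows melting, is neglected.

```python
import numpy as np

kappa = 1.0e-6 * 3.15e7      # thermal diffusivity: ~1e-6 m^2/s -> m^2/yr (assumed)
grad = 0.045                 # geothermal gradient (K/m), as measured
nz, dz = 61, 0.5             # 30 m sediment column, 0.5 m cells
dt = 0.4 * dz**2 / (2.0 * kappa)   # time step satisfying explicit stability (yr)

z = np.arange(nz) * dz
T = 2.0 + grad * z           # initial steady-state profile (2 degC bottom water)

def t_bottom_water(year):
    """Forcing: linear 1 K rise over 1980-2010 plus a +/-0.75 K seasonal cycle."""
    trend = 2.0 + min(max((year - 1980.0) / 30.0, 0.0), 1.0)
    return trend + 0.75 * np.sin(2.0 * np.pi * year)

year = 1980.0
while year < 2010.0:
    T[0] = t_bottom_water(year)                    # upper boundary: bottom water
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2] + grad * dz                      # lower boundary: constant heat flux
    year += dt
```

In this simplified setting only about 0.3 K of the 1 K warming trend reaches 28 mbsf by 2010, illustrating why three decades of bottom water warming cannot generate the observed freshening.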
A hydrate layer extending from 16 meters below seafloor (mbsf) to 20 mbsf was assumed as the initial condition. The simulations showed that the entire layer had dissociated by 10.7 ka because of the heat that penetrated into the sediment from above. The resulting chloride minimum was erased by molecular diffusion within a few thousand years. Hence, it is unlikely that the observed pore water anomaly was created by gas hydrate dissociation during the early Holocene. Relative sea-level data from Prins Karls Forland 27 and northwestern Svalbard 28 clearly document a marine regression during the Holocene, with a resulting drop in hydrostatic pressure that could have induced gas hydrate dissociation. Our drill sites at the upper continental slope are located ~50 km west of the coastal sites where major changes in relative sea level have been recorded 28. The upper slope was probably not covered by a grounded glacial ice sheet. However, the northwestern rim of the ice sheet was located at the adjacent shelf break, at a distance of only 5–10 km from the upper slope drill sites, during Last Glacial Maximum conditions 29. Considering the mechanical coupling between the continental shelf and upper slope, it follows that the upper slope experienced considerable isostatic depression during glacial conditions and subsequent uplift after ice sheet retreat. We use output from an isostatically coupled ice sheet model of the retreat of the Barents Sea ice sheet 30 to constrain the postglacial rebound history in our study area on the upper continental slope off Prins Karls Forland (Supplementary Fig. 6). Relative sea-level change (Fig. 5a) was calculated from seabed uplift and eustatic sea level 31 and applied as forcing for our sediment model to investigate whether the chloride depletion observed in the slope cores can be better explained by isostatic rebound. Fig. 5 Model results for pressure-driven gas hydrate dissociation at 391 m and 404 m water depth. a Relative sea level (rsl) applied as model forcing. b Concentration of dissolved sulfate (measured: symbols; modeled: black lines) and methane (modeled: red lines) at the end of the simulation (0 ka). Shapes of the modeled methane concentration profiles match those of ex situ methane concentration profiles analyzed during the cruise (data not shown). c Percent of pore space occupied by gas hydrate (Sat GH) at 391 m water depth. d Percent of pore space occupied by gas hydrate (Sat GH) at 404 m water depth. e Dissolved chloride concentration in pore fluids (Cl) at 391 m water depth. f Dissolved chloride concentration in pore fluids (Cl) at 404 m water depth. Symbols indicate data from cores retrieved at 391 m and 404 m water depth Model experiments were conducted for 391 and 404 m water depth under a wide range of initial gas hydrate saturations to determine the optimal scenario depicted in Figs. 2 and 5. The experiments commence at 8 ka, when the relative sea level was 12 m higher than present, and the model is forced by a decline in hydrostatic pressure determined from the relative sea-level change (Fig. 5a). It is initially assumed that a gas hydrate-bearing sediment layer is present at the base of the gas hydrate stability zone (Fig. 5c, d) and that the chloride excluded during hydrate accumulation has previously diffused away. The subsequent decline in hydrostatic pressure induces an upward movement of the hydrate stability zone and hydrate dissociation at its base.
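The leverage of such a relative sea-level change can be gauged by locating the depth at which the geothermal profile crosses the hydrate phase boundary, again using the empirical seawater fit from the earlier sketch (so the absolute depths differ from the model based on ref. 20, but the sense and magnitude of the effect are captured).

```python
import numpy as np

def ghsz_base_mbsf(water_depth_m, t_bw=2.5, grad=0.045, rho=1027.0, g=9.81):
    """Depth below seafloor where the sediment temperature profile crosses the
    hydrate phase boundary (same Dickens & Quinby-Hunt fit as in the earlier
    sketch; t_bw, grad, rho and g are illustrative assumptions)."""
    z = np.linspace(0.0, 60.0, 6001)                      # depth below seafloor (m)
    t_sed = t_bw + grad * z                               # geothermal profile
    p_mpa = rho * g * (water_depth_m + z) / 1.0e6         # hydrostatic pressure
    t_eq = 1.0 / (3.79e-3 - 2.83e-4 * np.log10(p_mpa)) - 273.15
    stable = t_sed < t_eq
    return z[stable][-1] if stable.any() else 0.0

# a 12 m relative sea-level fall at the 404 m site thins the stability zone:
print(ghsz_base_mbsf(404 + 12), ghsz_base_mbsf(404))      # ~47 -> ~35 mbsf
```

In this approximation, the 12 m regression shifts the base of the stability zone upward by roughly 10-12 m, eroding the hydrate layer from below exactly as described for Fig. 5c, d.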
Dissolved chloride concentrations decrease because dissociating hydrates release freshwater into the pore space (Fig. 5e, f). The upward displacement of the stability zone and the corresponding hydrate dissociation are mitigated by a coeval decline in salinity and temperature induced by the dissociation process itself, which consumes heat and releases freshwater into the pore space 32. Even though gas hydrates are stabilized by these negative feedback mechanisms, the dissociation front migrates upward and the hydrate layer is eroded from the bottom until the ongoing reduction in hydrostatic pressure induces its complete dissociation. The resulting chloride minimum is significantly broadened over time through molecular diffusion. The final dissolved sulfate profile is controlled by the upward diffusion of dissolved methane, which consumes sulfate via AOM 33 (Fig. 5b). Dissolved methane in deep sediments is saturated with respect to free gas because gaseous methane fills part of the pore space initially occupied by gas hydrates at the end of the simulation. In these model experiments, the ambient bottom water temperature is maintained at a constant value of 2.5 °C until 0.1 ka, when the temperature is allowed to rise exponentially to attain its modern value of 3.0 °C (Supplementary Fig. 7). Bottom water heating at the end of the simulation period is required to attain a final temperature profile consistent with the data (Fig. 2). However, we note that the heating applied over the last 100 years of the experiment resulted in no further dissociation because the hydrates had already fully decomposed through seabed uplift and pressure reduction prior to this final episode of warming. According to our transport-reaction model, most of the dissociated methane hydrate inventory (5483 mol m⁻²) was released as free gas into the water column (4944 mol m⁻²) at 391 m water depth. The remaining portion was dissolved in pore fluids and consumed by AOM. The calculated methane release corresponds to an annual mean flux of 0.6 mol m⁻² yr⁻¹ averaged over the model time period. This gas flux should be regarded as a maximum estimate because the model does not consider the dissolution of gas bubbles in surface sediments. During the summer of 2012, a mean methane gas bubble flux of 1–13 mol m⁻² yr⁻¹ was measured in our study area (Fig. 1) 11. These fluxes, which are fed by a sub-seabed methane gas reservoir 8, 10, exceed the fluxes induced by postglacial gas hydrate dissociation by an order of magnitude (Fig. 6). We propose that gas flow from the deep reservoir was largely blocked in the past by the gas hydrate layer that explains our observed chloride profiles. Within this 4 m thick layer, over 60% of the pore space was occupied by gas hydrate prior to the onset of dissociation (Fig. 5c). Such high saturations can reduce sediment permeability by up to two orders of magnitude 34, 35. Hence, geologically derived gas fluxes into the water column are higher on the shelf and upper continental slope but decrease in deeper waters, where hydrates are stable and provide a barrier to ongoing seepage 10, 36, 37. This down-slope trend has been attributed to the sealing of permeable sediments by gas hydrate formation 36. It has also been proposed that a portion of the gas flow is not permanently blocked but diverted up-slope until it reaches the up-dip limit of the hydrate stability zone, where it seeps into the ocean 7.
Fig. 6 Methane gas fluxes from sediments into the overlying bottom water. Modeled fluxes induced by gas hydrate dissociation at 391 and 404 m over the last eight thousand years are compared to the area-averaged range of methane gas fluxes measured at active seeps (vertical bar) in our study area (Fig. 1) 11 Our analysis of sediment cores off Western Svalbard unambiguously confirms that retreat of the Barents Sea ice sheet led to offshore gas hydrate dissociation, a process that has been widely speculated upon from modeling and geological observations 3, 5, 38, 39, 40, 41 but, until now, has remained unproven. Furthermore, combined modeling and geochemical analysis reveals that methane hydrates at the up-dip limit of the hydrate stability zone decomposed via postglacial isostatic rebound, in contrast to previous hypotheses that invoke anthropogenic bottom water warming 7, 9. Our data and model results also show that gas hydrates are not in themselves a significant source of gas release at the seabed. Rather, they act as a dynamic seal that blocks fluid-flow pathways for gas migration from deep geological reservoirs. Previous estimates of seafloor methane emissions from ongoing and future gas hydrate decomposition consider gas released from hydrates but ignore the potentially more significant increase in flux from underlying gas reservoirs upon hydrate dissociation 6, 23, 42. Hence, the impact of future seabed methane fluxes on global environmental change may yet be underestimated, and further research is required to quantify the flux from deep natural gas reservoirs amplified by deterioration of the overlying hydrate seal. Methods Analytics Sediment samples recovered by MeBo drilling and gravity coring were transferred into the vessel's cold lab, where a squeezer equipped with 0.2 µm filters was employed to separate pore fluids, applying argon pressures of 1–5 bar. Pore fluids were analyzed for chloride in the on-board laboratory applying argentometric titration. Sub-samples were taken and preserved for later on-shore analyses. Ion chromatography (IC) was employed to determine anion concentrations (Cl⁻, SO4²⁻), whereas inductively coupled plasma atomic emission spectroscopy (optical ICP) was used to determine dissolved metal concentrations (lithium, boron). Dissolved chloride was determined by titration and IC. These two independent methods produced nearly identical concentrations, deviating in most cases by <1%. Chloride concentrations reported hereafter refer to the mean of these two measurements. IC measurements revealed that some of the pore water samples were contaminated by the seawater employed as drilling fluid. Sulfate concentrations were used to correct for seawater admixture using the following two-component mixing equation: $$C_{\mathrm{PW}} = \frac{C_{\mathrm{M}} - C_{\mathrm{SW}} \cdot f_{\mathrm{SW}}}{1 - f_{\mathrm{SW}}}$$ (1) where C PW is the in situ pore water concentration, C M the concentration measured in samples affected by seawater admixture, C SW the concentration in seawater, and f SW the fraction of seawater in the sample. The seawater fraction (f SW) was calculated as: $$f_{\mathrm{SW}} = \frac{C_{\mathrm{S\text{-}M}}}{C_{\mathrm{S\text{-}SW}}}$$ (2) where C S-M is the sulfate concentration measured in seawater-affected samples and C S-SW is the sulfate concentration in the seawater used as drilling fluid (C S-SW = 28.93 mM).
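Equations (1) and (2) translate directly into code. The sketch below applies them to a hypothetical sample, assuming a nominal chloride concentration of 559 mM for the seawater drilling fluid; the 10 mM sulfate cut-off for discarding severely contaminated samples follows the criterion given in the next paragraph.

```python
def correct_pore_water(c_measured, so4_measured, c_seawater, so4_seawater=28.93):
    """Two-component mixing correction of Eqs. (1) and (2): strip the
    drilling-fluid (seawater) contribution from a measured pore-water
    concentration. Valid only below the sulfate penetration depth, where
    uncontaminated pore water is assumed to be sulfate-free. All values in mM."""
    if so4_measured > 10.0:              # severely contaminated sample: discard
        return None
    f_sw = so4_measured / so4_seawater                        # Eq. (2)
    return (c_measured - c_seawater * f_sw) / (1.0 - f_sw)    # Eq. (1)

# example: 540 mM chloride measured alongside 3 mM sulfate,
# assuming a nominal 559 mM chloride in the seawater drilling fluid
print(correct_pore_water(540.0, 3.0, 559.0))                  # -> ~537.8 mM
```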
This approach was applied to samples taken below the sulfate penetration depth only, because it assumes that the original pore water contains no sulfate. Figure 2 shows the corrected chloride concentrations. Severely contaminated samples containing more than 10 mM sulfate were discarded. About 5 ml of wet sediment were collected at each sampled sediment depth for the analysis of sediment porosity. Porosity was determined as volume of pore water per volume of wet sediment by weight difference before and after freeze-drying, assuming a dry density of 2.5 g cm −3 and a pore water density of 1.023 g cm −3 . Stable oxygen and hydrogen isotope ratios ( 18 O/ 16 O, 2 H/ 1 H) of water were analyzed by an automated equilibration unit in continuous flow mode (Gasbench 2) coupled to a Delta plus XP isotope ratio mass spectrometer (Thermo Fisher Scientific). Isotopic ratios are reported in δ-notation in parts per thousand (‰) relative to the VSMOW standard. Samples were measured in duplicate, and the reported value is the mean value. External reproducibility based on repeated analyses of a control sample was better than 0.1 and 1‰ for δ 18 O and δ 2 H, respectively. Stable carbon isotope ratios ( 13 C/ 12 C) of dissolved CH 4 ("headspace technique") and DIC were determined by GC-isotope ratio mass spectrometry. Stable carbon isotopic ratios are reported in δ-notation in ‰ relative to the V-PDB standard (mean of at least two analytical replicates). Standard deviations of triplicate stable isotope measurements were <0.5‰. Geochemical modeling A simple numerical model was set up to evaluate the pore water data. It uses concepts developed in previous transport-reaction models 35 , 43 . The model calculates fractions of bulk volume occupied by pore water, methane gas, methane hydrate, and sediment grains. It considers that gas hydrates and gas bubbles fill the pore space without supporting the grain structure, such that the porosity is not affected by hydrate dissociation. Steady state compaction is considered, and the resulting exponential down-core decrease in porosity is prescribed with measured porosity data. Moreover, it is assumed that the excess pressure and volume created by hydrate dissociation induce rapid gas bubble ascent and gas seepage at the sediment surface, as observed in the study area. Fluid flow is ignored, and gas transport is treated as a non-local process that removes gas from the sediment column directly into the overlying water column, to conserve the total sediment volume and maintain hydrostatic pressure in the sediment column. Phase densities change with sediment depth but are assumed to be constant over time. The model simulates temperature; the dissolved components chloride, sulfate and methane; the endothermic dissociation of gas hydrate into freshwater and free gas; the dissolution of gas hydrates and gas bubbles in ambient pore fluid; and AOM. Dissolved chloride is an inert tracer that is transported in the water phase by molecular diffusion only, whereas dissolved sulfate and methane are consumed by AOM.
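Before turning to the governing equations, the phase bookkeeping just described can be sketched as follows (a minimal illustration; the grid, porosity parameters and hydrate layer geometry are placeholder values, not the fitted ones used in the study):

```python
import numpy as np

# Illustrative 1-D grid: 0.1 m resolution in the upper 30 m, as in the model.
z = np.arange(0.0, 30.0, 0.1)              # depth below seafloor (m)

# Steady state compaction: exponential down-core porosity decrease,
# in practice prescribed from the measured porosity data.
phi = 0.45 + (0.70 - 0.45) * np.exp(-z / 30.0)

f_S = 1.0 - phi                            # sediment grains carry the structure
f_H = np.where((z >= 20.0) & (z < 24.0),   # e.g. a 4 m layer with ~60% of the
               0.6 * phi, 0.0)             # pore space occupied by hydrate
f_G = np.zeros_like(z)                     # initial gas saturation is zero
f_W = phi - f_H - f_G                      # water fills the remaining pores

# Hydrate and gas fill pore space without supporting the grains, so the
# four fractions always close the bulk volume balance:
assert np.allclose(f_S + f_H + f_G + f_W, 1.0)
```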
Mass balance equations for the three phases hydrate, gas, and pore water are formulated as: $$\frac{\partial\,\rho_{\mathrm{H}} \cdot f_{\mathrm{H}}}{\partial\,t} = -M_{\mathrm{H}} \cdot \left( R_{\mathrm{M}} + R_{\mathrm{HD}} \right)$$ (3) $$\frac{\partial\,\rho_{\mathrm{G}} \cdot f_{\mathrm{G}}}{\partial\,t} = M_{\mathrm{G}} \cdot \left( R_{\mathrm{M}} - R_{\mathrm{GD}} - R_{\mathrm{EX}} \right)$$ (4) $$\frac{\partial\,\rho_{\mathrm{W}} \cdot f_{\mathrm{W}}}{\partial\,t} = n_{\mathrm{HW}} \cdot M_{\mathrm{H2O}} \cdot R_{\mathrm{M}} + M_{\mathrm{H}} \cdot R_{\mathrm{HD}} + M_{\mathrm{G}} \cdot R_{\mathrm{GD}}$$ (5) where f i (i = H, G, W) are the fractions of bulk sediment volume occupied by methane hydrate (H), methane gas (G), and pore water (W), ρ i are the corresponding phase densities, M H , M G , and M H2O are the molar masses of methane hydrate ( M H = n HW M H2O + M G ), methane gas ( M G = 16 g mol −1 ) and water ( M H2O = 18 g mol −1 ), n HW is the number of water molecules per molecule of hydrate ( n HW = 6), R M is the molar rate of hydrate dissociation, R HD the rate of hydrate dissolution, R GD the methane gas dissolution rate, and R EX the rate of gas bubble expulsion. The mass balance for dissolved chloride is formulated as: $$\frac{\partial\,f_{\mathrm{W}} \cdot C_{\mathrm{Cl}}}{\partial\,t} = \frac{\partial}{\partial\,z}\left( f_{\mathrm{W}} \cdot D_{\mathrm{Cl}} \cdot \frac{\partial\,C_{\mathrm{Cl}}}{\partial\,z} \right)$$ (6) where C Cl is the concentration of dissolved chloride in the water phase and D Cl is the effective diffusion coefficient of dissolved chloride in the pore volume occupied by water. Archie's law is applied to consider the effects of tortuosity on molecular diffusion in porous media. Thus, D Cl is calculated as: $$D_{\mathrm{Cl}} = \frac{D_{\mathrm{MCl}}}{f_{\mathrm{W}}^{1-m}}$$ (7) where D MCl is the molecular diffusion coefficient of chloride in seawater and m takes a value of 2 44 . Mass balance equations for dissolved methane and sulfate are defined correspondingly: $$\frac{\partial\,f_{\mathrm{W}} \cdot C_{\mathrm{CH4}}}{\partial\,t} = \frac{\partial}{\partial\,z}\left( f_{\mathrm{W}} \cdot D_{\mathrm{CH4}} \cdot \frac{\partial\,C_{\mathrm{CH4}}}{\partial\,z} \right) + R_{\mathrm{GD}} + R_{\mathrm{HD}} - f_{\mathrm{W}} \cdot R_{\mathrm{AOM}}$$ (8) $$\frac{\partial\,f_{\mathrm{W}} \cdot C_{\mathrm{SO4}}}{\partial\,t} = \frac{\partial}{\partial\,z}\left( f_{\mathrm{W}} \cdot D_{\mathrm{SO4}} \cdot \frac{\partial\,C_{\mathrm{SO4}}}{\partial\,z} \right) - f_{\mathrm{W}} \cdot R_{\mathrm{AOM}}$$ (9) where R AOM is the rate of anaerobic methane oxidation, while D CH4 and D SO4 are the diffusion coefficients of methane and sulfate in pore water. The molecular diffusion coefficients are calculated as a function of sediment temperature 45 . Reaction rates and concentrations of dissolved tracers are given in molar units. Concentrations and rates of anaerobic methane oxidation ( R AOM ) refer to the pore water volume, while the rates of hydrate dissolution ( R HD ), gas bubble dissolution ( R GD ), hydrate dissociation ( R M ), and gas expulsion ( R EX ) are formulated with respect to the bulk sediment volume.
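For illustration, Eqs. (6) and (7) can be discretized with a simple explicit finite-difference update (a sketch only; the diffusivity below is a rough seawater value, whereas the model computes it from sediment temperature, and the explicit scheme requires dt ≲ dz²/2D for stability):

```python
import numpy as np

# Sketch of Eqs. (6)-(7): chloride diffusion with Archie's law (m = 2).
D_MCL = 0.034   # molecular diffusivity of Cl- in seawater (m^2 yr^-1), illustrative
M = 2           # Archie exponent used in the model

def effective_diffusivity(f_w: np.ndarray) -> np.ndarray:
    """Eq. (7): tortuosity-corrected diffusivity, D_Cl = D_MCl / f_W**(1 - m)."""
    return D_MCL / f_w ** (1 - M)

def diffusion_step(c, f_w, dz, dt):
    """One explicit step of Eq. (6) on a uniform grid (Dirichlet endpoints)."""
    d = effective_diffusivity(f_w)
    # Fluxes at cell interfaces: f_W * D_Cl * dC/dz (arithmetic mean at faces).
    flux = 0.5 * (f_w[1:] * d[1:] + f_w[:-1] * d[:-1]) * np.diff(c) / dz
    c_new = c.copy()
    c_new[1:-1] += dt / (f_w[1:-1] * dz) * np.diff(flux)
    return c_new
```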
The following energy equation is employed to simulate heat flow considering heat consumption during hydrate melting and multiphase conduction 43 , 46 : $$\frac{{\partial \,}}{{\partial \,t}}\left( {C_{\mathrm{V}} \cdot T} \right) = \frac{{\partial \,}}{{\partial \,z}}\left( {K_{\mathrm{0}} \cdot \frac{{\partial \,T}}{{\partial \,z}}} \right) - r_{\mathrm{T}} \cdot R_{\mathrm{M}}$$ (10) where T is temperature, C V is the volumetric thermal heat capacity of the solid-water-hydrate-gas mixture, K 0 is the effective thermal conductivity and r T is the energy consumption during hydrate dissociation (53.8 × 10 3 J mol −1 ). K 0 and C V are defined as: $$K_{\mathrm{0}} = K_{\mathrm{S}}^{f_{\mathrm{S}}} \cdot K_{\mathrm{H}}^{f_{\mathrm{H}}} \cdot K_{\mathrm{W}}^{f_{\mathrm{W}}} \cdot K_{\mathrm{G}}^{f_{\mathrm{G}}}$$ (11) $$C_{\mathrm{V}} = f_{\mathrm{S}} \cdot C_{\mathrm{S}} + f_{\mathrm{H}} \cdot C_{\mathrm{H}} + f_{\mathrm{W}} \cdot C_{\mathrm{W}} + f_{\mathrm{G}} \cdot C_{\mathrm{G}}$$ (12) where the thermal conductivities and heat capacities of the individual phases are assumed to be constant over depth and time ( C S = 0.78 J cm −3 K −1 , C W = 4.31 J cm −3 K −1 , C H = 1.82 J cm −3 K −1 , C G = 2.23 J cm −3 K −1 , K S = 1.58 × 10 6 J cm −1 K −1 yr −1 , K W = 1.83 × 10 5 J cm −1 K −1 yr −1 , K H = 1.61 × 10 5 J cm −1 K −1 yr −1 , K G = 1.01 × 10 4 J cm −1 K −1 yr −1 ). Molar rates of hydrate dissociation ( R M ), gas hydrate dissolution ( R HD ), and gas bubble dissolution ( R GD ) are defined as 46 , 47 : $$R_{\mathrm{M}} = k_{\mathrm{M}} \cdot f_{\mathrm{H}} \cdot \frac{{\rho _{\mathrm{H}}}}{{M_{\mathrm{H}}}} \cdot \mathrm{Max}\,\left[ {1 - \frac{{\it P_{{\mathrm{HY}}}}}{{\it P_{\mathrm{D}}}},\,0} \right]$$ (13) $$R_{\mathrm{HD}} = k_{{\mathrm{HD}}} \cdot f_{\mathrm{H}} \cdot \frac{{\rho _{\mathrm{H}}}}{{M_{\mathrm{H}}}} \cdot \mathrm{Max}\,\left[ {1 - \frac{{\it C_{{\mathrm{CH4}}}}}{{\it C_{{\mathrm{CH4 - H}}}}},\,0} \right]$$ (14) $$R_{{\mathrm{GD}}} = k_{{\mathrm{GD}}} \cdot f_{\mathrm{G}} \cdot \frac{{\rho _{\mathrm{G}}}}{{M_{\mathrm{G}}}} \cdot \mathrm{Max}\,\left[ {1 - \frac{{\it C_{{\mathrm{CH4}}}}}{{\it C_{{\mathrm{CH4 - G}}}}},\,0} \right]$$ (15) where k M , k HD , and k GD are kinetic constants (in yr −1 ), P D is the dissociation pressure of hydrate, C CH4-H is the concentration of dissolved methane at equilibrium with methane hydrate, and C CH4-G the concentration of dissolved methane at equilibrium with methane gas. According to these rate definitions, hydrates dissociate when P D exceeds the ambient hydrostatic pressure ( P HY ), whereas gas hydrate and gas dissolve when the ambient concentration of dissolved methane ( C CH4 ) is lower than the corresponding equilibrium value. P D is calculated for each time step as a function of changing sediment temperature and pore water salinity (dissolved chloride concentration) applying a thermodynamic model 20 , whereas P HY is continuously updated considering relative sea-level change. C CH4-H and C CH4-G are calculated as a function of sediment temperature, salinity, and hydrostatic pressure 20 while the ambient methane concentration is calculated solving the mass balance equation for dissolved methane. The kinetic constant for gas hydrate dissociation is set to a sufficiently large value ( k M ≥ 2 yr −1 ) such that the rate of endothermic hydrate dissociation is limited by heat transfer rather than the intrinsic kinetic properties of hydrate grains. 
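The rate laws in Eqs. (13)–(15) translate directly into code; in the sketch below, the thermodynamic inputs ( P D , C CH4-H , C CH4-G ), which the paper obtains from a separate thermodynamic model, are treated as given arguments, and all names are our own:

```python
# Sketch of the kinetic rate laws in Eqs. (13)-(15).
M_H, M_G = 124.0, 16.0   # molar masses: hydrate (6*18 + 16) and methane (g/mol)

def r_dissociation(k_M, f_H, rho_H, P_HY, P_D):
    """Eq. (13): hydrate dissociates only when P_D exceeds hydrostatic P_HY."""
    return k_M * f_H * (rho_H / M_H) * max(1.0 - P_HY / P_D, 0.0)

def r_hydrate_dissolution(k_HD, f_H, rho_H, C_CH4, C_CH4_H):
    """Eq. (14): dissolution while pore water is undersaturated vs. hydrate."""
    return k_HD * f_H * (rho_H / M_H) * max(1.0 - C_CH4 / C_CH4_H, 0.0)

def r_gas_dissolution(k_GD, f_G, rho_G, C_CH4, C_CH4_G):
    """Eq. (15): bubble dissolution while undersaturated vs. free gas."""
    return k_GD * f_G * (rho_G / M_G) * max(1.0 - C_CH4 / C_CH4_G, 0.0)
```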
The kinetic constants for hydrate and gas dissolution employed in the model ( k HD ≥ 1 yr −1 , k GD ≥ 1 yr −1 ) ensure that dissolved methane attains and maintains equilibrium with gas hydrate and gas in sediment layers where these phases are present. The rate of gas expulsion ( R EX ) is governed by the following equation: $$R_{{\mathrm{EX}}} = k_{{\mathrm{EX}}} \cdot \frac{{\rho _{\mathrm{G}}}}{{M_{\mathrm{G}}}} \cdot \left( {f_{\mathrm{S}} + f_{\mathrm{H}} + f_{\mathrm{G}} + f_{\mathrm{W}} - {\mathrm{1}}} \right)$$ (16) The kinetic constant k EX is set to a sufficiently large value (≥1 yr −1 ) such that excess gas is expelled from the sediment and the total volume of the sediment column is conserved. Methane is oxidized by microbial consortia using sulfate as terminal electron acceptor 48 : $$\mathrm{CH}_4 + \mathrm{SO}_4^{2 - } \Rightarrow \mathrm{HCO}_3^ - + \mathrm{HS}^ - + \mathrm{H}_2{\mathrm O}$$ (17) The kinetic equation for this microbial reaction is defined as 49 : $$R_{{\mathrm{AOM}}} = k_{{\mathrm{AOM}}} \cdot C_{{\mathrm{CH4}}} \cdot \frac{{C_{{\mathrm{SO4}}}}}{{C_{{\mathrm{SO4}}} + K_{{\mathrm{SO4}}}}}$$ (18) where k AOM is a kinetic constant and K SO4 is a Monod constant ( K SO4 = 1 mM). The AOM rate is controlled by the concentration of dissolved methane, whereas dissolved sulfate is only rate-limiting when the sulfate concentration in the pore water is smaller than 1 mM 49 . The value chosen for the kinetic constant ( k AOM ≥ 1 yr −1 ) inhibits leakage of dissolved methane through sulfate-bearing surface sediments. Initial gas hydrate contents were defined according to the dissolved chloride depletion observed in the pore water data, whereas initial gas saturations were set to zero. The initial temperature profile was defined applying a steady state heat flow model that considers compaction and the corresponding increase in thermal conductivity with sediment depth. Initial concentrations of dissolved chloride and sulfate were set to ambient bottom water values while methane concentrations were set to equilibrium values with respect to methane gas. The upper boundary of the model column is located at the sediment-water interface while the lower boundary was positioned at 100 mbsf. Hydrate, gas, and water saturations and dissolved tracer concentrations were maintained at constant values at the upper and lower boundary throughout the simulation. A constant gradient, corresponding to the local geothermal gradient, was employed as lower boundary condition for temperature. Hydrostatic pressure ( P HY ) was reduced considering relative sea-level change. A corresponding P HY change was applied over the entire model column. Gas hydrates present in the model column were destabilized when the ambient dissociation pressure ( P D ) exceeded the applied P HY value. Bottom water temperature was allowed to increase over the last 100 years of the simulation and the heat was transferred into the sediment column employing the heat flow model. The temperature increase induced a rise in P D that led to gas hydrate dissociation if ambient P HY was smaller than the resulting P D . The model was set up in MATHEMATICA and solved using finite differences and the method-of-lines approach as implemented in MATHEMATICA’s solver for partial differential equations. The model has a resolution of 0.1 m in the top 30 m and a coarser resolution below. Mass balance calculations showed that masses and energy were conserved within an error smaller than 0.1%. 
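The solution strategy described above can be illustrated with a reduced method-of-lines example for the coupled methane–sulfate balances (Eqs. (8), (9) and (18)). Note that the published model was implemented in MATHEMATICA; this SciPy sketch uses illustrative parameters, a constant water fraction (so f W cancels from the balances) and omits the hydrate and gas source terms:

```python
import numpy as np
from scipy.integrate import solve_ivp

dz, n = 0.1, 300                 # 0.1 m grid over the top 30 m, as in the model
D_CH4, D_SO4 = 0.03, 0.02        # effective diffusivities (m^2 yr^-1), illustrative
k_AOM, K_SO4 = 1.0, 1.0          # kinetic constant (yr^-1), Monod constant (mM)
CH4_BOT, SO4_TOP = 60.0, 28.9    # Dirichlet boundaries (mM): gas-saturated base,
                                 # bottom water sulfate at the seafloor

def laplacian(c, c_top, c_bot):
    """Second derivative with fixed boundary concentrations."""
    padded = np.concatenate(([c_top], c, [c_bot]))
    return (padded[2:] - 2.0 * c + padded[:-2]) / dz**2

def rhs(t, y):
    ch4, so4 = y[:n], y[n:]
    # Eq. (18): AOM is first order in methane, Monod-limited in sulfate.
    aom = k_AOM * ch4 * so4 / (so4 + K_SO4)
    dch4 = D_CH4 * laplacian(ch4, 0.0, CH4_BOT) - aom
    dso4 = D_SO4 * laplacian(so4, SO4_TOP, 0.0) - aom
    return np.concatenate([dch4, dso4])

y0 = np.concatenate([np.zeros(n), np.full(n, SO4_TOP)])
sol = solve_ivp(rhs, (0.0, 1000.0), y0, method="BDF")  # stiff-safe, 1 kyr run
```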
Ice sheet modeling The evolution of the Barents Sea ice sheet and associated isostatic recovery of the Barents Sea continental shelf from the Last Glacial Maximum to present day is derived from a suite of model experiments carried out to reconstruct dynamics of the Eurasian ice sheet complex 30 , 50 , 51 . The thermomechanical ice model used is based on a higher-order solution to the equations governing ice sheet flow and has been verified against benchmark experiments for higher-order models 52 , tested against 3D flow observations at an alpine glacier 53 and applied to a broad variety of past and present glacier and ice sheet scenarios to investigate their response to environmental and internal forcing 54 , 55 , 56 , 57 . The model is coupled to climate using a degree-day parameterization modified to include the effects of high latitude sublimation under extreme continental conditions 50 , 58 , 59 , 60 , 61 . Model experiments are integrated through time on a finite-difference grid with a resolution of 10 km, with climate forcing imposed by perturbations in the NGRIP paleo isotope curve and a global eustatic sea-level curve used to determine ice flotation and calving losses at marine-terminating margins 31 . Initial ice extent, thickness and the loaded topography are inherited from a Mid-Weichselian (Marine Isotope Stage 4) experiment, allowing sufficient spin-up time for the ice sheet and isostatic loading to attain a transient equilibrium with the forcing climate at the start of the Late Weichselian experiments at 37 ka BP. Ice thickness, extent, and the timing of advance and retreat have been constrained extensively throughout the ice complex by a diverse suite of empirical data, including geomorphological, chronological, and geophysical datasets 62 , 63 , 64 , 65 , 66 , honoring the broad interpretations of ice sheet history inferred from the geological record. A relatively thick lithosphere of 120 km is predicted throughout the region, with a relative insensitivity to lower mantle viscosity observed at all sites 30 . Isostatic loading is calculated within the ice flow model using the commonly implemented elastic lithosphere/relaxed asthenosphere scheme 67 , identified as a reasonable approach in the absence of a full spherical earth model (Supplementary Fig. 6 ). Relative sea-level was calculated as the difference between eustatic sea-level 31 and seabed elevation. Over 10–8 ka, the rapid rise in eustatic sea-level clearly outpaced the slow postglacial rebound (Supplementary Fig. 6 ). These trends were reversed after 8 ka, when the global sea-level rise slowed down drastically while the seabed continued to rise. Hence, relative sea-level reached a maximum at 8 ka. Data availability All relevant data are available from the authors. This includes the pore water (KW), heat flow (MR), porosity and methane carbon isotope (GB, TP), and DIC-carbon isotope (MET) data, as well as the results and code of the transport-reaction model (KW) and the results of the ice sheet model (HP). Positions, descriptions, and photographs of cores are made publicly available through the PANGAEA information system sustained by the World Data Center for Marine Environmental Sciences (WDCMARE). | For years, methane emissions from the seabed have been observed in the Arctic Ocean off Spitsbergen. Researchers have proposed that the warming of seawater by climate change is responsible for the release of methane, but this has not been confirmed.
Now, an international team reports that post-glacial uplift is the most likely cause of methane hydrate breakdown. The study is published today in the international journal Nature Communications. Methane hydrates, also known as flammable ice, occur in many regions of the oceans. But this product of methane and water only forms a solid compound under high pressure and at cold temperatures. If the pressure is too low or the temperature is too high, the hydrates decompose and the methane is released as gas from the sea floor into the water column. Spitsbergen has been experiencing severe outgassing for several years. Does the methane originate from decomposed methane hydrates? What is the cause of the dissociation of the hydrates? Warming due to climate change, or other natural processes? An international team of scientists has now been able to answer this question. "Our investigations show that uplift of the sea floor in this region, caused by the melting of the ice masses since the end of the last ice age, is probably the reason for the dissolution of methane hydrate, which has already been ongoing for several thousand years," explains Prof. Dr. Klaus Wallmann, first author of the study from GEOMAR Helmholtz Centre for Ocean Research Kiel. "The region has risen more than the sea level has risen, causing a pressure relief, so that the methane hydrates dissociate at the stability limit," Wallmann continues. For their investigations, the scientists carried out expedition MSM 57 with the German research vessel Maria S. Merian, led by the Center for Marine Environmental Sciences at the University of Bremen. The mobile drilling rig MARUM-MeBo70 was also used for this study. "With this special device, we were for the first time able to obtain long sediment cores in this area," explains chief scientist Prof. Dr. Gerhard Bohrmann from MARUM. "In these cores, we found significant amounts of freshwater that originate from decomposed hydrates," Bohrmann continues. The scientists were able to prove that this process started 8,000 years ago and therefore cannot be attributed to global warming of the past decades. The rotary drill bit of the MeBo70 penetrated a layer of carbonate formed in soft sediments. Bright aragonite cements typically fill the cavities of the seep carbonates. Credit: MARUM - Zentrum für Marine Umweltwissenschaften, Universität Bremen; G. Bohrmann In addition to the geochemical analyses, results of a model simulation of ice distribution in the Arctic since the last ice age were used. "The results show that the rate of isostatic uplift at our drill sites after melting exceeded the eustatic sea-level rise throughout the post-glacial period," explains Prof. Bohrmann. "In other words, the land rose faster and more strongly than the sea level, so that the pressure in the hydrate reservoir decreased and the hydrates finally became unstable," adds Prof. Wallmann. Thus, the scientists argue that the dissociation of hydrates can be explained by this process, especially since the warming of sea water in the deep layers of the ocean is still small. The investigations off Spitsbergen thus show a methane release that was not caused by climate warming. Further research at other locations is necessary to investigate whether this also applies to other areas of the Arctic or even to mid-latitudes. | 10.1038/s41467-017-02550-9
Biology | Novel histone modifications couple metabolism to gene activity | Adam F Kebede et al. Histone propionylation is a mark of active chromatin, Nature Structural & Molecular Biology (2017). DOI: 10.1038/nsmb.3490 Journal information: Nature Structural & Molecular Biology | http://dx.doi.org/10.1038/nsmb.3490 | https://phys.org/news/2017-10-histone-modifications-couple-metabolism-gene.html | Abstract Histones are highly covalently modified, but the functions of many of these modifications remain unknown. In particular, it is unclear how histone marks are coupled to cellular metabolism and how this coupling affects chromatin architecture. We identified histone H3 Lys14 (H3K14) as a site of propionylation and butyrylation in vivo and carried out the first systematic characterization of histone propionylation. We found that H3K14pr and H3K14bu are deposited by histone acetyltransferases, are preferentially enriched at promoters of active genes and are recognized by acylation-state-specific reader proteins. In agreement with these findings, propionyl-CoA was able to stimulate transcription in an in vitro transcription system. Notably, genome-wide H3 acylation profiles were redefined following changes to the metabolic state, and deletion of the metabolic enzyme propionyl-CoA carboxylase altered global histone propionylation levels. We propose that histone propionylation, acetylation and butyrylation may act in combination to promote high transcriptional output and to couple cellular metabolism with chromatin structure and function. Main Histones are modified by a variety of post-translational modifications (PTMs) that cooperate to regulate chromatin structure and function 1 . These PTMs are implicated in the regulation of gene expression, higher-order chromatin structure and response to external stimuli 2 , 3 , 4 . Although histone acetylation and methylation are relatively well studied, multiple previously unknown types of histone PTMs have recently been identified 1 , but their functions are largely unknown. A subset of these new PTMs comprises lysine acylations, including propionylation (Kpr) 5 , butyrylation (Kbu) 5 , crotonylation (Kcr) 6 , hydroxyisobutyrylation (Khib) 7 , succinylation (Ksuc) and malonylation (Kmal) 8 . Notably, most of these newly discovered histone acylation sites overlap with known histone acetylation (Kac) sites. This observation prompts questions concerning the functional overlap between histone Kac and other short-chain acylations 9 . Among the newly discovered histone acylations, histone lysine propionylation (three-carbon molecule: C3) and butyrylation (C4) were the first reported, and they are, in their chemical structures and properties, most similar to acetylation (C2). Indeed, by adding additional carbon atoms to the lysine side chain, they may be regarded as linear analogs to tetrahedral mono-, di- and tri-methylation 5 , 10 . Early reports have suggested that some Kpr and Kbu events may be catalyzed in vitro by histone acetyltransferases (HATs), but in vivo evidence for this remains lacking 11 . The activity of many histone-modifying enzymes is regulated by the intracellular concentrations of metabolites, such as acetyl-CoA, because the metabolites serve as cofactors for the relevant enzyme 12 , 13 . Coupling of enzyme activity to cofactor availability has the potential to couple chromatin states to metabolism.
Propionyl-CoA, the putative cosubstrate for histone propionylation, originates from oxidation of odd-chain fatty acids and catabolism of branched-chain amino acids. Butyryl-CoA is also an intermediate of fatty acid beta-oxidation 11 . Thus, histone short-chain acylations are potential candidates for linking the metabolic state of the cell with chromatin architecture. We characterized histone H3 lysine propionylation and butyrylation, using site-specific antibodies that we developed. We observed that H3K14 propionylation, as well as butyrylation, was most enriched at the promoters of the most highly expressed genes and, together with acetylation, was able to promote a higher transcriptional output. In agreement with this observation, we found that the most active genes in mouse livers had multiple acylation marks. Notably, inducing changes to the metabolic state in a defined mouse model drove changes in the liver histone acylation profiles. In accordance with acylation states being important for chromatin function, we found that the BAF chromatin-remodeling complex recognizes specific acylation marks. Moreover, using an in vitro transcription system, we observed that propionyl-CoA was able to stimulate transcription from a chromatinized template. Taken together, our results suggest that H3K14 propionylation is a previously unknown metabolite-directed histone PTM that defines the transcriptionally active chromatin state. Results Identification of histone H3K14 propionylation and butyrylation To study the functions of Kpr and Kbu in histones, we first aimed to identify which lysines were modified in vivo . For this, we performed mass spectrometry analysis of HeLa histones and identified a total of five Kpr and seven Kbu sites on the tails of histones H3 and H4 ( Fig. 1a and Supplementary Fig. 1a–d ). Notably, all of the identified sites are also known sites of acetylation (Kac) 14 ( Fig. 1a ). Figure 1: Histone lysine propionylation and butyrylation sites. ( a ) The N-terminal sequence of histones H3 and H4 with propionylation (pr, green) and butyrylation (bu, blue) sites identified by mass spectrometry in HeLa cells in this study (details and spectra in Supplementary Fig. 1 ). Known acetylation sites in the N-terminal tails are also shown (ac, red). ( b , c ) Specificity of anti-H3K14pr ( b ) and anti-H3K14bu ( c ) antibodies. Dot blot assays with the indicated affinity-purified antibodies and serial dilutions of H3 (aa10–19) peptides that were unmodified (un), acetylated (ac), propionylated (pr) or butyrylated (bu) at lysine 14. Details of peptide sequences and antibodies used in this study are available in Supplementary Tables 1 and 2 . ( d ) Detection of full-length and trypsin-digested nucleosomes with anti-H3K14pr, anti-H3K14bu, anti-H3K9ac and anti-H3 antibodies. Mononucleosomes from HeLa cells were incubated with no (−) or increasing amounts of trypsin to cleave the histone tails and immunoblotted as indicated for H3K14pr, H3K14bu, H3K9ac and H3. Ponceau staining shows full-length histones and the trypsin-digestion products. As models to study histone Kpr and Kbu, we chose to focus on H3K14, because acetylation at H3K14 has been linked with transcriptional activation 14 . To study H3K14pr and H3K14bu in vivo , we attempted to raise specific antibodies. After affinity purification, we obtained antibodies specific for both acylations.
In peptide dot blot assays, the H3K14pr and H3K14bu antibodies recognized the corresponding immunizing peptides with high specificity ( Fig. 1b,c ). The antibodies specifically recognized histone H3 in immunoblots ( Supplementary Fig. 2a ), and signals were competed away by the corresponding immunizing peptides ( Supplementary Fig. 2b,c ). Next we performed a limited trypsin digestion of native nucleosomes, which removes the histone tails while leaving the DNA-protected core regions largely intact, followed by immunoblotting. We observed a loss of signal for H3K14pr and H3K14bu, as well as for other tail modifications ( Fig. 1d ), confirming that these antibodies indeed recognize H3 tail modifications. To rule out cross-reactivity with other potential H3 tail acylation sites, we performed additional peptide dot blots on a collection of peptides carrying acetylation, propionylation and butyrylation at major N-terminal lysines of histone H3 (K9, K18, K23 and K27). We did not detect recognition of these modifications ( Supplementary Fig. 2d,e ). Together, these results strongly suggest that our antibodies were indeed specific for H3K14pr and H3K14bu. Notably, we also observed an enhanced signal in H3K14pr and H3K14bu when cells were treated with sodium butyrate (NaBu), an inhibitor of class I and class II histone deacetylases 15 (HDACs; Supplementary Fig. 2a ). However, the precise removal mechanisms of these modifications remain unknown. We next confirmed the presence of H3K14pr and H3K14bu in different human and mouse cell lines, as well as in a variety of mouse tissues, including liver, kidney and testis ( Supplementary Fig. 2f,g ). In immunofluorescence, H3K14bu also demonstrated a uniform, nuclear dotted pattern that was excluded from DAPI-dense heterochromatic regions and nucleoli ( Supplementary Fig. 2h ), and was reminiscent of active histone marks 4 , 16 , 17 . HATs propionylate and butyrylate histones Previous studies performed in vitro reported that some HATs, mainly p300 and CBP, also possess propionyltransferase and butyryltransferase activities 5 , 18 , 19 , 20 , 21 . Indeed, our in vitro histone acylation assays using radiolabeled acyl-CoAs revealed the general histone propionylation and butyrylation activity of not only p300 but also the GNAT family HATs, GCN5 and PCAF ( Fig. 2a ). Moreover, these histone acylation assays showed that both GNAT family enzymes are capable of propionylating and butyrylating histone H3 at K14, in addition to their known acetylation activity ( Supplementary Fig. 3a ). This activity is consistent with findings from a previous report showing in vitro specificity of PCAF for H3K14 in peptide acylation assays 21 . To demonstrate the enzyme-dependent accumulation of Kpr, we performed a time course in vitro acylation assay on histone octamers and observed a gradual increase in H3K14pr signal only in the presence of GCN5 ( Supplementary Fig. 3b,c ). These results suggest that histone propionylation and butyrylation are, under these conditions, enzyme catalyzed. Figure 2: HATs can have propionylation and butyrylation activities. ( a ) In vitro HAT assays performed with calf thymus histones as substrate and tritium ( 3 H)-labeled acetyl-, propionyl- or butyryl-CoA as acyl donors. Autoradiogram is shown with Ponceau staining as loading control. ( b ) Immunoblotting for H3K9ac, H3K14pr and H3K14bu following siRNA knockdown of GCN5 and/or PCAF in HeLa cells. A representative experiment of three independent experiments is shown. 
Uncropped blot images are shown in Supplementary Data Set 1 . ( c ) Immunoblotting for H3K18ac, H3K14pr and H3K14bu following siRNA knockdown of p300 and/or CBP in HeLa cells. ( d ) ChIP–qPCR analysis of H3K9ac and H3K14pr enrichments in HeLa cells transiently transfected with control siRNA ( siCTRL ) or siRNAs targeting both GCN5 and PCAF ( siGCN5 + PCAF ). Mean percentage of input is shown, and error bars indicate s.d. of three technical replicates. Primers for the indicated genes were all near TSSs. 'Gene desert' is a control region on chromosome 2 ( Supplementary Table 3 ). To identify which enzymes regulate propionylation and butyrylation of H3K14 in vivo , we used a candidate approach based on our in vitro data. We used small interfering RNA (siRNA) to deplete GCN5, PCAF, p300 and CBP in cells. Immunoblot analysis revealed that the combined knockdown of GCN5 (also known as KAT2A ) and PCAF (also known as KAT2B ) in HeLa cells led to a substantial ( ∼ 40%) reduction in H3K14pr, but not H3K14bu, levels ( Fig. 2b and Supplementary Fig. 3d,f,i ). The reduction in H3K14pr was comparable to that of H3K9ac, a known target of GCN5 and PCAF activity ( Fig. 2b and Supplementary Fig. 3f ) 22 . Following p300 (official symbol EP300 ) and CBP (official symbol CREBBP ) double knockdown in HeLa cells, we observed a smaller effect ( ∼ 20% reduction) on the global levels of H3K14pr and H3K14bu ( Fig. 2c and Supplementary Fig. 3e,g,h ). To investigate the effects of GCN5 and PCAF on H3K14pr levels at specific genomic regions, we performed chromatin immunoprecipitation followed by quantitative PCR (ChIP–qPCR) at selected target promoter regions following siRNA-mediated knockdown of both enzymes ( Supplementary Fig. 3j,k ). As expected, we observed reduced enrichment of H3K9ac when both GCN5 and PCAF were depleted at all active target promoters ( B-ACTIN (also known as ACTB ), CCDC66 , SIRT6 and ALKBH6 ) tested 23 ( Fig. 2d ). Notably, enrichment of H3K14pr was also decreased at all of the active gene promoters that we tested, indicating that GCN5 and PCAF indeed contribute to H3K14pr at promoter regions. Taken together, these findings suggest that HATs have acyl-transferase activity both in vitro and in vivo . Histone propionylation is regulated by a metabolic enzyme Propionyl-CoA and butyryl-CoA, the putative cosubstrates of Kpr and Kbu, respectively, are intermediates in fatty acid catabolism 11 ( Fig. 3a ). Thus, changes in their levels as a result of metabolic activity may affect levels of histone propionylation and butyrylation, thereby linking cellular metabolism to chromatin acylation states. It is conceivable that differential acylation activity could be a result of the cellular concentrations of acetyl-, propionyl- and butyryl-CoA. Previous studies have shown that acetyl-, propionyl- and butyryl-CoA are found at an approximate ratio of 4:2:1 in mouse livers, and this ratio can change following mutations in metabolic enzymes 24 , 25 . Figure 3: Histone propionylation is sensitive to propionyl-CoA metabolism. ( a ) Metabolic pathways producing and consuming propionyl-CoA and butyryl-CoA. Both can be derived from fatty acid beta-oxidation. Propionyl-CoA is an end product of odd-chain fatty acid oxidation and can be carboxylated by the PCC complex. Butyryl-CoA is an intermediate in the oxidation of even-chain fatty acids that is further broken down into acetyl-CoA, with the first step being catalyzed by ACADS.
( b ) Immunoblotting with anti-pan-propionyllysine (pan-Kpr) and anti-PCCA antibodies on lysates prepared from livers of wild type (WT), propionyl-CoA carboxylase alpha subunit-deficient ( Pcca −/− ) and gene-therapy-treated ( Pcca −/− treated) mice in which PCCA had been reintroduced. Note that Pcca −/− mice show a strong increase in propionyl-CoA levels 26 . Tubulin is a loading control. ( c ) Immunoblotting with anti-pan-butyryllysine (pan-Kbu) and anti-ACADS antibodies, using whole cell or tissue lysates from either mouse embryonic fibroblasts (MEFs) or livers. ACADS was depleted by siRNA (in MEFs, siAcads ) or knocked out (in liver, Acads −/− ). Immunoblotting with an antibody to tubulin was used as a loading control. Acads −/− livers have elevated butyryl-CoA levels 24 , 25 . ( d ) Immunoblotting with antibody to H3K14pr on acid extracts prepared from livers of WT, Pcca −/− and Pcca −/− treated 26 mice (quantification in Supplementary Fig. 3l ). A representative experiment of three independent livers is shown. Uncropped blot images are shown in Supplementary Data Set 1 . ( e ) Immunoblot using an antibody to H3K14bu on acid extracts prepared from WT and Acads −/− mouse livers (quantification in Supplementary Fig. 3m ). Ponceau staining is shown as a loading control in d and e . We next explored a potential link between H3K14 propionylation and/or butyrylation and metabolic pathways and focused on two candidate enzymes, propionyl-CoA carboxylase (PCC) and acyl-CoA dehydrogenase short-chain (ACADS), both of which degrade their respective CoAs ( Fig. 3a ). Depletion of PCC and ACADS can lead to a global increase in propionyl-CoA and butyryl-CoA concentrations 24 , 25 , respectively, and a consequent increase in general protein propionylation and butyrylation ( Fig. 3b,c ). Notably, histones isolated from livers of Pcca (alpha subunit of PCC) 26 knockout mice displayed higher levels of H3K14pr ( ∼ 1.3-fold) than H3 from livers of control mice ( Fig. 3d and Supplementary Fig. 3l ), as well as a strong increase in general protein propionylation, as detected by pan-specific anti-Kpr ( Fig. 3b ). In agreement with this finding, levels of H3K14pr and of general propionylation in livers of Pcca knockout mice reverted to wild-type levels when human PCCA was expressed in knockout mice in a gene therapy model 26 ( Fig. 3b,d and Supplementary Fig. 3l ). Notably, we found that H3K14bu levels remained largely unchanged in livers from Acads knockout mice despite increased CoA levels and an increase in general protein butyrylation 25 ( Fig. 3c,e and Supplementary Fig. 3m ). Genomic localization of H3K14pr and H3K14bu in the mouse liver We next addressed the genome-wide distribution of histone Kpr and Kbu. Given that our hypothesis was that histone Kpr and Kbu might be metabolically regulated, we aimed to carry out chromatin immunoprecipitation sequencing (ChIP–seq) experiments in a metabolically relevant tissue, the liver, where we could challenge mice by fasting. First, we confirmed by mass spectrometric analysis of acid-extracted histones that H3K14pr and H3K14bu were present in mouse liver ( Fig. 4a and Supplementary Fig. 4 ). We also found that, in H3, Lys9, Lys18 and Lys23 were propionylated and/or butyrylated in mouse livers ( Fig. 4a and data not shown). Next, we performed ChIP–seq experiments using chromatin isolated from livers of mice experiencing different metabolic challenges: either refed (mice fasted for 12 h and fed again for 4 h) or fasted for 48 h.
In addition, we performed ChIP–seq to determine the distribution of histone acetylation so that we could compare it to the distribution of butyrylation and propionylation. However, all of the antibodies to H3K14ac that we tested cross-reacted with other H3K14 acylation states (data not shown). Thus, we chose H3K9ac as the prototypical example of a histone acetylation site linked to active genomic regions. Previous studies have shown that genome-wide profiles of H3K9ac and H3K14ac extensively overlap 27 . Figure 4: Histone acylations are specifically enriched at active promoter regions. ( a ) A scheme depicting histone H3 with the acetylation (ac, red), propionylation (pr, green) and butyrylation (bu, blue) sites identified by mass spectrometry in the N-terminal tail of H3 from mouse liver tissue (details and spectra in Supplementary Fig. 4 ). ( b ) Genome-browser snapshot of mouse chromosome 9, showing depth-normalized ChIP–seq tracks for Input, H3K9ac, H3K14pr and H3K14bu from livers of mice refed or fasted 48 h. ( c ) Genomic annotation of H3K9ac, H3K14pr and H3K14bu peaks. ( d ) Venn diagrams showing overlap between histone-acylation-enriched genes in livers from refed (left) and 48 h fasted (right) mice. H3K14pr and H3K14bu displayed overall profiles reminiscent of those of other 'active' histone marks ( Fig. 4b,c ), such as H3K9ac and H3K4me3 (ref. 28 ). Annotation of H3K14pr- and H3K14bu-enriched regions revealed that they were highly enriched at promoter and transcription start site (TSS) regions (defined as −1 kb to +100 bp), where ∼ 30–35% of the H3K14pr and H3K14bu peaks mapped, as compared with 22% of H3K9ac peaks ( Fig. 4c ). Notably, in livers from refed mice, 87% of both H3K14pr-marked (3,790 of 4,341) and H3K14bu-marked (2,814 of 3,224) genes were also enriched for H3K9ac ( Fig. 4d ). Similarly, in livers from fasted mice, we found that 89% (458 of 515) of H3K14pr targets and 87% (631 of 724) of H3K14bu targets were enriched for H3K9ac ( Fig. 4d ). We found that this high degree of overlap was also conserved when we compared our data with recently published 29 H3K4me3 data from livers of mice fed ad libitum or fasted ( Supplementary Fig. 5a ). Considering that H3K9ac, H3K14pr and H3K14bu overlapped extensively genome wide ( Fig. 4b–d ), we performed immunoprecipitation experiments with mononucleosomes purified from HeLa cells to test whether these marks could indeed co-occur in the same nucleosomes. We found that H3K14pr or H3K14bu nucleosomes were enriched for H3K9ac and H3K4me3 ( Supplementary Fig. 5b ). In agreement with this result, our mass spectrometric analysis of HeLa histones revealed H3K9ac on the same peptides together with H3K14pr or H3K14bu ( Supplementary Fig. 1c,d ). We conclude that histone acylations can co-occur on chromatin. Despite the high overlap between H3K14pr, H3K14bu and H3K9ac target genes, we still found some genes ( ∼ 11–13%) in which we detected only H3K14pr or H3K14bu enrichments above our threshold ( Fig. 4d ). Gene ontology analysis of such genes from the refed state revealed that these genes may be associated with several important biological processes ( Supplementary Fig. 5c,d ). H3K14pr and H3K14bu are associated with transcriptionally active chromatin To gain insight into the roles of histone propionylation and butyrylation in gene expression, we performed RNA-seq, using RNA isolated from mouse livers under the same two metabolic states as described above (that is, refed and 48 h fasted).
We observed the same metabolically induced changes in gene expression as those published previously (data not shown) 29 . In livers from both refed and fasted mice, we found enrichment of H3K14pr and H3K14bu across TSSs that correlated with RNA expression levels ( Fig. 5a ), thus suggesting that the enrichments are indicative of efficient gene expression. This result was also consistent with recent reports of enrichment of other histone acylations, such as histone crotonylation (Kcr) and β-hydroxybutyrylation (Kbhb), at active genes 6 , 30 . Moreover, a comparison of our data with published data for H3K4me3 (ref. 29 ), H3K9bhb 29 and RNA polymerase II (ser5p) 31 revealed a strong positive correlation ( Supplementary Fig. 5e ). Figure 5: Correlation between histone acylations and gene expression. ( a ) Input-normalized histone acylation profiles ±2 kb from the TSSs of genes grouped into four quartiles on the basis of level of expression (obtained from RNA-seq data): Q1, least expressed genes; Q2 and Q3, intermediately expressed genes; Q4, highly expressed genes. IP, immunoprecipitate. ( b ) Expression levels in reads per kilobase pair per million mapped reads (RPKM) of genes that are enriched in the indicated histone acylation mark(s) ±1 kb around their TSSs in livers from refed mice. '+' indicates the presence of a mark, and '–' indicates its absence. '+/−' is used to indicate that either H3K14pr or H3K14bu is present, but not both. P values were calculated using the Wilcoxon rank sum test (two-tailed); ** P = 1.42 × 10 −7 , *** P = 7.93 × 10 −79 ; **** P = 4.25 × 10 −100 . ( c ) Expression levels of genes that are enriched in the indicated histone acylation mark(s) ±1 kb around their TSSs in livers from fasted mice. *** P = 4.08 × 10 −20 ; **** P = 1.31 × 10 −10 ; ns, not significant. The number of genes plotted is indicated. For box plots in b and c , the center line is the median, and the central rectangle spans the first quartile (Q1) to the third quartile (Q3) (IQR, interquartile range). The upper whisker extends from the hinge to the largest value no further than 1.5× the IQR from the hinge (Q3 + 1.5× IQR). The lower whisker extends from the hinge to the smallest value no further than 1.5× the IQR from the hinge (Q1 − 1.5× IQR). Outliers are not shown on the plots. ( d – f ) Top Gene Ontology biological-process terms associated with a gain of H3K9ac ( d ), H3K14pr ( e ) and H3K14bu ( f ) in livers from fasted mice, determined with GREAT 33 . Next, we examined the links between gene expression and the co-occurrence of different acylations in more detail. The most active genes in both the refed and the fasted conditions displayed enrichments above threshold of multiple acylations (H3K9ac, H3K14pr and/or H3K14bu) within 1 kb of their TSS ( Fig. 5b,c ). In contrast, genes for which we detected enrichment of only one acylation (that is, H3K9ac) above threshold showed lower expression levels ( Fig. 5b,c ). We analyzed whether 'triple-acylated' genes were associated with specific biological processes and found that, in both the refed and fasted states, 'transcription' and 'covalent chromatin modification' were among the top ten significantly enriched terms ( Supplementary Fig. 5f,g ). To evaluate the changes in histone acylation between the refed and fasted states, we called differentially enriched peaks using chromstaR 32 .
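Before turning to those differential regions, the expression metric and box plot conventions used in Fig. 5b,c can be summarized in a few lines (a minimal sketch; function names are ours):

```python
import numpy as np

def rpkm(counts, gene_length_bp, total_mapped_reads):
    """Expression in reads per kilobase per million mapped reads (Fig. 5b,c)."""
    return counts / (gene_length_bp / 1e3) / (total_mapped_reads / 1e6)

def box_whiskers(values):
    """Whisker bounds as defined for the Fig. 5 box plots: the most extreme
    data points within 1.5x IQR of the hinges (Q1, Q3)."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    upper = v[v <= q3 + 1.5 * iqr].max()
    lower = v[v >= q1 - 1.5 * iqr].min()
    return lower, upper

# Example: a 2 kb gene with 400 reads in a library of 20 million mapped reads.
print(rpkm(400, 2000, 20_000_000))  # -> 10.0 RPKM
```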
We found 812, 6,496 and 6,180 differentially enriched regions that were gained in livers of fasted mice for H3K9ac, H3K14pr and H3K14bu, respectively (data not shown). Notably, gene ontology analysis of these regions 33 revealed 'carboxylic acid metabolic process' as the most significantly enriched term ( Fig. 5d–f ). Moreover, other terms related to fatty acid or lipid metabolism were also enriched, thus indicating that lipid metabolism may be a primary target of fasting-induced histone acylations ( Fig. 5d–f ). Taken together, our data suggest that histone Kpr and Kbu are linked to transcriptional activation. Moreover, in combination with histone acetylation, Kpr and Kbu are strongly predictive of gene expression levels, and the gain of H3K14pr and H3K14bu in livers of fasted mice targets primarily lipid metabolism pathways. Histone acylations are differentially bound by the BAF or PBAF remodeling complex Acetylated lysines can be specifically recognized by bromodomains 34 , 35 . However, very little is known about which domains, if any, specifically bind to other short-chain acylations. A previous study 10 found that the recombinant bromodomains of BRD4 bind weakly to propionylated lysines, but an unbiased approach toward identifying binders has not been performed. Because we observed a correlation between the enrichment of multiple acylations and high levels of gene expression ( Fig. 5b,c ), we explored whether proteins or protein complexes binding to multiple acylation states might be involved in integrating and mediating the effect of acylations on chromatin function. To identify potential binders to histone Kpr and Kbu in an unbiased approach, we performed peptide pulldown experiments followed by mass spectrometry. We found that H3K14ac and H3K14pr peptides bound a near-identical set of proteins ( Fig. 6a and Supplementary Fig. 6a ). Notably, prominent among the H3K14ac and H3K14pr binders were all the subunits of the mammalian BAF or PBAF (polybromo-associated BAF) (also known as SWI/SNF) complex. However, we did not detect any BAF or PBAF subunits that bound specifically to H3K14bu ( Fig. 6a,b and Supplementary Fig. 6a ). We validated these results by immunoblotting for a subset of the (P)BAF subunits ( Fig. 6c ). Given that the (P)BAF subunits BAF180 (also known as PBRM1), SMARCA4 (BRG1), SMARCA2 (BRM) and BRD7 contain bromodomains, we reasoned that they might be involved in the observed binding activity. To test this possibility, we first asked whether the PBRM1 bromodomain binds acylated H3K14. Indeed, a recombinant second bromodomain of PBRM1 fused to GST (GST-PBRM1(2)) bound directly to peptides containing H3K14pr and H3K14ac ( Fig. 6d and Supplementary Fig. 6b ). We also validated our result, using an independent holdup assay developed previously 36 , in which biotinylated peptide resins were used to 'hold up' GST-PBRM1(2) molecules before the filtrate was analyzed by capillary electrophoresis. Similarly to the results of the peptide pulldown experiments, H3K14ac and H3K14pr peptides bound GST-PBRM1(2), whereas the H3K14un and H3K14bu peptides did not ( Fig. 6e ). Figure 6: Histone acylations differentially bind the BAF remodeling complex. ( a ) Volcano plot showing all enriched proteins in H3K14ac (left) and H3K14pr (right) peptide affinity purifications relative to H3K14 unmodified peptides. Proteins beyond the cutoff (−log 10 false discovery rate >1.301) were considered enriched or depleted. Subunits of the (P)BAF complex are in red.
( b ) Volcano plot showing all enriched proteins in H3K14bu peptide affinity purifications relative to H3K14 unmodified peptide. ( c ) H3K14 peptide affinity purifications from HeLa nuclear extract, immunoblotted with antibodies against (P)BAF complex subunits (BAF180, BAF60A, BRG1 and BAF53). A representative of two experiments is shown. Uncropped blot images are shown in Supplementary Data Set 1 . ( d ) Peptide affinity purifications with GST-tagged bromodomain 2 of BAF180 (PBRM1) (GST-PBRM1(2)). ( e ) Holdup assay 36 with GST-PBRM1(2), using biotinylated peptide resins or glutathione beads as a positive control. Unbound protein passed through the filter, and concentrations were measured by capillary electrophoresis. Mean binding intensity is shown, with error bars representing s.d. of three independent assays. ( f ) H3K14 peptide affinity purification from HeLa nuclear extract incubated in the presence of either DMSO (control) or bromodomain inhibitor PFI-3, and subsequent immunoblotting with antibody to PBRM1. Next, we inhibited BRG1 and PBRM1 bromodomain function in HeLa nuclear extract by using the inhibitor PFI-3, which binds the bromodomains, thus preventing binding to cognate targets 37 . PFI-3 treatment resulted in a reduction in PBRM1 binding to all H3K14 peptides ( Fig. 6f ). In conclusion, our unbiased approach suggested that Kpr and Kac are recognized by a highly overlapping set of bromodomain-containing proteins, including the BAF or PBAF complex. In contrast, the same complexes did not seem to recognize Kbu, presumably because of its longer acyl chain, which in turn might be recognized by yet-unidentified binders. Because Pbrm1 was a strong candidate binder for H3K14pr in cells, we investigated whether depletion of Pbrm1 affects the expression of H3K14pr-marked genes. We chose four H3K14pr-target genes and assessed the effect of Pbrm1 knockdown ( Fig. 7a,b ). Notably, in NIH3T3 cells, all of the target genes tested had reduced expression in siPbrm1 cells ( Fig. 7c ). ChIP–qPCR experiments at the TSSs of the same target genes revealed that the enrichment of neither H3K9ac nor H3K14pr was affected ( Fig. 7d ). Figure 7: Histone propionylation stimulates transcription. ( a ) Genome-browser snapshot of ChIP– and RNA-seq data from NIH3T3 cells at the Plin2 gene. ( b ) Immunoblot using antibody to Pbrm1 to verify depletion of Pbrm1 after siRNA-mediated knockdown. Stain-free image is shown as loading control. ( c ) RT–qPCR after knockdown of Pbrm1 in NIH3T3 cells at selected H3K14pr-enriched genes. Mean expression is normalized to that of β-actin ( Actb ). Error bars represent s.d. of three technical replicates of one representative experiment out of three independent experiments. ( d ) ChIP–qPCR analysis of H3K9ac and H3K14pr enrichments at the TSSs of the same genes as in c after knockdown of Pbrm1. The intergenic region is used as a negative control. The mean percentage of input is shown, and error bars represent s.d. of three technical replicates of a representative experiment out of two experiments. ( e ) Scheme of the stably integrated reporter system in 293 T-Rex cells. The luciferase gene was under the control of a TK promoter downstream of five Gal4-binding sites. PCAF was fused to the Gal4 DNA-binding domain (DBD) to activate transcription of the reporter. ( f ) ChIP–qPCR analysis of H3K9ac and H3K14pr at the TK promoter of the 293 T-Rex cells after transient transfection with or without a plasmid encoding Gal4-PCAF. Error bars represent s.d.
of three technical replicates of one representative experiment out of three. Enrichments were normalized to H3. ( g ) Mean relative luciferase expression after transient transfection of 293 T-Rex cells with control (CTRL) or Gal4-PCAF plasmids. Error bars represent s.d. of three technical replicates of one representative experiment out of three. ( h ) Scheme of an in vitro transcription assay. ( i ) In vitro transcription assay using a chromatin template. The recovered RNA was analyzed with denaturing PAGE, and a representative autoradiogram of two independent experiments is shown. Panels are from the same experiment and same gel. Uncropped autoradiogram is shown in Supplementary Data Set 1 . ( j ) Immunoblotting for H3K14pr after the in vitro transcription assay shown in i and extraction of proteins. Histone propionylation stimulates transcription To further investigate the role of histone propionylation in transcriptional activation, we applied a stably integrated reporter system in 293 T-Rex cells in which the luciferase gene was under the control of a TK promoter preceded by five Gal4-binding sites ( Fig. 7e ) 38 . ChIP–qPCR experiments revealed that, following transient transfection of a plasmid driving the expression of PCAF fused to a Gal4 DNA-binding domain, H3K14pr enrichments at the TK promoter increased ∼ 2.5-fold ( Fig. 7f ). This increase was accompanied by a ∼ 3.3-fold increase in luciferase expression ( Fig. 7g ), thus suggesting that the increased histone acylation at the TK promoter contributed to transcriptional activation. To test whether propionylation can indeed stimulate transcription, we used a cell-free in vitro transcription system in which we could add either acetyl-CoA or propionyl-CoA ( Fig. 7h ). In this system, transcription from chromatinized templates occurs in an activator- and CoA-dependent manner 39 . Notably, we observed that propionyl-CoA was able to stimulate transcription to a similar extent as acetyl-CoA ( Fig. 7i ). Using our antibody to H3K14pr, we confirmed that H3K14 was indeed propionylated on the in vitro –assembled chromatin ( Fig. 7j ). Taken together, our results strongly suggest that histone propionylation stimulates transcription, at least in vitro , and does so to a similar extent as one of the hallmarks of transcriptional activation, histone acetylation. Discussion Our findings provide an initial functional characterization of histone H3 tail acylations, particularly propionylation at H3K14 and its link to cellular metabolism. Our data integrate histone propionylation into the growing number of non-acetyl histone acylations 9 , 40 , and our combined in vivo and in vitro data suggest a functional role of histone propionylation in transcription. We found that the HATs p300, GCN5 and PCAF are able to utilize propionyl- and/or butyryl-CoA as acyl-group donors in vitro . This activity is consistent with findings from previous studies reporting that p300 and GCN5 propionylate and butyrylate histones in vitro 5 , 19 , 21 . However, our results demonstrate that GCN5 and PCAF regulate the levels of histone propionylation (Kpr) in vivo . Nevertheless, we cannot exclude the possibility that Kpr and Kbu might also be catalyzed by yet-uncharacterized acyltransferases and may even partially result from nonenzymatic acylation. Our results identify a previously unknown modification, H3K14 propionylation, in vivo . We mapped the enrichment profile of H3K14pr in mouse livers and found that H3K14pr marks active TSSs and promoters.
Notably, H3K14pr enrichment levels are strongly correlated with transcriptional activity in two different metabolic states. Our genome-wide analyses predicted that this short-chain acylation functions combinatorially with Kac and Kbu to promote a higher transcriptional output. How nonacetyl histone acylations would support this increased transcription is an intriguing question. Beyond charge neutralization of the target lysine, which could potentially increase DNA accessibility, our results suggest that one possible mechanism may be the recruitment of acylation-state-specific binding complexes, as shown previously for histone acetylation 3 , 41 , 42 and methylation. Notably, different histone lysine methylation states (for example, mono-, di- and trimethylation) lead to the recruitment of distinct readers 42 , 43 , 44 . In agreement with this result, we found in our unbiased search for binders that the (P)BAF chromatin-remodeling complex 45 , 46 , 47 recognizes distinct acylation marks. Thus, high local concentrations of Kac and Kpr might serve to efficiently recruit an overlapping set of transcriptional regulators necessary for high-level transcriptional output. Because we did not identify H3K14bu-specific binders, it is possible that the main function of Kbu might be to deter such binding, for example, preventing (P)BAF complex recruitment and therefore contributing to a dynamic association of factors with chromatin and high transcriptional activity 48 . Alternatively, Kbu may recruit additional binding proteins that we were unable to identify using our approach. Given that no histone H3 tail acetylation (including H3K14) has been associated with effects on global chromatin compaction to date, we do not anticipate any effect of H3K14pr or H3K14bu on global chromatin compaction 49 . Finally, our demonstration that propionyl-CoA carboxylase, an enzyme involved in metabolic disease 26 , 50 , alters global histone propionylation emphasizes a direct link between metabolic pathways that consume the cosubstrates and histone propionylation. Our results also suggest a potential role for histone propionylation in metabolic signaling and disease. Whether the changes observed in global levels of H3K14pr result from alterations in the enrichment of this modification at specific genomic loci remains to be determined. In conclusion, our study sheds light on the complexity of the regulation of chromatin function by histone modifications through the identification and characterization of previously unknown modification sites and types. Our results demonstrate a role of histone propionylation in transcription and suggest that histone propionylation, in combination with acetylation and butyrylation, may couple cellular metabolic state to chromatin structure and function. Methods Rabbit polyclonal-antibody generation and purification. All antibodies used were raised in rabbits (Biogenes GmbH) with the immunizing peptides (synthesized by Biosyntan or an in-house facility) STGGK(pr)APRKQGGC and STGGK(bu)APRKQGGC for H3K14pr and H3K14bu, respectively, according to a protocol from Biogenes. A list of peptides used in this study is available in Supplementary Table 1 . For purification, immunizing peptides were coupled to Sulfolink beads (Pierce) and used as bait according to the manufacturer's instructions. Different ratios (50:1 to 10:1) of serum to antibody were used depending on the antibody.
In some cases, depletion with beads coupled to unmodified or acetylated peptides was applied to obtain non-cross-reacting antibodies. Antibodies were eluted with 0.1 M glycine, pH 2.5, dialyzed against PBS and stored at 4 or −20 °C. Peptide dot blots. Peptides were quantified on the basis of lyophilized-pellet weight as well as with the Scopes method (absorbance at 205 nm) 51 . To test the specificity of histone-modification antibodies, serial dilutions of chemically synthesized peptides with or without different modifications at different sites (H3K14 or H4K16) were spotted directly onto nitrocellulose membranes of 0.1 μm pore size. The membrane was air-dried for 30 min and probed with antibodies. Mass spectrometric identification of histone lysine propionylation and butyrylation. Acid-extracted histones from HeLa cells were separated by C8 reversed-phase chromatography (Vydac C8, 5-μm beads, 300-Å pore size, 200 × 14 mm) on a GE Biosciences SMART system, as previously described 52 . We routinely loaded 400–500 μg of total protein (as estimated by Bradford assay) and collected fractions of 0.3 ml. To reduce the number of samples for MS analysis, two neighboring fractions were pooled, so that all histone fractions contained approximately 20 μg of protein. HeLa histone-protein-containing samples were separated by 4–20% Tris-glycine gels (1 mm, NuPAGE, Invitrogen). Dialyzed acid-extracted histones from mouse liver were separated on a 16% Tris-glycine WedgeWell gel (Invitrogen). Bands were stained with colloidal Coomassie stain (colloidal blue, Invitrogen), and histone bands were excised and digested by either limited trypsin proteolysis (HeLa and mouse liver histones) or Arg-C (HeLa histones) digestion. Limited trypsin digestion was performed in the following way. After reduction/alkylation with DTT (10 mM in 50 mM ammonium bicarbonate, 56 °C, 45 min) and iodoacetamide (55 mM, 25 °C, 20 min, dark), the gel pieces were subjected to extensive washing cycles and a final dehydration step (acetonitrile incubation followed by vacuum evaporation). After rehydration of the gel pieces with trypsin on ice (Promega, 12.5 ng/μl in 50 mM ammonium bicarbonate), we performed tryptic digestion for 15 min (HeLa histones) or 20 min (mouse liver histones) at 37 °C and stopped proteolysis instantly by acidification (TFA 1% (v/v) final concentration) and peptide desalting. Arg-C digestion was performed similarly, but enzyme incubation was carried out overnight. Throughout the procedure, we tried to avoid using alcohols as organic solvents, to minimize formation of artificial post-translational modifications. Peptide samples were desalted on STAGE tips, as described previously 53 , and analyzed via LC–MS with the following parameters. LC–MS parameters for HeLa histones: samples were separated within 60 min with a linear gradient starting from 4% buffer B (80% MeCN in 0.5% acetic acid) to 60% buffer B at a flow rate of 250 nl/min, and this was followed by a washout step (95% buffer B, 8 min, 500 nl/min) and reequilibration (2% buffer B, 7 min, 500 nl/min). Mass spectrometry (OrbitrapXL+ETD) was performed essentially as previously described 52 except that we acquired both Top10 CID and ETD data for most of the samples. LC–MS parameters for mouse liver histones: Peptides were separated on a 20-cm (75-μm ID) in-house-packed (ReproSil-Pur C18-AQ, 1.9 μm beads, Dr. 
Maisch Laboratories) fused silica emitter nanocolumn with a 60-min nonlinear gradient (0–8%, 5 min; 8–45%, 40 min; 45–80%, 4 min; 80%, 5 min; 80–0%, 1 min; 0%, 4.5 min) of buffer B (80% MeCN in 0.1% formic acid) at a flow rate of 250 nl/min via an Easy nLC1000 UHPLC interfaced with a Q Exactive mass spectrometer. MS acquisition used a Top10 DDA workflow. MS full scans (50-ms max. IT) were performed at 70,000 resolution (at m / z 200; profile mode) with a scan window of 300 to 1,650 m / z and an AGC target value of 3 × 10 6 . HCD MS/MS scans (120 ms max. IT) were acquired at 35,000 resolution, an AGC target of 1 × 10 5 and an isolation window of 1.6 AMU (NCE 28, exclusion of charge states +1 and unassigned). Data analysis of HeLa histone samples was conducted in PEAKS Studio 7.5, enabling the 'inchorus' mode (the latter including a Mascot 2.2 database search) and the Peaks PTM algorithm. We performed PEAKS Studio database searches with the following parameters: parent-mass error tolerance, 10 p.p.m.; fragment-ion error tolerance, 0.5 Da (CID, ETD); and charge-dependent precursor-ion m / z correction for charge states 1+ to 5+. We allowed a maximum of two missed cleavages for Arg-C and up to four missed cleavages for limited trypsin digestion, including peptides with two non-enzyme-specific ends. Carbamidomethylation was set as a fixed modification. As variable modifications, we enabled methionine oxidation; acetylation, propionylation and butyrylation of lysine; methylation and dimethylation of arginine and lysine; and trimethylation of lysine. In addition, we also considered modifications occurring at protein N termini. Data analysis of mouse liver histones was conducted essentially as described above but with PEAKS Studio 8.0 with the following changes. The HCD-spectrum fragment-ion tolerance was set to 0.02 Da, and cleavage specificity was set to fully tryptic (allowing four missed cleavages). MS/MS spectra of peptides carrying PTMs were accepted at an FDR of 1% and subjected to manual curation. SDS–PAGE, immunoblotting and peptide competition. Either self-cast or 4–20% Mini-PROTEAN TGX stain-free precast gels (Bio-Rad) were used for SDS–PAGE, and proteins were transferred with a Trans-Blot Turbo transfer system (Bio-Rad) to nitrocellulose membranes, according to the manufacturer's instructions. Membranes were blocked in Tris-buffered saline–Tween (TBST) with 0.2% Tween and 4% BSA at room temperature (RT) for 1 h. Different amounts of antibody in TBST with 4% bovine serum albumin (BSA) were incubated with the membrane with gentle agitation at 4 °C overnight. For peptide competition experiments, antibodies were pre-incubated with peptides for 30 min at 4 °C before addition to the membranes. The membranes were washed with three changes of TBST for 5 min each and incubated with a secondary antibody conjugated to horseradish peroxidase (HRP) in TBST with 4% BSA at RT for 1 h. The membrane was washed again with three changes of TBST for 5 min before antibody binding was detected by incubation of the membrane with an enhanced chemiluminescent (ECL) HRP substrate (Millipore or Bio-Rad) and imaged using film or a ChemiDoc imaging system (Bio-Rad). Quantifications of western blots were done using the ImageLab software (Bio-Rad) according to the instruction manual. Immunofluorescence. Mouse embryonic fibroblasts (MEFs) were seeded on glass coverslips 1 d before the experiment, so that they were 80–90% confluent by the next day. 
Coverslips were washed twice with PBS and fixed at RT with 4% paraformaldehyde (PFA) and 2% sucrose in PBS for 10 min. Fixation was stopped by washing coverslips three times with PBS. The cells were permeabilized with 0.5% Triton X-100 in PBS at RT for 20 min, washed two times in PBS and blocked with 4% BSA in PBST (with 0.2% Tween-20) at RT for 1 h. Primary antibodies were diluted 1:1,000–2,000 in 4% BSA in PBST, and the coverslips were incubated upside down in 100 μl antibody solution at 4 °C in a wet chamber overnight. The coverslips were washed three times for 5 min each in PBST and then incubated at RT with the fluorophore-coupled secondary antibody diluted 1:200 in 4% BSA in PBST for 45 min. The coverslips were washed again three times for 10 min in PBS and were then mounted on glass slides in Vectashield mounting medium (Vector Laboratories) containing the DNA intercalating dye DAPI. The coverslips were sealed with nail varnish and stored at 4 °C in the dark. Imaging was done using a Leica SP8 UV confocal microscope. Cell lines. All cell cultures were maintained at 37 °C in a humidified atmosphere containing 5% CO 2 . HeLa, human embryonic kidney (HEK293), Raw264.7 macrophages and MCF-7 cells were maintained in Dulbecco's modified Eagle's medium, high glucose (DMEM, PAA), supplemented with 10% fetal bovine serum (PerBio), 1% L -glutamine (200 mM) and 1× Pen/Strep (100×) solution (PAA). For NIH3T3 cells, 10% newborn calf serum (NCS) was used, with all other components the same as for the other cell lines. Mycoplasma testing was done for HeLa and NIH3T3 cells by an in-house tissue culture facility. Acid extraction of histones from cells and tissues. For acid extraction, cells were harvested and washed in ice-cold PBS once. The cell pellet was resuspended in PBS containing 0.5% Triton X-100 (v/v), protease inhibitors and 10 mM sodium butyrate (NaBu) at a cell density of 10 7 cells/ml. The cells were incubated on ice for 10 min to lyse the cell membrane, pelleted and washed again in half of the volume of the same buffer. The nuclear pellet was acid-extracted by resuspension in 0.2 M HCl and incubation on ice at a density of 4 × 10 7 cells/ml for at least 30 min. After acid extraction, the sample was centrifuged at 17,000 g for 15 min, and the acidic supernatant containing the histones was transferred to a new tube. Acid extracts were neutralized with 1/5 volume of 1 M Tris, pH 9.5, and protein concentrations were measured with Bradford assays. Acid extraction of histones from tissues was carried out as described previously 54 . After the 0.4 N H 2 SO 4 extraction step, supernatants were neutralized with 1/2 volume of 1 M Tris, pH 9.5, and dialyzed against 1× PBS. Histone acylation assays. Full-length HATs (p300, Flag-GCN5 and Flag-PCAF) were expressed and purified with a baculovirus system, as previously described 55 . Different histone substrates (calf thymus histones (Sigma (H9250)) or recombinant octamers) were mixed in a final reaction volume of 30 μl with 1× HAT buffer (5% glycerol, 50 mM Tris, pH 8.0, 0.1 mM EDTA, pH 8.0) containing 10 mM NaBu, 1× Complete protease-inhibitor cocktail (Roche), 1 mM DTT and 40 μM acyl-CoA (Sigma). For radioactive assays, [ 3 H]acetyl-CoA (specific activity 11.6 Ci/mmol, Hartmann Analytic), [ 3 H]propionyl-CoA (specific activity 100 Ci/mmol, American Radiolabeled Chemicals) or [ 3 H]butyryl-CoA (specific activity 110 Ci/mmol, American Radiolabeled Chemicals) was used. 
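To make the quantities in the radioactive assays concrete, the short Python sketch below converts the specific activities quoted above into radioactivity per reaction. It assumes, purely for illustration (the protocol does not state this), that the labeled acyl-CoA is present at the same 40 μM used in the nonradioactive reactions.

```python
# Illustrative arithmetic only; the 40 uM labeled acyl-CoA concentration is
# an assumption for this sketch, not a value stated for the radioactive assays.
REACTION_VOLUME_L = 30e-6   # 30-ul HAT reaction (from the protocol above)
ACYL_COA_CONC_M = 40e-6     # assumed concentration of the labeled CoA

specific_activities_ci_per_mmol = {
    "[3H]acetyl-CoA": 11.6,
    "[3H]propionyl-CoA": 100.0,
    "[3H]butyryl-CoA": 110.0,
}

nmol_per_reaction = ACYL_COA_CONC_M * REACTION_VOLUME_L * 1e9  # mol -> nmol
for coa, sa in specific_activities_ci_per_mmol.items():
    # 1 Ci/mmol is numerically equal to 1 uCi/nmol.
    uci_per_reaction = nmol_per_reaction * sa
    print(f"{coa}: {nmol_per_reaction:.1f} nmol -> {uci_per_reaction:.1f} uCi per reaction")
```

Under this assumption, each 30-μl reaction contains 1.2 nmol of acyl-CoA, corresponding to roughly 14–130 μCi depending on the label.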
Reactions were incubated at 30 °C for 1 h, stopped by addition of 6 μl of 6× Laemmli buffer and boiling for 5 min at 95 °C, and loaded on SDS–PAGE gels. For the acylation dot blot time-course assay, 2 μl of reaction (300 ng octamers) was spotted at each time point before immunoblotting. siRNA transfection of mammalian cells. ON-TARGETplus SMARTpool siRNAs were purchased from Dharmacon (GE Healthcare). siRNAs were resuspended to 100 μM stock concentration in 1× siRNA resuspension buffer (Thermo Fisher Scientific) and stored at −20 °C. For transfection, siRNAs were used at 10 nM final concentration to transfect HeLa cells with Lipofectamine RNAiMAX (Invitrogen) transfection reagent. The transfections were carried out with the Forward Transfection protocol provided in the manufacturer's instructions. Cells were harvested 72 h post-transfection for RT–qPCR, chromatin immunoprecipitation (ChIP) and/or western blot analysis. A list of siRNAs used in this study is provided in Supplementary Table 4 . RNA isolation and quantitative PCR. Total RNA was isolated with a Zymo Quick RNA prep kit (Zymo) according to the manufacturer's instructions. 300 μl of RNA lysis buffer was used per well of a six-well plate. RNA concentration was measured with a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific). 0.5–1 μg of total RNA was used for each reverse-transcription reaction. cDNA was synthesized with the RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific) using an oligo(dT) primer according to the protocol provided by the manufacturer. cDNA was diluted 1:2 or 1:5 in ultrapure H 2 O before RT–qPCR (LightCycler, Roche). Animal handling and preparation of chromatin from mouse livers. Mice from the C57BL/6J strain were used for all experiments. Maintenance and fasting experiments were done in accordance with the ethical regulations in the IGBMC and the Institut Clinique de la Souris in Strasbourg (ICS), France, in compliance with the French and European legislation on the care and use of laboratory animals. All animal experimentation was approved by the Direction des Services Vétérinaires du Bas-Rhin, Strasbourg, France. Refed control mice were fasted for 12 h during the day (7 a.m. to 7 p.m.), followed by a short refeeding (7 p.m. to 11 p.m.). This procedure was done to synchronize fed mice and to provide a better control than ad libitum-fed mice. Fasted mice were transferred into cages without food for 48 h (10 p.m. to 10 p.m.). The experiments were not randomized and were not performed with blinding. Mice for ChIP experiments were all 8- to 14-week-old males. The protocol to formaldehyde-cross-link fresh mouse livers was adapted from a previous study 56 with some modifications. All dissection and perfusion steps were done at room temperature. Cross-linking time was measured from the time formaldehyde reached the liver, as calculated on the basis of flow rate and tubing volume. Livers were perfused with 1% formaldehyde in PBS for 5 min. After cross-linking, livers were dissected out and placed in petri dishes containing 5 ml of cold buffer A on ice and diced into small pieces. The pieces were transferred into a 15-ml glass homogenizer (Kontes), and 5 ml more buffer A was used to wash any remaining liver pieces. Livers were then homogenized with a type A pestle with ∼ 10–15 strokes until no clumps remained. To remove debris, liver homogenate was filtered stepwise through a 250-μm tissue strainer (Pierce) into a 15-ml conical tube and kept on ice until all livers were processed. 
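Because the cross-linking clock above starts only once formaldehyde has displaced the dead volume of the perfusion line, the timing correction is a simple ratio of tubing volume to flow rate. A minimal Python sketch follows; the tubing volume and flow rate are hypothetical placeholders, not values from the protocol.

```python
# Hypothetical numbers for illustration; only the 5-min cross-linking target
# comes from the protocol above.
TUBING_VOLUME_ML = 1.5        # assumed dead volume of the perfusion line
FLOW_RATE_ML_PER_MIN = 3.0    # assumed perfusion flow rate
CROSSLINK_MIN = 5.0           # 1% formaldehyde perfusion time from the text

delay_min = TUBING_VOLUME_ML / FLOW_RATE_ML_PER_MIN   # time to reach the liver
total_min = delay_min + CROSSLINK_MIN
print(f"formaldehyde reaches liver after {delay_min:.1f} min; "
      f"stop perfusion at {total_min:.1f} min total")
```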
Liver homogenate was then either stored at −80 °C after snap freezing in liquid nitrogen or processed further to prepare chromatin. ChIP. To prepare chromatin, the NEXSON protocol was used as previously described 57 . ChIP was done according to previous protocols 52 , 58 . For immunoprecipitation with antibodies against histone acylations, an amount of chromatin equivalent to 25 μg of DNA was used. Before immunoprecipitation, the chromatin was diluted 1:10 with dilution buffer and precleared with 20–80 μl of a 50% slurry of salmon sperm DNA- and BSA-blocked Protein A/Protein G (3:1 mix) Sepharose beads (GE Healthcare) in dilution buffer for 1 h. After the precleared chromatin was recovered by centrifugation, antibody was added for immunoprecipitation overnight. 40 μl of Protein A and Protein G bead mix slurry was used to bind immunocomplexes for 3–4 h at 4 °C. The bound immunocomplexes were collected at 2,000 g for 1 min, washed twice with low-salt ChIP buffer (20 mM Tris-HCl, pH 8.0, 2 mM EDTA, 1% Triton X-100, 0.1% SDS and 150 mM NaCl), twice with high-salt ChIP buffer (with 500 mM NaCl) and once with LiCl wash buffer (20 mM Tris-HCl, pH 8, 1 mM EDTA, 250 mM LiCl, 1% NP-40 and 1% sodium deoxycholate) at 4 °C. After a final wash in TE buffer (10 mM Tris, pH 8, and 1 mM EDTA), the beads were resuspended in 200 μl freshly prepared elution buffer (1% SDS and 0.1 M NaHCO 3 ), and the immunocomplexes were eluted by incubation at 30 °C in a thermoshaker (Eppendorf) at 900 r.p.m. for 30 min. The eluates were de-cross-linked and purified with a PCR purification kit (Qiagen) before being used for ChIP–seq library preparation or qPCRs. A list of all primers used in this study can be found in Supplementary Table 3 . ChIP–seq data analysis. The ChIP–seq library was prepared according to Illumina protocols, and sequencing was done with the HiSeq2500 (Illumina) platform with a read length of 1 × 50 bp. We obtained at least 30 million reads per ChIP–seq sample. Only sequences with base-quality Phred scores greater than 30 were used. Sequenced reads were mapped to the mouse reference genome (mm9) with Bowtie 1 (ref. 59 ). Bowtie 1.0.0 was run with the following arguments: -m 1 --strata --best -y -S -l 40 -p 2, with fastq input and SAM output. SAMtools was used to filter and sort uniquely mapping reads and generate bam files 60 . Bam files were converted to bed files with BEDtools 61 . Bam files were converted to bigwigs with deeptools 62 for visualization in a genome browser, IGV 63 or UCSC 64 . MACS1.4 was used for peak calling for H3K9ac under the following parameters: --nomodel, --gsize 1.87e+9 and --qvalue 0.05 (refs. 65 , 66 ). For H3K14pr and H3K14bu, peak calling was performed in SICER 67 with the following settings: effective genome size fraction, 0.8; window size, 200 bp; gap size, 1,000 bp; redundancy threshold, 1. To find the overlapping regions between samples, Operate on Genomic Intervals in Galaxy 68 or BEDtools was used, and Venny was used to plot Venn diagrams 69 after defining target genes as genes with a peak within 1 kb of their transcription start site. Homer 70 was used to annotate peaks to different genomic regions. A GTF file containing annotations (mm9.ensgene.gtf) was downloaded from Ensembl (Ensembl v67). Average profiles of histone marks across transcription start sites or peaks were made in seqMINER 71 . Gene Ontology analysis was performed with DAVID 6.8 (ref. 72 ) by using Ensembl gene IDs as input. Differential peaks between the refed and the fasted conditions were called in chromstaR 32 . 
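As an illustration of the read-processing steps described above (Bowtie 1 with the listed arguments, followed by SAMtools filtering/sorting and BEDtools conversion), a minimal Python wrapper is sketched below. The sample name and index prefix are hypothetical, and this is not the authors' actual pipeline code.

```python
# Minimal sketch; assumes bowtie, samtools and bedtools are on PATH and that
# "mm9" is a prebuilt Bowtie 1 index prefix (file names are hypothetical).
import subprocess

sample = "H3K14pr_liver_refed"   # hypothetical sample name
index = "mm9"

# Bowtie 1 alignment with the arguments given in the text (-m 1 keeps only
# uniquely mapping reads); SAM output is written to a file.
with open(f"{sample}.sam", "w") as sam:
    subprocess.run(["bowtie", "-m", "1", "--strata", "--best", "-y", "-S",
                    "-l", "40", "-p", "2", index, f"{sample}.fastq"],
                   stdout=sam, check=True)

# Keep mapped reads (-F 4), sort into a bam file and index it.
subprocess.run(f"samtools view -b -F 4 {sample}.sam | "
               f"samtools sort -o {sample}.sorted.bam -",
               shell=True, check=True)
subprocess.run(["samtools", "index", f"{sample}.sorted.bam"], check=True)

# Convert the bam file to bed for downstream peak calling.
with open(f"{sample}.bed", "w") as bed:
    subprocess.run(["bedtools", "bamtobed", "-i", f"{sample}.sorted.bam"],
                   stdout=bed, check=True)
```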
Replicates were handled by chromstaR internally. Differential regions for every mark between the refed and fasted conditions (chromstaR mode = 'differential') were called with a bin size of 500 bp and a step size of 100 bp. To increase the stringency of the peak calls, differential regions were kept if the maximum posterior probability of the differential state anywhere within the region was higher than 1 − 10 −4 . Differential regions were exported for GO term enrichment analysis in GREAT 33 . For the gain of Kbu in the fasted state and the gain of Kpr in the fasted state, we considered only differential regions with a differential score >0.999999 and >0.9999999, respectively, to reduce the number of differential regions tested. RNA-seq and data analysis. RNA-seq data were generated from total RNA isolated from three refed and three 48-h-fasted mice and sequenced with the HiSeq2500 (Illumina) platform with 1 × 50-bp read length. Reads were mapped onto the mm9 assembly of the mouse genome with TopHat v2.0.10 (ref. 73 ) and the bowtie2 v2.1.0 aligner 74 . Only uniquely aligned reads were retained for further analyses. Quantification of gene expression was performed in HTSeq v0.5.4p3 (ref. 75 ) by using gene annotations from Ensembl release 67. Read counts were normalized across libraries with a method proposed previously 76 . Comparisons of interest were performed with a previously proposed method implemented in the DESeq2 Bioconductor library (DESeq2 v1.0.19) 77 . Resulting P values were adjusted for multiple testing using the Benjamini–Hochberg method 78 . Genes were considered upregulated if the log 2 fold change was greater than 1 and the adjusted P value was less than 0.05. Genes were considered downregulated if the log 2 fold change was less than −1 and the adjusted P value was less than 0.05. Peptide pulldown and identification of binder proteins. Peptide pulldowns using nuclear extracts (prepared according to the Dignam protocol 79 ) were performed according to the Vermeulen protocol 80 in three independent experiments with some changes as follows. C-terminally biotinylated peptides were first bound to streptavidin–Sepharose beads (GE) with a five-fold excess of peptide to saturate the binding capacity of the beads. Each pulldown reaction contained 10–20 μl of saturated beads and 500 μg of nuclear extract in a total volume of 600 μl pulldown binding buffer (50 mM Tris-HCl, pH 8, 150 mM NaCl, 0.25% NP-40, 1 mM DTT, 10 mM sodium butyrate, 10 mM nicotinamide and 1× protease-inhibitor cocktail (Roche)). The pulldown mix was incubated at 4 °C on a rotation wheel. Beads were then washed four times with 1 ml pulldown wash buffer (with 300 mM NaCl) for 5 min on a wheel. Bound proteins were eluted by boiling in 1× Laemmli buffer. In-gel digestion and mass spectrometric identification of bound proteins was performed as previously described 80 . Holdup assays for validation were performed as previously described 36 . Gal4 recruitment assay. T-Rex 293 cells (Invitrogen) with a stably integrated 5×Gal4RE-tk-luc-neo plasmid driving the expression of firefly luciferase under the control of the TK promoter downstream of five Gal4 DNA-binding sites were provided by R.M. and grown according to the manufacturer's instructions. A plasmid encoding PCAF (amino acids 352–832) fused to the Gal4 DNA-binding domain (DBD) or a control plasmid (pGL3-U6-sgRNA-PGK-puro) was cotransfected with pRL-CMV with Lipofectamine 3000 (Invitrogen). 
pRL-CMV, a Renilla reporter plasmid, was used to normalize firefly luciferase values for transfection efficiency in each well. Firefly and Renilla luciferase activities were measured 24 h after transfection with a Dual-Luciferase Reporter Assay System (Promega E1910), or cells were cross-linked for chromatin immunoprecipitation. In vitro transcription assay. Wild-type recombinant histones were expressed in Escherichia coli and purified from inclusion bodies as previously described 81 . Histone octamers were refolded and assembled with the previously described pG5MLP 5S array 82 by NAP1 and ACF 83 . In vitro transcription assays were performed in the presence of acetyl-CoA, propionyl-CoA or no coenzyme 84 . Statistics. The Wilcoxon test was used to evaluate the significance of the differences between median values of box plots. For all quantified western blot and RT–qPCR results, s.d. values of three technical or independent experiments were calculated using Microsoft Excel. Data availability. A Life Sciences Reporting Summary for this paper is available. All ChIP–seq and RNA-seq data are available at the GEO database under superseries accession number GSE101598 . All other data are available upon request. Additional information Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Accession codes Primary accessions Gene Expression Omnibus GSE101598 
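A minimal Python sketch of the dual-luciferase normalization used in the Gal4 recruitment assay above: firefly counts are divided by the Renilla (pRL-CMV) counts well by well and then expressed relative to the control. The counts are synthetic placeholders, not data from the study.

```python
# Synthetic example of dual-luciferase normalization; numbers are placeholders.
import statistics

firefly = {"CTRL": [1200, 1100, 1250], "Gal4-PCAF": [4100, 3900, 4300]}
renilla = {"CTRL": [800, 760, 820], "Gal4-PCAF": [790, 810, 800]}

# Normalize each well's firefly signal to its Renilla transfection control.
normalized = {cond: [f / r for f, r in zip(firefly[cond], renilla[cond])]
              for cond in firefly}

ctrl_mean = statistics.mean(normalized["CTRL"])
for cond, values in normalized.items():
    rel = [v / ctrl_mean for v in values]   # relative luciferase expression
    print(f"{cond}: mean = {statistics.mean(rel):.2f}, "
          f"s.d. = {statistics.stdev(rel):.2f}")
```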
One way in which histone modifications elicit biological responses is by being recognised by specific proteins, so-called "readers". Using a combination of pulldown assays and mass spectrometry, the team identified the specific reader proteins for these novel marks. The addition of these marks in response to fatty acid metabolism and binding of the reader proteins changes the signature of chromatin and thus its functional state. "These results are especially significant with regard to metabolic diseases, like diabetes and obesity," said Schneider. "Our aim now is to study the role of these new gene switches in disease models." The results suggest a possible role for histone propionylation in metabolic signalling and disease, as propionyl-CoA carboxylase (the enzyme that degrades this co-factor) is implicated in metabolic diseases and indeed alters histone propionylation. | 10.1038/nsmb.3490 
Chemistry | Using two CRISPR enzymes, a COVID diagnostic in only 20 minutes | Liu, T.Y. et al. Accelerated RNA detection using tandem CRISPR nucleases. Nat Chem Biol (2021). DOI: 10.1038/s41589-021-00842-2 Journal information: Nature Chemical Biology | http://dx.doi.org/10.1038/s41589-021-00842-2 | https://phys.org/news/2021-08-crispr-enzymes-covid-diagnostic-minutes.html | Abstract Direct, amplification-free detection of RNA has the potential to transform molecular diagnostics by enabling simple on-site analysis of human or environmental samples. CRISPR–Cas nucleases offer programmable RNA-guided RNA recognition that triggers cleavage and release of a fluorescent reporter molecule, but long reaction times hamper their detection sensitivity and speed. Here, we show that unrelated CRISPR nucleases can be deployed in tandem to provide both direct RNA sensing and rapid signal generation, thus enabling robust detection of ~30 molecules per µl of RNA in 20 min. Combining RNA-guided Cas13 and Csm6 with a chemically stabilized activator creates a one-step assay that can detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA extracted from respiratory swab samples with quantitative reverse transcriptase PCR (qRT–PCR)-derived cycle threshold ( C t ) values up to 33, using a compact detector. This Fast Integrated Nuclease Detection In Tandem (FIND-IT) approach enables sensitive, direct RNA detection in a format that is amenable to point-of-care infection diagnosis as well as to a wide range of other diagnostic or research applications. Main Current strategies for RNA detection in clinical samples based on qRT–PCR provide high sensitivity (limit of detection of ~1 molecule per µl) but are too complex to implement for rapid point-of-care testing, an essential component of pandemic control 1 , 2 , 3 . CRISPR–Cas proteins offer a simpler alternative to PCR-based methods due to their programmable, RNA-guided recognition of RNA sequences 4 , 5 , 6 . Detection takes advantage of the intrinsic enzymatic functions of CRISPR–Cas proteins from type III and type VI CRISPR–Cas systems, including the multisubunit effector Csm/Cmr or the single-protein effector Cas13 (refs. 7 , 8 , 9 , 10 ). Target RNA recognition by the effector triggers multiple-turnover trans -cleavage of single-stranded RNA (ssRNA) by either the same protein, in the case of Cas13, or by a separate protein called Csm6, in the case of Csm/Cmr 7 , 8 , 11 , 12 . Enzymatic RNA cleavage generates a fluorescent signal when directed toward a reporter oligonucleotide containing a dye and quencher pair 7 , 8 , 13 . Cas13 or Csm/Cmr is capable of reaching an RNA detection sensitivity in the attomolar range in under an hour when coupled to a target sequence amplification procedure, such as reverse transcription recombinase polymerase amplification (RT–RPA) or reverse transcription loop-mediated isothermal amplification (RT–LAMP) 7 , 8 , 9 , 10 , 14 , 15 . However, addition of RT–LAMP to CRISPR-based detection requires multiple liquid handling steps and/or high-temperature incubation (55–65 °C), procedures that are challenging to implement for point-of-care testing 9 , 10 , 16 , 17 , 18 . RT–RPA can be combined with Leptotrichia wadei Cas13a (LwaCas13a) detection in a single step, but the single-step assay exhibits a reduced sensitivity compared to the two-step assay, likely due to differences in optimal buffer and salt conditions for the enzymes in each step 19 , 20 . We reported that amplification-free RNA detection using L. 
buccalis Cas13a (LbuCas13a) with three guide RNAs could rapidly identify SARS-CoV-2 genomic RNA in human samples with PCR-derived C t values up to 22, corresponding to ~1,600 copies per µl in the assay 21 . When tested in a mobile phone-based imaging device, this assay detects as few as ~100–200 copies per µl of target RNA within 30 min (ref. 21 ). However, increased speed and sensitivity of one-pot detection chemistries are still needed to enable their widespread use for point-of-care diagnostics. Csm6, a dimeric RNA endonuclease from type III CRISPR–Cas systems, has the potential to boost RNA detection based on its endogenous function in signal amplification 9 , 10 , 22 . During CRISPR–Cas interference, activation of Csm or Cmr by viral RNA recognition triggers synthesis of cyclic tetra- or hexaadenylates (cA 4 or cA 6 ) that bind to the Csm6 CRISPR-associated Rossmann fold (CARF) domains and activate its higher eukaryote/prokaryote nucleotide-binding (HEPN) domains’ ribonuclease activity 5 , 11 , 12 , 23 . However, diagnostic methods using this cascade were limited to a detection sensitivity of ~500 fM to 1 nM RNA (~10 5 –10 9 copies per µl) without inclusion of RT–LAMP to amplify the target sequence 9 , 10 . Use of Enterococcus italicus Csm6 (EiCsm6) in a reaction where LwaCas13a’s target-triggered trans -ssRNA cleavage generated linear hexaadenylates with a 2′,3′-cyclic phosphate (A 6 >P, activating ligand of EiCsm6) resulted in low detection sensitivity (1 µM target RNA) and no reduction in detection time 22 . Thus, current methods using Csm6 have not resulted in a level of signal amplification that enables CRISPR proteins to directly detect RNA at a concentration relevant for diagnostics. Here, we describe a tandem nuclease assay using Cas13a and Csm6 to achieve both high sensitivity and fast signal generation without requiring a preceding target amplification step. This involved the design of potent, chemically stabilized activators for Csm6 as well as use of eight different CRISPR RNA (crRNA) sequences for Cas13a detection. We also show that this assay can be implemented in a portable device, consisting of a microfluidic chip and a compact detector, to detect viral RNA extracted from human samples, indicating that the assay is robust and simple to adapt for use in point-of-care testing workflows. These results highlight the value of combining unrelated CRISPR–Cas effectors for sensitive, one-pot detection of RNA. Results Integrating LbuCas13a and Thermus thermophilus Csm6 (TtCsm6) for rapid RNA detection The minimal signal generation observed with Csm6 in RNA detection assays has limited its use for diagnostics 22 . We hypothesized that this limitation could be explained by the ability of some Csm6 proteins to degrade cA 6 or cA 4 activators over time, causing autoinactivation 23 , 24 , 25 , 26 , 27 . Using the Csm6 protein from the T. thermophilus type III-A system 12 , 28 , 29 , which recognizes cA 4 or A 4 >P for activation 11 , 12 , we first tested whether 5′-A 3–6 U 6 -3′ oligonucleotides would produce TtCsm6-activating ligands when cleaved by RNA target-bound LbuCas13a (Fig. 1a,b ). The oligonucleotide A 4 -U 6 stimulated robust reporter cleavage by TtCsm6 relative to oligonucleotides containing A 3 , A 5 or A 6 at the 5′ end (Fig. 1b ), consistent with TtCsm6’s preference for a linear activator resembling its natural cA 4 ligand 11 , 12 . 
Furthermore, the reaction requires the LbuCas13a–crRNA complex, its target RNA and the A 4 -U 6 oligonucleotide, confirming that TtCsm6 activation is tightly coupled to target RNA detection by LbuCas13a via A 4 -U 6 cleavage (Fig. 1c ). Fig. 1: Activation and inactivation of TtCsm6 in an LbuCas13a–TtCsm6 assay. a , Schematic of TtCsm6 activation by LbuCas13a assembled with a crRNA. Binding of target RNA (blue) by LbuCas13a (pink) leads to activation of its trans -ssRNA cleavage, which results in trimming of the poly(U) region (red) from the TtCsm6 activator. This liberates an oligoadenylate activator with a 2′,3′-cyclic phosphate (yellow) that binds to the TtCsm6 CARF domains (dark blue) and activates the HEPN domains (light blue) for cleavage of a fluorophore quencher RNA reporter. b , LbuCas13a–TtCsm6 assay with TtCsm6 activators (2 µM) containing different lengths of oligoadenylates (A 3 –A 6 followed by U 6 ). LbuCas13a is complexed with crRNA GJK_073 , whose sequence is from a spacer found in a Lachnospiraceae bacterium genome. All reactions contain 200 pM target RNA ( GJK_075 ). Fluorescence is plotted as F / F 0 (fluorescence (F) divided by initial fluorescence at time t = 0 ( F 0 )). The bar graph (right) shows the mean normalized fluorescence and s.e.m. as error bars ( n = 3) at the assay endpoint (118 min). c , LbuCas13a–TtCsm6 assay with the A 4 -U 6 oligonucleotide plotted alongside controls. Cas13 refers to LbuCas13a assembled with crRNA GJK_073 . ‘Reporter only’ refers to a reaction with only the fluorescent reporter in buffer. Mean normalized fluorescence and s.e.m. ( n = 3) at the assay endpoint of 118 min are plotted as in b . d , As in b but with varying concentrations of the A 4 -U 6 oligonucleotide. e , LbuCas13a–TtCsm6 reactions with 1 µM A 4 -U 6 were monitored until the signal reached a plateau. The indicated reagents (A 4 -U 6 oligonucleotide, target RNA, TtCsm6 protein, or buffer) were then added to the reaction where indicated by the arrow. A control reaction without target RNA in the reaction is also overlaid for comparison (no target RNA). Mean normalized fluorescence and s.e.m. ( n = 3) are plotted as lines with error bars. Source data Full size image Activation of TtCsm6 with varying concentrations of A 4 -U 6 produced an initial burst of fluorescence followed by a plateau within ~20–30 min, with the final fluorescence level proportional to the amount of Csm6 activator present (Fig. 1d ). Because equal amounts of reporter molecules were present in all reactions, the plateau in the fluorescence signal likely corresponds to a cessation of reporter cleavage by TtCsm6. Addition of A 4 -U 6 activator to an LbuCas13a–TtCsm6 reaction in which the fluorescence had plateaued was shown to rapidly increase reporter cleavage by TtCsm6, while addition of more target RNA or TtCsm6 had no effect relative to a buffer control (Fig. 1e ). Taken together, these data suggest that Csm6’s A 4 >P ligand is depleted over time, thereby deactivating its RNase activity. Consistent with this conclusion, direct activation of Csm6 with 0.5–2 µM A 4 >P in the absence of LbuCas13a exhibited a similar pattern of inactivation as that observed for the full LbuCas13a–TtCsm6 reaction with A 4 -U 6 (Extended Data Fig. 1a ). 
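This autoinactivation behavior can be illustrated with a toy kinetic model (our own sketch, not the modeling referenced in Extended Data Fig. 2): active Csm6 both cleaves the reporter and degrades its own activator, so reporter cleavage stalls once the activator is consumed, whereas a nondegradable activator sustains cleavage. All rate constants below are arbitrary illustrative values.

```python
# Toy two-species model: A = activator, R = intact reporter. Rate constants
# are arbitrary; setting k_deg = 0 mimics a degradation-resistant activator.
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k_deg):
    A, R = y
    active = A / (A + 0.1)        # saturable fraction of Csm6 activated
    dA = -k_deg * active * A      # Csm6-catalyzed activator degradation
    dR = -0.05 * active * R       # reporter cleavage by active Csm6
    return [dA, dR]

t_eval = np.linspace(0, 60, 200)  # minutes
for k_deg, label in [(0.5, "degradable activator"),
                     (0.0, "stabilized activator")]:
    sol = solve_ivp(model, (0, 60), [1.0, 1.0], args=(k_deg,), t_eval=t_eval)
    print(f"{label}: fraction of reporter cleaved at 60 min = "
          f"{1 - sol.y[1][-1]:.2f}")
```

In this caricature, the degradable activator yields a signal that plateaus well below completion, mirroring the plateaus in Fig. 1d,e, while the stabilized activator sustains reporter cleavage.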
Liquid chromatography–mass spectrometry (LC–MS) analysis also showed that A 4 >P is degraded to A 2 >P following incubation with TtCsm6, suggesting that linear activators, as with cyclic oligoadenylates 24 , 25 , 26 , are subject to Csm6-catalyzed cleavage (Supplementary Table 1 ). In the context of the natural cA 4 ligand, A 4 >P may represent an intermediate on the pathway to full TtCsm6 inactivation (Fig. 2a ) 25 , 26 . Site-selective chemical stabilization of Csm6 activators Mathematical modeling suggested that blocking degradation of A 4 >P by Csm6 could dramatically improve fluorescent signal generation in a Cas13–Csm6 detection assay (Extended Data Fig. 2 ). Mutations in Csm6 or complete replacement of the 2′-hydroxyl (2′-OH) groups in cyclic oligoadenylates with 2′-fluoro (2′-F) or 2′-deoxy (2′-H) groups were previously shown to block activator degradation, but these strategies either abolished or drastically lowered the activation of Csm6 for RNA cleavage 23 , 25 . We wondered whether site-selective chemical modifications in linear A 4 >P activators might prevent degradation of the Csm6-activating oligonucleotide, while still maintaining high-level activation of Csm6. To test whether modified linear activators can increase detection sensitivity and/or speed in an LbuCas13a–TtCsm6 assay, we replaced the 2′-hydroxyls of A 4 -U 6 with either single or triple 2′-H, 2′-F or 2′- O -methyl (2′-OMe) substitutions to block cleavage within the A 4 sequence (Fig. 2b and Extended Data Fig. 3a–c ). Although inserting multiple modifications into the activators led to slow activation of TtCsm6, a single substitution of the 2′-hydroxyl at A2 with either a 2′-deoxy or 2′-fluoro modification boosted the target RNA detection limit from 1.25 × 10 6 copies per µl (2 pM) to 1.25 × 10 4 copies per µl (20 fM), constituting a 100-fold increase in sensitivity over an unmodified A 4 -U 6 activator (Fig. 2b and Extended Data Fig. 3a–c ). In addition, these singly modified activators enabled faster distinction between the sample fluorescence signal and background signal for reactions measured with 1.25 × 10 6 copies per µl (2 pM; above background at ~10 min), 1.25 × 10 5 copies per µl (200 fM; above background at ~20 min) and 1.25 × 10 4 copies per µl (20 fM; above background at ~40 min) of target RNA (Fig. 2b ). The single-fluoro A 4 -U 6 also exhibited improved signal-to-background over the single-deoxy A 4 -U 6 , indicating its superiority in maintaining TtCsm6 activation (Fig. 2b ). Fig. 2: Effect of single 2′-activator modifications on Csm6-based signal generation in an RNA detection assay. a , Schematic showing degradation of cA 4 to A 4 >P and A 2 >P by TtCsm6’s CARF domains (dark blue) and its effect on RNA cleavage by the HEPN domains (light blue). Binding of cA 4 or A 4 >P to TtCsm6 can activate it for ssRNA cleavage. Cleavage of cA 4 or A 4 >P to A 2 >P by the CARF domains leads to inactivation of TtCsm6’s RNase activity. b , LbuCas13a–TtCsm6 reactions with 2 µM unmodified A 4 -U 6 (left) or A 4 -U 6 bearing a 2′-fluoro (F; center) or 2′-deoxy (H; right) modification on the second nucleotide (A2). LbuCas13a is assembled with crRNAs 604 and 612, which target an in vitro transcribed RNA corresponding to a fragment of the SARS-CoV-2 genome (gBlock). RNA was added at concentrations ranging from 0 copies per µl to 1.25 × 10 6 copies per µl (2 pM). Controls lacking target RNA and the TtCsm6 activator or containing only the fluorescent reporter in buffer were run in parallel and overlaid in each panel. 
A schematic of the modified activator is shown above each graph. Adenosines with a 2′-hydroxyl (OH), 2′-F or 2′-H are shown in gray, pink or white, respectively. Mean normalized fluorescence and s.e.m. ( n = 3) are plotted over 60 min as lines with error bars at each measurement. c , As in b , but using EiCsm6 and its activators, which include A 6 -U 5 (left), a single-fluoro A 6 -U 5 (center) or a single-deoxy A 6 -U 5 (right) instead of TtCsm6 and A 4 -U 6 activators. The 2′ modification is on the third adenosine (A3) to block degradation of A 6 >P to A 3 >P by EiCsm6 (ref. 23 ). Source data Full size image To confirm that the enhanced signal amplification provided by the singly modified A 4 -U 6 activators was indeed due to better activation by the A 4 >P ligand, we showed that A 4 >P oligonucleotides containing a 2′-fluoro or 2′-deoxy group at the A2 nucleotide could sustain TtCsm6’s activity for longer times than the unmodified A 4 >P (Extended Data Fig. 1a–c ). In addition, incubation of TtCsm6 with the single-fluoro A 4 >P produced A 3 >P, indicating that TtCsm6-catalyzed degradation of A 4 >P to A 2 >P was blocked by the modification but cleavage could occur at other positions (Supplementary Tables 1 and 2 ). Interestingly, we found that application of an analogous single-fluoro substitution strategy to EiCsm6, an ortholog that uses cA 6 /A 6 >P as an activating ligand 12 , 23 , did not lead to a substantial improvement in the sensitivity or speed of an LbuCas13a–EiCsm6 RNA detection assay (Fig. 2c and Extended Data Fig. 4a,b ). It is possible that the longer A 6 >P activator of EiCsm6 is more susceptible to nucleolytic degradation than the shorter A 4 >P activator of TtCsm6. Taken together, these results demonstrate that the single-fluoro A 4 -U 6 activator of TtCsm6 improves signal amplification by 100-fold relative to an unmodified activator in a tandem nuclease detection assay with LbuCas13a. Programmability and benchmarking of RNA detection For TtCsm6 and its modified activator to be useful for enhancing RNA detection sensitivity and/or speed, the programmable nature of the CRISPR nuclease used in tandem for detection must be preserved. To test this in our Cas13–Csm6 assay, we added TtCsm6 and its single-fluoro activator to an LbuCas13a protein programmed with different crRNA sequences that target the RNA genome of SARS-CoV-2, the causative agent of COVID-19 (Fig. 3a ). Detection using different crRNA sequences exhibited similar sensitivity and kinetics (Fig. 3a ), showing that this one-step assay is programmable and thus could potentially be adapted for detection of virtually any RNA sequence. To further improve detection sensitivity, we complexed LbuCas13a with a pool of eight crRNA sequences, which enables coverage of a wide range of SARS-CoV-2 variants (Supplementary Table 4 ) and exhibits improved signal-to-background compared to two crRNAs (Extended Data Fig. 5a–c ). Using this optimized set of crRNAs, LbuCas13a could detect as few as 63 copies per µl of externally validated Biodefense and Emerging Infections (B.E.I.) SARS-CoV-2 RNA in 2 h (Fig. 3b ). Inclusion of these eight guides in an LbuCas13a–TtCsm6 tandem reaction enabled detection of 31 copies per µl of SARS-CoV-2 RNA in the same time frame (Fig. 3c ). To determine whether addition of TtCsm6 could accelerate the time to detection, we compared the results of these two detection chemistries after a 20-min reaction time. 
While LbuCas13a was unable to detect concentrations ranging from 31 to 125 copies per µl by 20 min, the assay containing both LbuCas13a and TtCsm6 could detect 31 copies per µl within 20 min ( P < 0.05; Fig. 3b,c and Extended Data Fig. 6 ). Taken together, these results demonstrate that, with optimized crRNAs for LbuCas13a and a chemically stabilized activator for TtCsm6, the tandem CRISPR nuclease assay enables amplification-free detection of RNA sequences from an infectious pathogen with both high sensitivity and a rapid time to detection. Fig. 3: Programmability and benchmarking of the LbuCas13a–TtCsm6 assay. a , Testing the programmability of the LbuCas13a–TtCsm6 assay. LbuCas13a is assembled with crRNAs 604 and 612 (left) or other crRNAs (516, 517 or 528) targeting SARS-CoV-2 RNA. Twist synthetic SARS-CoV-2 RNA was used as the target. Mean normalized fluorescence ( F / F 0 ) values ± s.e.m. ( n = 3) are plotted as lines with error bars over the reaction time course. b , Detection of B.E.I. SARS-CoV-2 RNA using LbuCas13a assembled with eight crRNAs (604, 612, 542, 546, 564, 569, 588 and 596). The slope of fluorescence increase (arbitrary units (AU) per min) due to reporter cleavage was analyzed at the assay endpoint (118 min) and at an earlier time point (20 min). The mean slope ± 95% confidence interval (CI) was plotted for each RNA concentration, with individual slopes overlaid as clear circles ( n = 3). Pairwise comparisons to the control (0 copies per µl of RNA) were done by analysis of covariance (ANCOVA), as described in ref. 21 . Two-tailed P values are shown above each individual comparison; comparisons that were not significant (NS) have a P value of >0.05 or signal that was lower than the control. RNA concentrations to the left of the dotted line were detected. c , LbuCas13a–TtCsm6 detection of B.E.I. SARS-CoV-2 RNA using eight crRNAs as in b and the single-fluoro A 4 -U 6 activator of TtCsm6. The mean fluorescence ( F / F t = 6 ) ± 95% CI was determined at 20 min and 118 min, with individual F / F t = 6 values overlaid as clear circles ( n = 3). Pairwise comparisons of reactions containing target RNA with the control (0 copies per µl of RNA) were done using an unpaired t -test with Welch’s correction. One-tailed P values are shown above the graph; non-significant P values ( P > 0.05) are indicated by NS. d , Accuracy of the LbuCas13a–TtCsm6 assay over 20 replicates. Each replicate containing SARS-CoV-2 RNA was compared to the 95th percentile of the control distribution to determine if it had higher signal; a difference was considered significant when the P value was ≤0.05. The number detected out of 20 replicates is shown at 16, 30 and 60 min for varying concentrations of B.E.I. SARS-CoV-2 RNA. The FDA’s threshold for limit of detection is shown as a red dotted line. Source data Full size image In addition to sensitivity and speed, diagnostic assays must meet the accuracy threshold set by the Food and Drug Administration (FDA) for emergency use authorization, which stipulates that 19 of 20 replicates (95% accuracy) are detected at ~1 to 2 times the limit of detection. To determine if the LbuCas13a–TtCsm6 tandem nuclease assay would meet this validation criterion, we ran 20 replicates of our assay around the limit of detection and analyzed individual replicates by comparing them to the 95th percentile of the negative control distribution (Fig. 3d and Extended Data Figs. 7 and 8 ; see Methods ). 
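A minimal Python sketch of that replicate analysis is given below: each positive replicate is called detected when its signal exceeds the 95th percentile of the no-target control distribution. The data here are synthetic placeholders, not the study's measurements.

```python
# Synthetic illustration of the 20-replicate detection call.
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(1.00, 0.02, size=20)     # no-target F/F(t=6) values
replicates = rng.normal(1.12, 0.05, size=20)   # replicates near the LoD

threshold = np.percentile(controls, 95)        # 95th percentile of controls
n_detected = int((replicates > threshold).sum())
verdict = "meets" if n_detected >= 19 else "does not meet"
print(f"threshold = {threshold:.3f}; {n_detected}/20 detected "
      f"({verdict} the 19/20 criterion)")
```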
Our assay detected 125, 63 and 31 copies per µl of viral RNA with 100% accuracy (20/20 replicates) and 16 copies per µl with 75% accuracy (15/20 replicates) after 60 min, indicating that 31 copies per µl is the limit of detection as defined by the FDA (Fig. 3d ). In addition, 125 copies per µl and 63 copies per µl of viral RNA could be detected with 95% accuracy (19/20 replicates) in 16 min and 30 min, respectively. Taken together, the performance and simplicity of the LbuCas13a–TtCsm6 tandem nuclease chemistry create a FIND-IT assay that integrates the sensitivity of LbuCas13a with the accelerated time to detection provided by the chemically stabilized TtCsm6 activator while simultaneously achieving the accuracy required for diagnostic tests. Detecting viral RNA with FIND-IT on a compact detector To demonstrate the feasibility of incorporating FIND-IT into a point-of-care testing workflow, we developed a detector consisting of a microfluidic chip with reaction chambers, a heating module that maintains the reactions at 37 °C and a compact fluorescence imaging system (Fig. 4a – c ). The detector, which simultaneously monitors signal from two reaction chambers, is capable of sensing changes in signal as small as 1%, corresponding to a difference of three times the root mean square error (r.m.s.e.) in the mean signal. We applied LbuCas13a–TtCsm6 reactions containing either target RNA (B.E.I. SARS-CoV-2 genomic RNA) or water (no target RNA) to the reaction chambers and monitored the fluorescence signal for 1 h (Fig. 4d ). The signal in a reaction containing 400 copies per µl of extracted SARS-CoV-2 genomic RNA increased non-linearly, resulting in a ~4.7-fold increase in 1 h, while the negative control lacking target RNA exhibited a ~1.7-fold change (Fig. 4d , left). This ~270% increase in fluorescence of the 400 copies per µl reaction compared to the background control contrasts with the ~3% variation between two negative controls run side by side (Fig. 4d , right). The substantial difference in output between positive and negative samples suggests that the LbuCas13a–TtCsm6 reaction is chemically compatible with the microfluidic chip and, when implemented in a compact detector, could be amenable to use in point-of-care diagnostic testing strategies. Fig. 4: Implementing FIND-IT on a compact fluorescence detector for analysis of COVID-19 samples. a , A compact and sensitive fluorescence detector for the FIND-IT assay. The heating module (I), sample cartridge (II), imaging optics (III) and camera (IV) are indicated by Roman numerals. The size of the imaging optical train is shown above. b , Photograph showing the compact fluorescence imaging system; scale bar, 20 mm. c , Photograph showing the imaging chambers on the microfluidic chip; scale bar, 1.75 mm. d , Detection of B.E.I. SARS-CoV-2 RNA on a compact detector using the FIND-IT assay. Either 400 copies per µl of RNA or water (0 copies per µl) was added to the reactions. Images (top) show the reaction mixture in the imaging wells after 60 min, and graphs (bottom) show the full time courses of the normalized fluorescence signal ( F / F 0 ) for each well over 1 h. Examples of regions of interest used for integrating fluorescence intensity in the wells are shown as orange boxes. e , Detection of SARS-CoV-2 RNA in total RNA extracted from respiratory swab samples obtained from the Innovative Genomics Institute (IGI) testing laboratory 30 . 
Each sample was run in parallel on the microfluidic chip with a no target control (0 copies per µl), as in d . Nine positive samples ( C t = 9 to 33) and six negative samples ( C t > 37) were tested, and their relative fluorescence change over background was determined. The detection threshold was set as two s.d. above the mean fluorescence change of negative samples relative to their controls ( n = 6; see Methods ). The percentage of positive and negative samples detected by 20 min (dark blue) and 40 min (light blue) is shown. Full time course data are provided in Extended Data Fig. 9 , and individual C t values of the samples are provided in Supplementary Table 5 . f , Scatter dot plot showing the C t values of 39 positive samples identified in a total of 308 samples obtained from the IGI testing laboratory 30 . A green dotted line indicates a C t value of 33, corresponding to the highest C t value detected by FIND-IT in e . Source data Full size image To test whether FIND-IT could detect SARS-CoV-2 genomic sequences in total RNA extracted from human respiratory swab samples, we tested nine positive samples with qRT–PCR-derived C t values ranging from 9 to 33 (Supplementary Table 5 ) as well as six negative samples ( C t > 37) on the compact detector (Fig. 4e and Extended Data Fig. 9 ) 30 . To account for fluorescence variations from cartridge to cartridge, we determined the fluorescence change of each sample relative to a no target control that was run in parallel, as shown in Fig. 4d (Extended Data Fig. 9a,b ; see Methods ). To set a threshold for detection of positive signal, we required that the relative change of a sample exceed the mean and two standard deviations of the clinical negative distribution. Using this criterion, we found that FIND-IT detected nine of nine samples with C t values ranging from 9 to 33 by 40 min, indicating concordance of our test with the previous qRT–PCR analysis (Fig. 4e ). All except one of the nine positive samples could also be detected at 20 min, highlighting the fast detection speed achievable with this tandem nuclease chemistry (Fig. 4e and Supplementary Table 5 ). In addition, the clinical negative samples ( C t > 37) were not detected above the detection threshold at 20 or 40 min (Fig. 4e ). A tenfold dilution of the sample with a C t value of 33 (~three C t values higher; Extended Data Fig. 10 ) reduced signal-to-background levels, as expected (Extended Data Fig. 9a ). Based on the C t values of all COVID-19-positive samples ( n = 39) identified in samples from the IGI testing laboratory ( n = 308), our assay would be expected to detect ~95% of all positives within 40 min (Fig. 4f ) 30 . This shows that, when combined with upstream RNA extraction, this assay could be used as part of a diagnostic testing workflow to capture most individuals at the early, most infectious stages of COVID-19, when C t values typically range between 20 and 30 (ref. 31 ). Discussion CRISPR–Cas nucleases are attractive tools for diagnostic applications given their programmability and ability to directly detect sequences in viral genomes. However, increases in the sensitivity and speed of CRISPR-based detection are needed to enable widespread use of these technologies as diagnostics. In this study, we demonstrate that LbuCas13a can be coupled with TtCsm6 using chemically modified RNA activators, leading to a 100-fold increase in sensitivity over unmodified activators. 
Combining TtCsm6 with an LbuCas13a effector assembled with eight different crRNAs also enabled detection of as few as 31 copies per µl of extracted viral RNA, with detection accuracy reaching 100% by 60 min over 20 replicates. Higher viral RNA concentrations can be detected with similar accuracy at earlier time points, with 125 copies per µl and 63 copies per µl detected in 16 and 30 min, respectively. We also show that the intrinsic programmability of LbuCas13a is preserved in this assay with different crRNAs targeting the SARS-CoV-2 viral genome. The assay can be readily adapted to a compact device composed of a microfluidic cartridge integrated with an LED camera. It also detected SARS-CoV-2 RNA in clinical samples with C t values ranging between 9 and 33, indicating its ability to potentially capture individuals at highly infectious stages of disease 31 . Full concordance of positive sample detection with previous analysis by qRT–PCR in the IGI testing lab was also observed by 40 min. Implementation of a full point-of-care testing strategy using FIND-IT will require additional development of suitable sample collection and processing procedures as well as a robust data analysis and management pipeline. Future studies with greater numbers of human samples will also be needed for clinical validation of diagnostic tests that utilize FIND-IT for RNA detection. The sensitive and accelerated response of this one-pot detection chemistry could have broad use for detection of SARS-CoV-2 RNA and also other viral or cellular RNA sequences in the future. This approach is one of the few diagnostic technologies that does not require target amplification, relying instead on tandem nucleases, Cas13 and Csm6, to directly sense RNA with both high sensitivity and fast signal generation. It achieves a sensitivity of 31 copies per µl of RNA, which is lower than the 100 copies per µl concentration considered to be useful for diagnostic surveillance and pandemic control 1 . Importantly, the FIND-IT chemistry can reach this detection sensitivity rapidly and in the context of a single combined reaction containing both nucleases and the activator oligonucleotide. This workflow could potentially reduce costs and simplify the development of point-of-care tests compared to assays that require additional reagents for reverse transcription, target amplification and in vitro transcription before CRISPR-based detection 9 , 10 , 16 , 17 , 20 , 22 . Amplification-free chemistries would also help meet a critical need when the supply of amplification-based reagents is strained, such as during a pandemic. Removing the amplification step also lowers the risk of target contamination both within and across sample batches. Finally, Csm6 has the potential to be integrated with other Cas13-based RNA detection platforms 21 , 32 and thus could enhance time to detection and/or sensitivity for a variety of diagnostic applications. These results establish FIND-IT as a simple, one-pot assay that enables rapid RNA detection with high sensitivity and accuracy. The programmability of FIND-IT also indicates that it could be adapted to detect virtually any RNA sequence. Furthermore, we show that this assay can be implemented on a compact detector and thus would be amenable to use in point-of-care diagnostics. FIND-IT demonstrates the value of stabilizing Csm6 nuclease activation and deploying it in tandem with RNA-cleaving CRISPR–Cas proteins for fast, sensitive RNA detection in a simple, amplification-free format 33 , 34 , 35 , 36 . 
This technology could enable practical on-site detection of viral or human RNA in clinical samples or plant, fungal or microbial RNA in environmental samples. Methods Protein purification The DNA construct encoding LbuCas13a was codon-optimized for expression in E. coli and cloned with an N-terminal His-SUMO tag by Genscript (pGJK_His-SUMO-LbuCas13a) 13 . LbuCas13a expression and purification were performed as previously described 13 with modifications. A construct encoding N-terminal His 6 -maltose binding protein (MBP)-tagged LbuCas13a (p2CT-His-MBP-Lbu_C2c2_WT; used in Fig. 1 and Extended Data Fig. 3 ) or His 6 -SUMO-tagged LbuCas13a (used in Figs. 2 – 4 and Extended Data Figs. 1 and 4 – 9 ) was transformed into E. coli Rosetta 2(DE3) pLysS cells cultured in Terrific broth at 37 °C and expressed as described previously 13 . Cell pellets were resuspended in lysis buffer (50 mM HEPES, pH 7.0, 500 mM NaCl, 1 mM TCEP, 10% glycerol, 0.5 mM phenylmethylsulfonyl fluoride (PMSF) and EDTA-free protease inhibitor tablets (Roche)) and lysed by sonication. Following centrifugation, the clarified lysate of His 6 -MBP-LbuCas13a or His 6 -SUMO-LbuCas13a was applied to a HiTrap Ni-NTA column (GE Healthcare) and eluted over a linear imidazole (0.01–0.3 M) gradient. TEV or SUMO protease was added to remove the MBP or SUMO tag, respectively, and the protein was dialyzed overnight at 4 °C. LbuCas13a was further purified by ion exchange using a HiTrap SP column (GE Healthcare) and, in the case of the MBP-tagged construct, by an MBPTrap HP column (GE Healthcare). Finally, size-exclusion chromatography (SEC) was performed using a Superdex 200 16/600 column (GE Healthcare) in buffer containing 20 mM HEPES, pH 7.0, 200 mM KCl, 10% glycerol and 1 mM TCEP. Peak fractions were pooled, concentrated and flash frozen in aliquots with liquid nitrogen. The codon-optimized sequence encoding residues 2–467 of TtCsm6 (also known as TTHB152) was cloned into the 1S expression vector as an N-terminal His 6 -SUMO-tagged protein with a TEV cleavage site (pET_His6-SUMO-TEV-TtCsm6). The protein was expressed in BL21(DE3) cells, as described previously 29 . Cell pellets were resuspended in lysis buffer (20 mM HEPES, pH 8.0, 500 mM KCl, 5 mM imidazole and 1 mM TCEP) supplemented with EDTA-free protease inhibitor tablets (Roche) and lysed by sonication. Following centrifugation, the clarified lysate was mixed with HIS-Select Nickel Affinity Gel (Sigma) at 4 °C, and the resin was washed with lysis buffer. The protein was eluted with 250 mM imidazole. TEV protease was added to remove the His-SUMO tag, and the protein was dialyzed overnight at 4 °C against buffer containing 25 mM HEPES, pH 7.5, 150 mM NaCl, 5% (vol/vol) glycerol and 1 mM TCEP. The protein was centrifuged to remove aggregates, concentrated and subjected to SEC on a Superdex 200 16/600 column (GE Healthcare). Peak fractions were pooled, concentrated to ~600 µM and flash frozen in liquid nitrogen. Working stocks were prepared by diluting the protein in buffer containing 20 mM HEPES, pH 7.5, 200 mM KCl, 5% (vol/vol) glycerol and 1 mM TCEP and flash freezing in small aliquots with liquid nitrogen. The codon-optimized DNA sequence encoding EiCsm6 was synthesized and cloned into a pET expression vector encoding His 6 -tagged EiCsm6 with a TEV cleavage site (ENLYFQG) by Genscript. EiCsm6 was expressed and purified by Shanghai ChemPartner. 
Briefly, the plasmid encoding His-EiCsm6 was transformed into BL21(DE3) Star cells, which were grown in LB broth at 37 °C to an optical density (OD) of ~0.6. Expression was induced with 0.5 mM IPTG, and cultures were transferred to 18 °C for 16 h. Cells were collected by centrifugation, and pellets were resuspended in lysis buffer (20 mM HEPES, pH 7.5, 500 mM KCl and 5 mM imidazole) supplemented with EDTA-free protease inhibitors (Roche) and lysed by sonication. The protein was immobilized on a 5-ml Ni-NTA fast-flow column, washed with lysis buffer and eluted with a step gradient of imidazole, with fractions at 16%, 50% and 100% elution buffer (20 mM HEPES, pH 7.5, 500 mM KCl and 500 mM imidazole) collected for further purification. The protein was incubated with TEV protease for 32 h at 25 °C to remove the His 6 tag. Then, the protein was reapplied to the Ni-NTA column and washed with 20 mM HEPES, pH 7.5, and 500 mM KCl, and untagged protein was eluted with 20 mM imidazole. The protein was further purified using SEC in buffer containing 20 mM HEPES, pH 7.5, 500 mM KCl and 5% (vol/vol) glycerol. Peak fractions were collected and concentrated to ~15 mg ml –1 and flash frozen in aliquots with liquid nitrogen. Oligonucleotides RNA reporter oligonucleotides are composed of either a pentacytosine sequence (C5) for Cas13a–Csm6 or Csm6-only assays or a pentauridine sequence (U5) for Cas13a-only assays, and are labeled with a 5′-fluorescein and 3′-Iowa Black moiety; they were ordered HPLC-purified from IDT in bulk (250 nmol to 1 µmol) to avoid batch-to-batch variation in basal fluorescence. For direct comparisons of reactions done with different activators or target concentrations, the same lot of reporter from IDT was always used. Cas13a crRNAs, GJK_073 and R004 , were ordered as desalted oligonucleotides from IDT (25-nmol scale), and crRNAs targeting different regions of the SARS-CoV-2 genome (516, 517, 528, 604, 612, 542, 546, 564, 569, 588 and 596) were ordered as desalted oligonucleotides from Synthego (5-nmol scale). Unmodified and 2′-modified A 4 >P oligonucleotides were synthesized by IDT (250-nmol scale). Sequences of RNA oligonucleotides are listed in Supplementary Table 3 . The set of eight crRNAs bound to LbuCas13a (604, 612, 542, 546, 564, 569, 588 and 596) in Figs. 3b–d and 4d,e is able to detect all published SARS-CoV-2 strains in silico (51,631 complete SARS-CoV-2 genome sequences were downloaded from NCBI under the taxonomy ID 2697049 as of 4 March 2021). The inclusivity for each guide is listed in Supplementary Table 4 . Preparation and source of SARS-CoV-2 RNA targets An in vitro transcribed RNA corresponding to a fragment of the SARS-CoV-2 genome was used as a target in Fig. 2 and Extended Data Fig. 4 . A gene fragment (gBlock) corresponding to nucleotides 27,222–29,890 of the SARS-CoV-2 Wuhan-Hu-1 variant genome (MN908947.2) was PCR amplified with a 5′ primer bearing an extended T7 promoter sequence (GTCGAAA TTAATACGACTCACTATAGG ) before separating and extracting the template on an agarose (2% wt/vol) gel (0.5× TAE buffer) and further purifying it by phenol–chloroform extraction (pH 8.0) and ethanol precipitation. The highly purified and RNase-free template was used in a high-yield in vitro transcription reaction containing 1× transcription buffer (30 mM Tris-Cl, 25 mM MgCl 2 , 0.01% (vol/vol) Triton X-100 and 2 mM spermidine, pH 8.1), 5 mM of each NTP, 10 mM DTT, 1 µg ml –1 pyrophosphatase (Roche) and 100 µg ml –1 T7 RNA polymerase (purified in-house) incubated at 37 °C for 4 h.
The reaction was quenched with the addition of 25 U of RNase-free DNase (Promega) at 37 °C for 30 min before the addition of two volumes of acidic phenol and subsequent phenol–chloroform extraction. The RNA was then ethanol-precipitated and flash frozen for storage at −80 °C. The sequence of the resulting transcript is provided in Supplementary Table 3 . Twist synthetic SARS-CoV-2 RNA control 2 (102024) was used as a target in Fig. 3a and Extended Data Fig. 5 . Extracted, externally validated SARS-CoV-2 genomic RNA from B.E.I. Resources (NIAID/NIH) was used in Figs. 3b–d and 4d and Extended Data Figs. 6 – 8 for better comparability to previous studies (lots 70034085 and 70034826) 21 . This reagent was deposited by the Centers for Disease Control and Prevention and was obtained through B.E.I. Resources, NIAID, NIH: Genomic RNA from SARS-Related Coronavirus 2, Isolate USA-WA1/2020, NR-52285. LbuCas13a–Csm6 plate reader assays In Fig. 1 and Extended Data Fig. 3 , LbuCas13a–TtCsm6 reactions contained 40 nM LbuCas13a, 20 nM crRNA, 100 nM TtCsm6, 100–200 pM RNA target and 200 nM C5 reporter, with either Csm6 activator or DEPC-treated water added (no activator control). The reactions were performed at 37 °C in buffer containing 20 mM HEPES (pH 6.8), 50 mM KCl, 5 mM MgCl 2 , 100 µg ml –1 bovine serum albumin (BSA), 0.01% Igepal CA-630 and 2% (vol/vol) glycerol. The LbuCas13a–crRNA complex was assembled at a concentration of 1 µM LbuCas13a and 500 nM crRNA for 15 min at room temperature. Twenty-microliter plate reader assays were started by mixing 15 µl of a master mix containing the LbuCas13a–crRNA complex and TtCsm6 in buffer with 5 µl of a second master mix containing target RNA, reporter and the Csm6 activator in buffer. The protein master mix was equilibrated to room temperature for ~15–20 min before the reaction. Fluorescence measurements were made every 2 min on a Tecan Spark plate reader ( λ ex , 485 nm; λ em , 535 nm), and z position was optimized to a well containing 200 nM C5 reporter in buffer. For experiments in Fig. 1e , the reaction was initiated as in Fig. 1d with 1 µM of the A 4 -U 6 activator and allowed to proceed until it reached a plateau, at which point 1 µl of either 20 µM A 4 -U 6 , 4 nM target RNA or 2 µM TtCsm6 was added to double the amount of each reagent in the reaction. For controls, 1 µl of the reaction buffer was added instead. The assay plate was then returned to the plate reader to continue the assay. In Figs. 2b and 3 , LbuCas13a–TtCsm6 reactions contained 50 nM LbuCas13a, 50 nM crRNA, 100 nM TtCsm6, 1 U µl –1 murine RNase inhibitor (New England Biolabs), 2 µM TtCsm6 activator and 200 nM C5 reporter, with either target RNA or DEPC water added. The reaction buffer used for these assays contained 20 mM HEPES, pH 6.8, 50 mM KCl, 5 mM MgCl 2 and 5% (vol/vol) glycerol in DEPC-treated, nuclease-free water (Fisher Scientific or Invitrogen) supplemented with 1 U µl –1 murine RNase inhibitor, as previously described 21 . Reactions were started by mixing 15 µl of the protein master mix containing LbuCas13a, crRNA and TtCsm6 with 5 µl of the activator/reporter master mix containing TtCsm6 activator, target RNA and a C5 reporter. In Figs. 2b,c and 3a , measurements were taken on a Biotek plate reader at 37 °C ( λ ex , 485 nm; λ em , 528 nm), with a z position of 10.0 mm. In Fig. 3c,d , measurements were taken on a Tecan Spark plate reader at 37 °C ( λ ex , 485 nm; λ em , 535 nm), with a z position optimized to a well containing 200 nM reporter in buffer.
For the 20-replicate experiments in Fig. 3d , the z position and gain settings on the Tecan Spark plate reader were closely matched between datasets (gain, 75; z position, 18,962–18,966 µm) to allow for greater consistency for comparison. For LbuCas13a–EiCsm6 assays in Fig. 2c and Extended Data Fig. 4 , the same buffer and concentrations of LbuCas13a, crRNA and reporter were used as for the LbuCas13a–TtCsm6 assays, except 10 nM EiCsm6 and 0.5 µM EiCsm6 activator were added instead of TtCsm6 and its activator. Twenty-microliter reactions were executed as described for LbuCas13a–TtCsm6 experiments on a Biotek plate reader, with measurements taken every 2 or 3 min. All plate reader assays were performed in low-volume, flat-bottom, black 384-well plates with a non-binding surface treatment (Corning, catalog no. 3820). The experiments in Extended Data Fig. 4 used a similar plate type without a non-binding surface treatment (Corning, catalog no. 3821). Fluorescence of Csm6 reactions was normalized where indicated by dividing all values by the initial value at t = 0 min ( F / F 0 ) for each replicate, except in Fig. 3c,d and Extended Data Figs. 7 and 8 , where they were normalized to t = 6 min ( F / F t = 6 ) to allow for ~5 min of temperature equilibration in the plate reader at the start of the assay. Optically clear ABsolute qPCR Plate Seals (Thermo Fisher Scientific, catalog no. AB1170) were used to seal the assay plates when using B.E.I. SARS-CoV-2 genomic RNA as the target. All LbuCas13a–Csm6 plate reader experiments in Figs. 1 , 2 and 3a,c and Extended Data Figs. 1 , 3 , 4 and 6 were performed in triplicate. Normalized fluorescence was determined by dividing each replicate by its initial value at 0 min ( F / F 0 ) or 6 min ( F / F t = 6 ) and then calculating the mean and s.e.m. of three replicates. Direct activation of TtCsm6 by A 4 >P oligonucleotides For direct activation experiments, 100 nM TtCsm6 in reaction buffer (20 mM HEPES, pH 6.8, 50 mM KCl, 5 mM MgCl 2 and 5% (vol/vol) glycerol in DEPC-treated, nuclease-free water) with 1 U µl –1 murine RNase inhibitor was mixed with varying concentrations of an A 4 >P oligonucleotide. Measurements were taken at 37 °C every 2 min in a Tecan Spark plate reader ( λ ex , 485 nm; λ em , 535 nm), with z height optimized to a well containing 200 nM reporter in buffer. All reactions were performed in triplicate. Modeling of the Cas13–Csm6 detection reaction A kinetic scheme of chemical reactions was created and populated with known kinetic rates and equilibrium constants. Kinetic rates were used where known; where only equilibrium constants were known, the forward rates were assumed to be 1 nM –1 × s –1 , and a reverse rate was chosen to produce the known equilibrium constant. Rates and equilibrium constants were similar to those reported in previous publications 7 , 12 , 23 , 37 , 38 . For the CARF domain, a K m was known instead of a K d , so the CARF domain’s K cat was subtracted from the reverse rate to produce an approximate K d . The Cas13 background cleavage rate was set to the background rate observed in purified LbuCas13a preparations. This kinetic scheme was then converted into a system of ordinary differential equations modeling the rate of change in the concentration of each reaction component as a function of time and the concentrations of the components using Mathematica.
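Purely as an illustration, a drastically simplified version of such a scheme can also be written as mass-action ODEs and integrated in open-source tools, with scipy standing in for Mathematica's NDSolve. All species names, rate constants and starting concentrations below are invented placeholders, not the published values (the full scheme and starting conditions are those of Supplementary Note 1), and activator degradation by Csm6 is omitted for brevity.

```python
# Minimal sketch (illustrative only): a simplified Cas13 -> activator -> Csm6
# -> reporter cascade as mass-action ODEs. All rate constants are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, kc, ks = 1e-3, 1e-4, 1e-3, 1e-5, 1e-4   # nM^-1 s^-1 (assumed)

def rhs(t, y):
    T, C, Cact, A, Aact, S, Sact, R, F = y
    act_cas13 = k1 * T * C                  # target RNA binds and activates Cas13
    cut_activ = k2 * Cact * A               # active Cas13 uncages the Csm6 activator
    act_csm6 = k3 * Aact * S                # freed activator switches on Csm6
    cut_rep = (kc * Cact + ks * Sact) * R   # both nucleases cleave the reporter
    return [-act_cas13, -act_cas13, act_cas13,
            -cut_activ, cut_activ - act_csm6,
            -act_csm6, act_csm6,
            -cut_rep, cut_rep]

# Illustrative starting concentrations (nM): target, Cas13, active Cas13,
# protected activator, freed activator, Csm6, active Csm6, reporter, signal.
y0 = [1e-3, 50.0, 0.0, 2000.0, 0.0, 100.0, 0.0, 200.0, 0.0]
sol = solve_ivp(rhs, (0.0, 2000.0), y0, method="LSODA", max_step=1.0)
print(f"cleaved reporter at t = 2,000 s: {sol.y[-1, -1]:.2f} nM")
```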
A numerical solution to the system of ordinary differential equations was created with a time step of 0.001 s and a total time of 2,000 s using Mathematica's NDSolve at different target RNA concentrations and K cat values for the Csm6 CARF domain. The full kinetic scheme, system of ordinary differential equations and the starting conditions used are provided in Supplementary Note 1 in the Supplementary Information . LbuCas13a direct detection experiments and analysis These assays were performed and analyzed as described previously 21 with the following modifications. Either Twist synthetic SARS-CoV-2 RNA or externally validated B.E.I. SARS-CoV-2 genomic RNA was used as the target RNA. Data were collected on a Tecan Spark plate reader at 37 °C with a gain of 87 and a z position optimized to a well in the plate containing 400 nM reporter in buffer. Fluorescence was measured every 2 min for up to 2 h ( λ ex , 485 nm and 10- or 20-nm bandwidth; λ em , 535 nm and 20-nm bandwidth). Linear regression and pairwise comparison of slopes using ANCOVA were done using GraphPad Prism (version 9.1.2), considering replicate y values as individual data points 21 . For determination of the limit of detection at 20 min, we used the first 20 min of measurements in the assay. All assays were done in triplicate. Plots showing mean normalized fluorescence were generated using the same normalization method as for the LbuCas13a–TtCsm6 plate reader assays ( F / F 0 or F / F t = 6 ). Liquid chromatography–mass spectrometry analysis The degradation of cA 4 , A 4 >P or single-fluoro A 4 >P was examined by incubating 5 µM TtCsm6 with 25 µM activator at 37 °C for 30 min in a total volume of 15 μl. The reaction was quenched by adding 1 µl of 0.5 M EDTA and heating at 95 °C for 5 min. Samples were centrifuged to remove any aggregated TtCsm6 protein. Oligonucleotide samples were then analyzed using a Synapt G2-Si mass spectrometer that was equipped with an electrospray ionization (ESI) source and a BEH C18 ionKey (length, 50 mm; inner diameter, 150 μm; particle size, 1.7 μm; pore size, 130 Å) and connected in line with an Acquity M-class ultra-performance liquid chromatography system (UPLC; Waters). Acetonitrile, formic acid (Fisher Optima grade, 99.9%) and water purified to a resistivity of 18.2 MΩ·cm (at 25 °C) using a Milli-Q Gradient ultrapure water purification system (Millipore) were used to prepare mobile phase solvents. Solvent A was 99.9% water/0.1% formic acid, and solvent B was 99.9% acetonitrile/0.1% formic acid (vol/vol). The elution program consisted of isocratic flow at 1% B for 2 min, a linear gradient from 1% to 99% B over 2 min, isocratic flow at 99% B for 5 min, a linear gradient from 99% to 1% B over 2 min and isocratic flow at 1% B for 19 min at a flow rate of 1.5 μl min –1 . Mass spectra were acquired in the negative ion mode and continuum format, operating the time-of-flight mass analyzer in resolution mode, with a scan time of 1 s, over the range of mass-to-charge ratios ( m / z ) from 100 to 5,000. MS data acquisition and processing were performed using MassLynx software (version 4.1, Waters). Limit of detection analysis for triplicate LbuCas13a–TtCsm6 experiments For determination of the limit of detection using the LbuCas13a–TtCsm6 assay data in Fig.
3c , an unpaired t -test with Welch’s correction was used to compare mean fluorescence ( F / F t = 6 , fluorescence divided by fluorescence measurement at t = 6 min) of three replicates to that of the control replicates without target RNA using GraphPad Prism (version 9.1.2). The limit of detection was determined using the triplicate data collected at 20 min and 118 min as time points for direct comparison to LbuCas13a detection assays. Comparisons with P ≤ 0.05 were considered significant. A one-tailed P value is shown because any values lower than the no-target control were not considered significant. Limit of detection analysis for 20-replicate LbuCas13a-TtCsm6 experiments For all 20-replicate experiments, samples were analyzed in batches of 10, because five experimental samples and five negative controls (0 copies per µl of RNA) were started simultaneously using a multichannel pipette. Reactions were set up as described above. Data for individual replicates were normalized by dividing fluorescence measurements by the fluorescence value at t = 6 min . Analysis was performed using Python (version 3.8.1) at the 16-, 30- and 60-min time points. At each time point, the mean and standard deviation of the negative controls within each batch were determined and used to fit a normal distribution. The distribution of negative control results was then used to calculate the probability of seeing a point equal to or greater than each experimental sample using the survival function of the normal distribution. This value is equivalent to a one-tailed P value associated with a null hypothesis in which the experimental value is not greater than the value of the negative control. The results show the number of experimental samples of 20 replicates that could be distinguished from the negative control distribution using a P value cutoff of 0.05. These cutoffs represent a predicted 5% false-positive rate. We did not perform multiple hypothesis correction on this analysis to mirror the analysis that would be performed in a clinical setting, where an individual’s sample would not be corrected based on the number of samples run, but where type 1 error (false-positive rate) would instead be controlled by setting an appropriate P value cutoff. The full time-course data used in this analysis are in Extended Data Fig. 7 , and the improved signal to noise from normalization at t = 6 min is shown in Extended Data Fig. 8 . Experiments using the compact fluorescence detector Premixed 60-µl reactions of the FIND-IT assay were loaded directly into inlets leading to the ~15-µl imaging chambers of a microfluidic chip using a pipette until the chambers and surrounding tubing were completely filled. The custom chips were fabricated from poly(methyl methacrylate) and pressure-sensitive adhesive. Reactions were assembled using the same conditions and reagent concentrations as the reactions in Fig. 3c,d , except a total reaction volume of 60 µl was used. The RNA sample constituted 10% of the total reaction volume in all reactions with extracted RNA from clinical samples or extracted B.E.I. SARS-CoV-2 genomic RNA. For control reactions, water was added instead of target RNA. The reaction chamber temperature was maintained at 37 °C with ~1–2 °C variation between runs. Reactions were imaged every 10 s for 30–60 min using the system camera gain of 2 dB and exposure setting of 150 ms or 100 ms. Light-emitting diode (LED) excitation was strobed in synchrony with the camera exposure to avoid photobleaching between data points. 
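As a concrete illustration of the 20-replicate limit-of-detection analysis described above (the published version used Python 3.8.1 and is archived at the repository listed under Code availability), the sketch below fits a normal distribution to a batch's negative controls and scores each experimental replicate with the survival function. All fluorescence values are made up, and the use of the sample standard deviation (ddof=1) is an assumption.

```python
# Sketch of the batch-wise detection call: fit a normal distribution to the
# negative controls, then compute one-tailed P values for each sample.
import numpy as np
from scipy.stats import norm

# Normalized fluorescence (F / F_t=6) at one time point; values invented.
negatives = np.array([1.02, 0.98, 1.05, 1.01, 0.99])   # 0 copies per µl controls
samples = np.array([1.45, 1.12, 1.38, 1.03, 1.50])     # experimental replicates

mu, sigma = negatives.mean(), negatives.std(ddof=1)    # ddof=1 is an assumption

# Survival function = P(value >= sample) under the negative-control
# distribution, i.e. a one-tailed P value against the no-target null.
p_values = norm.sf(samples, loc=mu, scale=sigma)
for s, p in zip(samples, p_values):
    print(f"F/F6 = {s:.2f}  P = {p:.3g}  positive = {p <= 0.05}")
```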
For imaging the ~15-μl sample chambers, we require a fairly large field of view (FOV) and a modest numerical aperture (NA), enabling a large depth of focus without images of adjacent sample wells overlapping. To accomplish this in a relatively low-cost, compact device, we designed a custom system using a pair of eyepieces (Edmund Optics, 66-208 and 66-210), yielding a system with an NA of 0.09, a FOV diameter of 12.0 mm and a magnification of 0.54 (chosen to match the sensor size of the Thorlabs CS165MU1 camera to the FOV; sampling at ≥Nyquist is unnecessary in this ‘light-bucket’ application). The overall system is compact, with a nominal track length (sample to camera) of ~75 mm (Fig. 4a,b ). Fluorescence filters were from Chroma Technologies (ET470/40x, T495lpxr and ET535/70m), with excitation provided by a 965-mW, 470-nm LED (Thorlabs M470L4), providing a maximum of ~225 mW into the 12-mm-diameter sample FOV in an epi-illumination Kohler geometry. Custom control of the imaging hardware was implemented in MATLAB (2020a), using Thorlabs drivers and SDK (ThorCam) to control the camera acquisition and serial communication to an Arduino Bluefruit Feather board to electronically trigger the LED illumination. To accurately compare signals between two sample chambers, we used a prospective correction method to account for non-uniform illumination and background effects 39 . Briefly, the measured pixel intensity of reaction chambers I meas ( x ) is related to actual signal I real ( x ) through $$I_{{\textrm{meas}}}\left( x \right) = S\left( x \right) \times \left( {B\left( x \right) + I_{{\textrm{real}}}\left( x \right)} \right)$$ (1) Here, S is a linear scaling factor that models distortions to an image due to illumination non-uniformity, and B is an illumination-dependent background signal that accounts for scattering and background fluorescence from the reaction chambers. Before actual measurements, we determined S and B by acquiring images of blank reaction chambers filled with 1× reaction buffer (20 mM HEPES, pH 6.8, 50 mM KCl, 5 mM MgCl 2 and 5% (vol/vol) glycerol in DEPC-treated, nuclease-free water) and a fluorescent slide, respectively. Experimental images were processed according to equation (1) to retrieve actual signals from each channel. The tone, brightness and contrast of two images shown in Fig. 4d were adjusted across the entire image using Adobe Photoshop 2021 (version 22.1.1). Unprocessed images are provided in the Source Data for Fig. 4 . Detection of positive signal in clinical samples First, illumination-corrected fluorescence values were divided by their initial measurement ( F / F 0 ) so that the clinical sample and background control curves (no target RNA added) start at the same value. Then, the normalized fluorescence of the clinical sample was divided by the normalized fluorescence of the paired background control at 20- and 40-min time points of the assay. This is referred to as the ‘relative fluorescence change’ of a sample compared to its corresponding control. This analysis was performed for every positive and negative clinical sample from the IGI testing lab that was run on the detector. Then, to determine the threshold for detection, we determined the mean and standard deviation of the relative fluorescence change for the negative clinical samples ( n = 6). We considered positive signal in a reaction as being over two standard deviations higher than the mean of the negative samples. 
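A minimal sketch tying together equation (1) and the two-standard-deviation detection rule is shown below. The inversion of equation (1) follows directly from the text, while the helper names, example numbers and array shapes are invented for illustration; the actual analysis code is the archived version cited under Code availability.

```python
import numpy as np

def correct(I_meas, S, B):
    """Invert equation (1) per pixel: I_real = I_meas / S - B."""
    return I_meas / S - B

# Mean chamber intensities over time for one clinical sample and its paired
# no-target control, already illumination-corrected; numbers are invented.
sample = np.array([100.0, 128.0, 171.0, 240.0])    # t = 0, 20, 40, 60 min
control = np.array([100.0, 103.0, 107.0, 111.0])

# 'Relative fluorescence change': normalize each curve to F/F0, then divide.
rel_change = (sample / sample[0]) / (control / control[0])

# Threshold: mean + 2 s.d. of the relative change across negative samples.
neg = np.array([1.01, 0.99, 1.03, 0.98, 1.02, 1.00])   # n = 6, invented
threshold = neg.mean() + 2 * neg.std(ddof=1)
print("positive at 40 min:", rel_change[2] > threshold)
```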
This corresponds to signal higher than ~97.5% of the negative clinical sample distribution. Determination of C t values for clinical samples Human respiratory swab specimens, including mid-turbinate (MT) nasal, oropharyngeal (OP), nasopharyngeal (NP), and combined MT/OP swabs, were collected, processed and subjected to RNA extraction and qRT–PCR using primers for SARS-CoV-2 in the IGI testing laboratory, as described previously 30 . Briefly, sample C t values were determined using primers for the nucleocapsid protein ( N ) gene, spike protein ( S ) gene and open reading frame 1ab ( O RF 1ab ) locus of SARS-CoV-2 and for MS2 phage RNA (extraction process control) 30 . The average C t value of the SARS-CoV-2 targets for each sample was determined, rounded to the nearest whole number, and plotted in Fig. 4f . All positive samples, with C t values plotted in Fig. 4f , originated from combined MT/OP samples. The individual C t values for all positive samples tested are provided in the Source Data file for Fig. 4 . Determination of a standard curve for C t values using known concentrations of RNA Twofold dilutions of TaqPath COVID-19 Combo Kit positive control (Thermo Fisher Scientific) were made in nuclease-free water to generate indicated RNA concentrations. Five microliters of the diluted positive control was added to the multiplexed COVID-19 real-time PCR assay (Thermo Fisher Scientific), with a final reaction volume of 12.5 μl. Samples were amplified on a QuantStudio 6 per the manufacturer’s protocol and analyzed using the QuantStudio 6 Design and Analysis software, version 4.2.3 (Thermo Fisher Scientific). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The plasmid used to express MBP-tagged LbuCas13a (p2CT-His-MBP-Lbu_C2c2_WT) is available from Addgene (83482). Plasmids used for expression of SUMO-tagged LbuCas13a (pGJK_His-SUMO-LbuCas13a), His-tagged EiCsm6 (pET28a_His-TEV-EiCsm6) and His-SUMO-tagged TtCsm6 (pET_His6-SUMO-TEV-TtCsm6) will be available from Addgene (172488, 172487 and 172486). Sequences of RNA oligonucleotides are provided in Supplementary Table 3 . Source data are provided with this paper. Code availability All custom data analysis for the limit of detection analysis for 20-replicate FIND-IT experiments and the code used for mathematical modeling of the Cas13–Csm6 reaction and illumination correction of images are available at . An archived version is available at (ref. 40 ). Change history 06 October 2021 A Correction to this paper has been published: | Frequent, rapid testing for COVID-19 is critical to controlling the spread of outbreaks, especially as new, more transmissible variants emerge. While today's gold standard COVID-19 diagnostic test, which uses qRT-PCR—quantitative reverse-transcriptase-polymerase chain reaction (PCR)—is extremely sensitive, detecting down to one copy of RNA per microliter, it requires specialized equipment, a runtime of several hours and a centralized laboratory facility. As a result, testing typically takes at least one to two days. A research team led by scientists in the labs of Jennifer Doudna, David Savage and Patrick Hsu at the University of California, Berkeley, is aiming to develop a diagnostic test that is much faster and easier to deploy than qRT-PCR. It has now combined two different types of CRISPR enzymes to create an assay that can detect small amounts of viral RNA in less than an hour. 
Doudna shared the 2020 Nobel Prize in Chemistry for invention of CRISPR-Cas9 genome editing. While the new technique is not yet at the stage where it rivals the sensitivity of qRT-PCR, which can detect just a few copies of the virus per microliter of liquid, it is already able to pick up levels of viral RNA—about 30 copies per microliter—sufficient to be used to surveil the population and limit the spread of infections. "You don't need the sensitivity of PCR to basically catch and diagnose COVID-19 in the community, if the test's convenient enough and fast enough," said co-author David Savage, professor of molecular and cell biology. "Our hope was to drive the biochemistry as far as possible to the point where you could imagine a very convenient format in a setting where you can get tested every day, say, at the entrance to work." The researchers will report their results online August 5 in the journal Nature Chemical Biology. Several CRISPR-based assays have been authorized for emergency use by the Food and Drug Administration, but all require an initial step in which the viral RNA is amplified so that the detection signal—which involves release of a fluorescent molecule that glows under blue light—is bright enough to see. While this initial amplification increases the test's sensitivity to a similar level as qRT-PCR, it also introduces steps that make the test more difficult to carry out outside of a laboratory. The UC Berkeley-led team sought to reach a useful sensitivity and speed without sacrificing the simplicity of the assay. "For point of care applications, you want to have a rapid response so that people can quickly know if they're infected or not, before you get on a flight, for example, or go visit relatives," said team leader Tina Liu, a research scientist in Doudna's lab at the Innovative Genomics Institute (IGI), a CRISPR-focused center involving UC Berkeley and UC San Francisco scientists. Aside from having an added step, another disadvantage of initial amplification is that, because it makes billions of copies of viral RNA, there is a greater chance of cross-contamination across patient samples. The new technique developed by the team flips this around and instead boosts the fluorescent signal, eliminating a major source of cross-contamination. The amplification-free technique, which they term Fast Integrated Nuclease Detection In Tandem (FIND-IT), could enable quick and inexpensive diagnostic tests for many other infectious diseases. "While we did start this project for the express purpose of impacting COVID-19, I think this particular technique could be applicable to more than just this pandemic because, ultimately, CRISPR is programmable," Liu said. "So, you could load the CRISPR enzyme with a sequence targeting flu virus or HIV virus or any type of RNA virus, and the system has the potential to work in the same way. This paper really establishes that this biochemistry is a simpler way to detect RNA and has the capability to detect that RNA in a sensitive and fast time frame that could be amenable for future applications in point of care diagnostics." The researchers are currently in the process of building such a diagnostic using FIND-IT, which would include steps to collect and process samples and to run the assay on a compact microfluidic device.
Employing tandem Cas proteins To remove target amplification from the equation, the team employed a CRISPR enzyme—Cas13—to first detect the viral RNA, and another type of Cas protein, called Csm6, to amplify the fluorescence signal. Cas13 is a general purpose scissors for cutting RNA; once it binds to its target sequence, specified by a guide RNA, it is primed to cut a broad range of other RNA molecules. This target-triggered cutting activity can be harnessed to couple detection of a specific RNA sequence to release of a fluorescent reporter molecule. However, on its own, Cas13 can require hours to generate a detectable signal when very low amounts of target RNA are present. Liu's insight was to use Csm6 to amplify the effect of Cas13. Csm6 is a CRISPR enzyme that senses the presence of small rings of RNA and becomes activated to cut a broad range of RNA molecules in cells. To boost Cas13 detection, she and her colleagues designed a specially engineered activator molecule that gets cut when Cas13 detects viral RNA. A fragment of this molecule can bind to and trigger Csm6 to cut and release a bright fluorescent molecule from a piece of RNA. Normally, the activator molecule is quickly broken down by Csm6, thus limiting the amount of fluorescent signal it can generate. Liu and her colleagues devised a way to chemically modify the activator so that it is protected from degradation and can supercharge Csm6 to repeatedly cut and release fluorescent molecules linked to RNA. This results in a sensitivity that is 100 times better than the original activator. "When Cas13 gets activated, it cleaves this small activator, removing a segment that protects it," Liu said. "Now that it's liberated, it can activate lots of different molecules of that second enzyme, Csm6. And so, one target recognized by Cas13 doesn't just lead to activation of its own RNA-cutting ability; it leads to the generation of many more active enzymes that can each then cleave even more fluorescent reporters." The team of researchers also incorporated an optimized combination of guide RNAs that enables more sensitive recognition of the viral RNA by Cas13. When this was combined with Csm6 and its activator, the team was able to detect down to 31 copies per microliter of SARS-CoV-2 RNA in as little as 20 minutes. The researchers also added extracted RNA from patient samples to the FIND-IT assay in a microfluidic cartridge, to see if this assay could be adapted to run on a portable device. Using a small device with a camera, they could detect SARS-CoV-2 RNA extracted from patient samples at a sensitivity that would capture COVID-19 infections at their peak. "This tandem nuclease approach—Cas13 plus Csm6—combines everything into a single reaction at a single temperature, 37 degrees Celsius, so it does not require high temperature heating or multiple steps, as is necessary for other diagnostic techniques," Liu said. "I think this opens up opportunities for faster, simpler tests that can reach a comparable sensitivity to other current techniques and could potentially reach even higher sensitivities in the future." The development of this amplification-free method for RNA detection resulted from a reorientation of research within IGI when the pandemic began toward problems of COVID-19 diagnosis and treatment. Ultimately, five labs at UC Berkeley and two labs at UCSF became involved in this research project, one of many within the IGI. 
"When we started this, we had hopes of creating something that reached parity with PCR, but didn't require amplification—that would be the dream," said Savage, who was principal investigator for the project. "And from a sensitivity perspective, we had about a ten thousandfold gap to jump. We've made it about a thousandfold; we've driven it down about three orders of magnitude. So, we're almost there. Last April, when we were really starting to map it out, that seemed almost impossible." | 10.1038/s41589-021-00842-2 |
Biology | Gene activity database could spare thousands of mice | Nature Communications (2019). DOI: 10.1038/s41467-019-10601-6 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-019-10601-6 | https://phys.org/news/2019-06-gene-database-thousands-mice.html | Abstract Understanding how immune challenges elicit different responses is critical for diagnosing and deciphering immune regulation. Using a modular strategy to interpret the complex transcriptional host response in mouse models of infection and inflammation, we show a breadth of immune responses in the lung. Lung immune signatures are dominated by either IFN-γ and IFN-inducible, IL-17-induced neutrophil- or allergy-associated gene expression. Type I IFN and IFN-γ-inducible, but not IL-17- or allergy-associated signatures, are preserved in the blood. While IL-17-associated genes identified in lung are detected in blood, the allergy signature is only detectable in blood CD4 + effector cells. Type I IFN-inducible genes are abrogated in the absence of IFN-γ signaling and decrease in the absence of IFNAR signaling, both independently contributing to the regulation of granulocyte responses and pathology during Toxoplasma gondii infection. Our framework provides an ideal tool for comparative analyses of transcriptional signatures contributing to protection or pathogenesis in disease. Introduction The host response during infection and inflammation, in both mouse models and human disease is complex, with a spectrum of responses having been reported across infections with intracellular pathogens, viruses, fungi, or allergy, often driven and dominated by specific groups of cytokines, activating protective responses or pathology 1 , 2 , 3 , 4 , 5 , 6 . There are few transcriptional studies or data resources on the global immune responses spanning different experimental models of diseases across distinct types of immune responses. While tissue transcriptomic approaches have been applied widely to different experimental models of disease individually, this has been reported to a lesser extent for the blood 7 , 8 . Conversely, in humans, transcriptomic approaches have been applied to whole blood or peripheral blood mononuclear cells (PBMC) 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , however, little is known about how immune responses in blood are reflected at disease sites. Moreover, application of whole blood transcriptomics has not always discerned signatures of disease, revealed only upon transcriptomic analysis of purified cells or PBMC 17 , 18 . Many published whole blood disease-specific signatures are dominated by IFN-inducible signatures and those attributable to innate immune responses, as broadly described in both experimental models and human diseases 9 , 10 , 11 , 12 , 15 , 19 . Although mainly dominated by a type I IFN transcriptional signature, some are accompanied by a cluster of genes that have been attributed to IFN-γ signaling, classically referred to as IFN-stimulated genes (ISGs) 20 , 21 . Cytokines, chemokines, signaling, and cell membrane molecules can also form part of the IFN signature 22 . How type I IFN and IFN-γ-inducible genes are expressed across a spectrum of different diseases and how the blood transcriptional signature reflects the tissue response are unclear. 
Effects of type I IFN are clearly not limited to antiviral responses, but also play a role in bacterial 3 , 4 , 9 , 11 , 23 , 24 , helminth, allergy 25 , and other inflammatory responses, with beneficial or detrimental effects 3 , 4 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 . IFN-γ is key to activating cell-mediated immune responses to control intracellular pathogens 29 , 34 , but dampens allergic and anti-helminthic responses 5 , 34 , and if uncontrolled can lead to immune pathology 35 , 36 . A blood signature of allergy, asthma, or helminth responses in humans, reflecting T H 2-type responses or a T H 17-type 37 , 38 signature in inflammation, has as yet not been reported. Whether such signatures have been sought but are not detectable in blood, or have simply not yet been investigated, is unclear. We purposefully chose pathogens and an allergen to yield a wide breadth of different types of immune response in the lung, representative of T H 1, type I IFN, T H 17, and T H 2 responses, hypothesizing that distinct responses underlying the immune response in each model could be determined by the transcriptional signature of unseparated lung cells. To this end, we have used bioinformatics approaches, including modular and cellular deconvolution analyses, to decipher the global transcriptional response in the lungs of mice infected or challenged with a broad spectrum of infectious pathogens, including parasites, bacteria, viruses, fungi, or allergens, and also to determine to what extent each of these responses is preserved in the blood. We demonstrate a unique global transcriptional signature for each of the different diseases against the controls in both lung and blood. The lung transcriptional signatures showed a gradation, ranging from IFN-inducible gene clusters, to those associated with granulocyte/neutrophil/IL-17 dominated genes, to responses dominated by expression of genes encoding T H 2 cytokines, mast cells and B cells, with only preservation of some signatures in the blood. Unique and overlapping regulatory functions of both type I IFNs and IFN-γ signaling pathways during infection with Toxoplasma gondii , and a role for both IFNs in regulating the T H 17/neutrophil-induced pathology, were observed. Our study provides a useful resource of the global differential immune responses in both blood and tissue across a broad spectrum of diseases, also providing translational knowledge on how the blood signature reflects the local tissue immune response. This resource is now easily accessible with the use of an online webapp: . Results Transcriptional signatures across diseases To determine the global changes in the host response to infection and allergens, we performed RNA-based next-generation sequencing (RNA-Seq) on RNA isolated from both lung and blood, at the pre-determined peak of the response of mice infected with T. gondii ; influenza A virus (influenza); respiratory syncytial virus (RSV); acute Burkholderia pseudomallei ( B. pseudomallei ); Candida albicans (C. albicans); or challenged with the allergen house dust mite (HDM), to capture the breadth of responses from T H 1, to type I IFN, to T H 17, to T H 2 (Fig. 1a ; Supplementary Fig. 1a ; Supplementary Data 1a ). Principal component analysis of the RNA-Seq data depicted a unique global transcriptional signature for each of the different diseases as compared to controls (PC1) in lung (Fig. 1b ), and to a lesser extent in blood (Fig. 1c ).
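The paper does not specify the exact PCA implementation, so the following is only a generic sketch of how sample-level PCA of RNA-Seq data is typically computed, here with scikit-learn on log-transformed counts; the count matrix and its dimensions are placeholders.

```python
# Generic sketch of sample-level PCA on RNA-Seq counts (placeholder data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(5000, 24)).astype(float)  # genes x samples
logged = np.log2(counts + 1).T                               # samples x genes

pca = PCA(n_components=2)           # PCA mean-centers the data internally
coords = pca.fit_transform(logged)  # PC1/PC2 coordinates for each sample
print("variance explained:", pca.explained_variance_ratio_)
```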
The total differentially expressed genes in all datasets are shown in Supplementary Data 1b . PC2, representing the second largest variation in the data, separated the different diseases in the lung, with B. pseudomallei infected mice positioned in between fungal and other infections (Fig. 1b ). RSV-infected and HDM allergen-challenged mice, although distinct from each other, clustered more closely to the controls (Fig. 1b, c ). The blood transcriptional signatures were also investigated in a distinct set of mice infected with Plasmodium chabaudi chabaudi ( P. chabaudi , malaria), murine cytomegalovirus (MCMV), Listeria monocytogenes (Listeria) and chronic B. pseudomallei (Supplementary Fig. 1a ), and were also shown to cluster away from the controls, with each disease clustering independently, although transcriptional signatures of P. chabaudi and MCMV clustered closely to each other (Supplementary Fig. 1b ). Fig. 1 Global transcriptional analysis captures differences across infectious and inflammatory diseases. a RNA-seq analysis was performed on lung and blood samples (Supplementary Data 1 ) obtained from experimental mouse models of 6 infectious and inflammatory diseases. b , c Principal component analysis in lung ( b ) and blood ( c ) samples, depicting the variation in the global gene expression profiles across diseases. Principal components 1 (PC1) and 2 (PC2), which capture the greatest variation in gene expression, are shown. Circles and triangles represent lung and blood samples, respectively, empty and filled symbols represent control and disease samples, respectively, and color represents mouse models. d Stacked bar plots depicting in silico immune cell composition of lung and blood RNA-seq samples, derived using the CIBERSORT algorithm based on cellular signatures obtained from ImmuCC. Each bar represents percent fractions for 9 representative cell types for an individual mouse sample, with colors representing the different cell types. White and black bars at the bottom of each plot represent control and disease samples, respectively. ILC innate lymphoid cells, NK cells natural killer cells Full size image To infer the immune cellular composition from transcriptomic data, cellular deconvolution analyses 39 , 40 were first applied to the RNA-Seq dataset obtained from the ImmGen Consortium (GSE109125) on flow cytometry sorted cells, to verify the accuracy of the immune subsets being identified from the deconvolution analysis. Based on this comparative analysis, we grouped the 25 immune cell types from the cellular deconvolution analyses 39 , 40 into a broader set of 9 categories, representing the major immune cell types (Supplementary Fig. 2a ). Application of the validated cellular deconvolution analyses to the lung and blood transcriptional data (Fig. 1a ) identified a dominance of diverse cellular populations in the different diseases (Fig. 1d ). Natural killer (NK) cells were increased in T. gondii and influenza infection in both lung and blood, with only a weaker increase during RSV infection, but were reduced during B. pseudomallei and C. albicans infections, and remained unaltered in HDM allergy (Fig. 1d ). Neutrophils/granulocytes were significantly over-represented in the lungs and blood of mice infected with B. pseudomallei and C. albicans (Fig. 1d ). Cellular deconvolution analysis did not reveal an increase of mast cells or eosinophils in the lungs or blood from HDM-allergen-challenged mice (Fig. 1d ).
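CIBERSORT itself estimates cell fractions with ν-support-vector regression; purely as a simplified stand-in, the sketch below solves the same linear mixing problem with non-negative least squares against a genes × cell-types signature matrix. The signature matrix, the "true" fractions and the bulk profile are all simulated here, not taken from ImmuCC.

```python
# Simplified deconvolution sketch: bulk ≈ signature @ fractions, solved by
# non-negative least squares (CIBERSORT proper uses nu-SVR). Data simulated.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
signature = rng.gamma(2.0, 1.0, size=(500, 9))   # 500 marker genes x 9 cell types
true_frac = np.array([0.30, 0.05, 0.20, 0.10, 0.05, 0.10, 0.05, 0.10, 0.05])
bulk = signature @ true_frac + rng.normal(0.0, 0.05, 500)  # one mixed sample

coef, _ = nnls(signature, bulk)
fractions = coef / coef.sum()     # normalize to percent-style fractions
print(np.round(fractions, 3))     # recovers ~true_frac on this toy example
```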
Although accurate mast cell identification by the deconvolution analysis was confirmed using the ImmGen sorted cell dataset, eosinophils could not be verified using this approach since the ImmGen database lacked this population (Supplementary Fig. 2a ). To determine that the absence of eosinophil detection was not a limitation of the deconvolution analysis, we analyzed bronchoalveolar lavage cells (BAL) from HDM allergen-challenged mice, since this compartment has previously been reported to contain eosinophils after allergen challenge 41 . Indeed, this confirmed an increase in eosinophils in BAL from HDM allergen-challenged mice (Supplementary Fig. 2b ). Overall, these findings demonstrate clear distinctions in the transcriptional signatures and immune cell compositions across a spectrum of infectious and inflammatory diseases, from whole lung and blood. Modular transcriptional signatures across diseases We next applied Weighted Gene Co-expression Network Analysis (WGCNA) 42 , a modular approach, to identify groups of genes co-expressed across the lung and blood samples obtained from the various mouse models of infectious and inflammatory diseases. These groups of genes, termed modules, were derived in an unbiased way based on the transcriptional profiles of the protein-coding genes across all control and disease samples, resulting in 38 modules in lung (L1–L38, Supplementary Data 2 ) and 41 modules in blood (B1–B41; Supplementary Data 3 ). The genes within the modules were functionally annotated using Ingenuity Pathway Analysis (IPA), MetaCore, and Gene Ontology (GO) (Supplementary Data 4 and 5 ); those commonly identified by the three methods were retained and validated by further manual curation. Next, these modules were assessed in each dataset in lung (Fig. 2a , left panel) and in blood (Fig. 2b , left panel), using QuSAGE 43 to identify over-abundant (red) and under-abundant (blue) modules in disease datasets against respective controls (Fig. 2a, b , left panels). Cell types associated with each module were identified by comparing cell-type-specific signatures derived for 10 cell types using the ImmGen Ultra Low Input (ULI) RNA-seq dataset (Supplementary Figs. 2a and 3a , Supplementary Data 6 ) against the genes within each module, using a hypergeometric test (Fig. 2a, b , right panels). Fig. 2 Modular transcriptional signatures define a spectrum of immune responses across diseases. a , b Fold enrichment in disease compared to controls for modules of co-expressed genes derived using WGCNA in lung (L1–L38) ( a ) and blood (B1–B41) ( b ) samples. Module name indicates biological processes associated with the genes within the module, and number of genes within each module are shown. Fold enrichment scores were derived using QuSAGE, with red and blue circles indicating the cumulative over- or under-abundance of all genes within the module, for each disease compared to the respective controls. Color intensity of the dots represents the degree of perturbation, indicated by the color scale. Size of the dots represents the relative degree of perturbation, with the largest dot representing the highest degree of perturbation within the plot. Within each disease, only modules with FDR p -value < 0.05 were considered significant and depicted here. Cell types associated with genes within each module were identified using cell-type-specific signatures obtained for 10 cell types from the ImmGen ULI RNA-seq dataset (Supplementary Fig. 3 ).
Cell-type enrichment was calculated using a hypergeometric test, with only FDR p -value < 0.05 considered significant and depicted here. Color intensity represents significance of enrichment. GCC glucocorticoid, K-channel potassium channel, Ox phos oxidative phosphorylation, TM transmembrane, Ubiq ubiquitination. c , d Fold enrichment for in vitro-derived T helper cell signatures for T H 1 cells treated with IL-27 (T H 1 + IL-27), T H 2 cells, and T H 17 cells in lung ( c ) and blood ( d ) samples across diseases. Fold enrichment scores were derived using QuSAGE, with red and blue circles indicating the cumulative over- or under-abundance of all genes within the module, for each disease compared to the respective controls. Color intensity of the dots represents the degree of perturbation, indicated by the color scale. Size of the dots represents the relative degree of perturbation, with the largest dot representing the highest degree of perturbation within the plot. Within each disease, only T helper cell signatures with FDR p -value < 0.05 were considered significant and depicted here Full size image The lung transcriptional signatures showed a gradation, ranging from IFN-inducible gene clusters (IFN-γ/Type II and Type I IFN), to those associated with granulocyte/neutrophil/IL-17 dominated genes, to responses dominated by expression of genes encoding T H 2 cytokines, mast cells and B cells. Two modules of interferon-related genes were identified—type I IFN inducible genes (Module L5), which included genes associated with innate immune responses, and classical ISGs, such as Ifnb1 , Ifit1 and 3, Oas1a, 2 and 3, Oasl1 and 2 , Mx1, Stat2, Irf7 and Irf9 ; and the IFN-γ-inducible (Type II) gene module (L7), which included Ifng , Irf1 , 2 and 8 , and other downstream targets Il12rb1 and b2 , Tap1 and 2, and genes associated with APC function such as H2-MHC molecules, and host defence, such as Gbps (Fig. 2a left panel; Supplementary Data 2 ). The majority of genes in each respective module (L5 and L7) were attributed to either Type I and/or Type II using the interferome database 44 (Supplementary Data 2 ). Both L5 and L7 modules were over-abundant in lungs from mice infected with T. gondii , influenza, RSV, acute B. pseudomallei , albeit to different levels, and to a very low extent in HDM allergen-challenged mice. Conversely, in lungs of C. albicans infected mice, both L5 and L7 modules were under-abundant. Genes such as Irf7, Irf9 , Mx1, Ifit3, Oas1a (L5) were induced most highly in lungs of virally infected mice (influenza and RSV) (Supplementary Data 2 ). The type I IFN signaling module (L5) was accompanied by the Leukocytes/Myeloid/Signaling (L3, over-abundant across all diseases) module and both showed enrichment for cell types of the macrophage, dendritic cell and granulocyte lineages, with L3 also being enriched for innate lymphoid cells (ILCs) and αβ T cells (Fig. 2a , right panel). In contrast to the type I IFN-inducible module (L5), the IFN-γ-dominated module (L7) showed cell-type enrichment for αβ T cells, dendritic cells, and ILCs. The lungs from mice infected with C. albicans were dominated by modules encoding myeloid cells (L10, L12–L15), granulocytes (L10, L11), Il17 and IL-17-associated cytokines (L11), and Il1b /IL-1 signaling (L12, L13). However, IL-17 (including increased expression of Il17a, Il1a and Il22 ) and granulocyte-associated (L10 and L11) responses were most pronounced during B.
pseudomallei infection, with this infection also exhibiting increased IFN gene signatures (L5 and L7) in contrast to the C. albicans infection. These IL-17 (L11) and Il1b /IL-1 signaling (L12, L13) associated modules, and modules L10 and L14, showed significant enrichment for granulocytes and/or myeloid cells (Fig. 2a , right panel), indicating an increase in these cell types upon infection with B. pseudomallei and C. albicans , in keeping with the cellular deconvolution analysis (Fig. 1d ). The HDM allergen-challenged mice generally exhibited weaker responses in the lung, except for the L26 module containing genes associated with allergic manifestations, which dominated this response (Fig. 2a , left panel). This dominant module (L26) in HDM allergy contained increased expression of genes such as Il4, Il5, Il13 , and Il33 , and the eosinophil-attracting chemokines, Ccl11 and Ccl24 , in keeping with the cellular deconvolution analysis (Supplementary Fig. 2b ), and genes associated with mast cell function 45 , in keeping with the significant enrichment for mast cells in this module (Fig. 2a , right panel) (Supplementary Data 2 ). The HDM allergy lung response was also accompanied by an over-abundance of immunoglobulin genes (L25) enriched for B cells (Fig. 2a ), which was absent or under-represented across all the other diseases. Independent modular and cell enrichment analysis in blood also revealed common and reciprocal signatures across diseases (Fig. 2b ; Supplementary Data 3 and 5 ), although the response in the blood appeared weaker than in the lung (Fig. 2a, b ). IFN signaling was observed across two modules: B11, containing genes including Gbps, Ido, Il10, Oas1a and Oas1g; and B14, containing Ifng , Gbps , H2 , Ifits , Irf1 and 7, Irgm1 and 2 , Mx1 , Oas3 , Oasl1 , 2 , Stat1 and Stat2 , Tap1 and 2 (Fig. 2b , left panel; Supplementary Data 3 ). Both modules were over-abundant in blood of mice infected with T. gondii , influenza, RSV, acute B. pseudomallei , albeit to different levels, with module B14 being enriched for T cells and ILCs, but module B11 showing no enrichment for specific cell types, possibly indicating a broader distribution across many blood cell types (Fig. 2b , right panel). Blood signatures from mice infected with malaria, MCMV, Listeria , and chronic B. pseudomallei also demonstrated a strong contribution of both IFN signaling modules (B11 and B14) (Supplementary Fig. 1c ). In contrast to the lung, modular derivation directly from the blood did not reveal a detectable Il17a and IL-17-associated cytokine gene module, although myeloid cell/granulocyte-associated gene modules (B16–B18) were present in blood of acute B. pseudomallei , C. albicans , and T. gondii and to a lesser extent in influenza and RSV infected mice (Fig. 2b , left panel). This is in keeping with the cellular enrichment for macrophages and granulocytes (Fig. 2b , right panel). Additionally, modular derivation directly from the blood did not reveal a module showing perturbation of allergy-associated genes that had been detected in the lung (Fig. 2a , L26). Modules representing immunoglobulin or B cell-related genes were either unchanged or under-abundant in the blood (B28, B38), except for module B30, which contained B cell and myeloid-associated genes, and was over-abundant during HDM-allergen challenge and influenza infection (Fig. 2b ). Under-representation of modules associated with T and some B cell functions was observed, for the most part, in the blood across all diseases (Fig.
2b , left panel; Supplementary Fig. 1c , right panel; B28; and B36–B41), in keeping with previous studies 7 , 9 , 46 . The Cytotoxic/T cells/NK/ Tbx21/Eomes lung and blood modules (L35 and B15) showed unexpected discordance from the Ifng modules (L7 and B14), with acute B. pseudomallei infection (Fig. 2 ; Supplementary Fig. 4 ) giving rise to this apparent discordance in the modules (Fig. 2a, b ; Supplementary Fig. 4 ; Supplementary Data 2 and 3 ). The Cytotoxic/T cells/NK/ Tbx21/Eomes modules (L35 and B15) were most abundant during infection with T. gondii and influenza but significantly under-abundant during B. pseudomallei infection (Fig. 2a , left panel), whereas the Ifng module was abundant in all three infections. This dissociation of Tbx21 and Ifng expression (Supplementary Fig. 4 ) is in keeping with a previous report 47 . Overall, these findings, using both RNA-Seq (Fig. 2 ) and microarray platforms (Supplementary Figs. 5 and 6 ), demonstrate distinct modular transcriptional patterns in the lungs from the infected/challenged mice reflective of T helper (T H )1-type responses ( T. gondii , influenza, RSV; and more weakly B. pseudomallei infection), T H 17-type ( B. pseudomallei and C. albicans infection) and T H 2-type (HDM allergy) in vivo (Fig. 2a ). To test whether in vitro-differentiated T H 1(+IL-27), T H 2 and T H 17 cells reflect the in vivo responses, we assessed their in vitro-derived T H cell 48 signatures (Supplementary Data 7 ) in lung (Fig. 2c ) and blood (Fig. 2d ) RNA-Seq samples across all diseases. In vitro-derived T H 1(+IL-27) populations showed enrichment in blood and lungs from T. gondii , influenza, RSV, B. pseudomallei infected mice; T H 2 cells showed dominance in the lungs from HDM allergy challenged mice, but not in blood from HDM allergy challenged mice; and T H 17 cells showed a very strong enrichment in the lungs from B. pseudomallei infected mice, with weaker enrichments in C. albicans , influenza infected and HDM-allergen-challenged mice, and in blood of C. albicans infected mice (Fig. 2d ). Collectively this demonstrates that in vitro-derived T H cell subsets express genes reflective of the local in vivo responses to distinct pathogens or allergens, and that each T H cell subset is represented in specific diseases. Fidelity of some lung transcriptional profiles in blood Little has been reported on how different immune responses in blood are reflected at the site of disease. To assess the similarity between the co-expression patterns of genes in lung and blood, the reproducibility and robustness of gene network topology was tested, as assessed by Z summary scores indicative of the degree of preservation, with scores >10 considered strongly preserved (Fig. 3a, b ; Supplementary Data 8 and 9 ). The cell cycle/DNA processes modules (L6 and B10) were highly preserved across tissues in their co-expression pattern (Fig. 3a, b , Supplementary Fig. 7 ). The IFN modules were also significantly conserved between lung and blood (L5, L7 and B11, B14) (Fig. 3a–d ; Supplementary Fig. 7 ). The lung granulocyte/myeloid modules (L11–L14) were only moderately preserved in blood (Fig. 3a, c ; Supplementary Fig. 7 ), whereas the equivalent blood granulocyte/myeloid modules (B17 and B18) were strongly preserved in lung (Fig. 3b, d ; Supplementary Fig. 7 ). Testing the lung modular signature on the blood dataset did reveal that these T H 17 and granulocyte (L11 and L14) modules were actually preserved in blood (Fig.
3a, c ), although the expression of Il17a itself was extremely low in the blood (Supplementary Data 2 and 3 ). The lung allergy module (L26) was not preserved in the blood (Fig. 3a and Supplementary Data 8 ), in keeping with the inability to detect increased expression of Il4 , Il5 , Il13 , and Ccl11 (Supplementary Data 2 and 3 ), and showed weak correlations between lung and blood for the fold changes of genes within this module (Fig. 3c ). Overall, the global modular lung signature showed lower correlation in blood samples (Fig. 3c , right panel) than when the blood modular signature was assessed in lung samples (Fig. 3d , right panel). These findings highlight that certain infections, such as T. gondii and B. pseudomallei , are better preserved between lung and blood than others such as RSV and HDM allergy, and that certain immune responses, such as the IFN response, are better reflected between the lung and blood upon infection, suggesting which immune pathways can become systemic and those which remain local to the insult. Fig. 3 Comparison between the transcriptional profiles in lung and blood across diseases. a , b Modular preservation to assess the reproducibility and robustness of network topology of the lung modules in blood samples ( a ), and of the blood modules in lung samples ( b ) across the control and disease samples across all mouse models. Z summary scores indicative of the degree of preservation were calculated in WGCNA using permutation testing, with scores >10 considered strongly preserved. Each circle represents a module indicated by the module number, with colors assigned in WGCNA for visual distinction. c , d Assessment of fold enrichment of the lung modules in blood samples ( c ), and of the blood modules in lung samples ( d ) in disease compared to controls. Red and blue circles indicate the cumulative over- or under-abundance of all genes within the module, for each disease compared to the respective controls. Color intensity of the dots represents the degree of perturbation, indicated by the color scale. Size of the dots represents the relative degree of perturbation, with the largest dot representing the highest degree of perturbation within the plot. Within each disease, only modules with FDR p -value < 0.05 were considered significant and depicted here. Pearson correlation of fold changes for genes within the module (disease samples compared to respective controls) between lung and blood is shown, with dark red and blue squares representing positively and negatively correlated gene perturbations, respectively. Significance was calculated using a two-tailed probability of t -test values for each correlation, and adjusted for multiple tests within each disease. Only FDR p -values < 0.05 were considered significant and depicted here. GCC glucocorticoid, K-channel potassium channel, Ox phos oxidative phosphorylation, TM transmembrane, Ubiq ubiquitination. e Volcano plots depicting differential gene expression for all genes, in HDM allergy compared to controls, in sorted CD4 T cells (total CD4 + , CD4 + CD44 hi , and CD4 + CD44 lo ) from lung and blood samples. Significantly differentially expressed genes (log 2 fold change >1 or <−1, and FDR p -value < 0.05) are represented as red (up-regulated) or blue (down-regulated) dots. The numbers of down- (in blue) or up-regulated (in red) genes are listed in the volcano plot.
Below each volcano plot, the significantly differentially expressed genes among the 121 genes of the L26 Allergy module are shown in red (up-regulated) or blue (down-regulated). Heatmaps are shown for log2 gene expression values for Il4, Il5 and Il13. Gene expression values were averaged and scaled across the row to indicate the number of standard deviations above (red) or below (blue) the mean, denoted as row Z-score. Lung allergy signatures are not conserved in blood Since the lung allergy module (L26), which includes the Il4, Il5 and Il13 genes, was not preserved in blood of HDM-allergen-challenged mice (Figs. 2a, b and 3a, c), we tested this module in the airways (BAL) and blood from an alternative nasal-sensitization mouse model of allergy to HDM (also in an alternative vivarium; CML, WB, Imperial College) 41, where, similarly, the allergy module L26 was detected in BAL but not blood (Supplementary Fig. 8; Supplementary Data 10). It has been suggested that certain T-cell signatures of disease can only be detected in T cells purified from blood and not in whole blood or PBMC 17, 18. To determine whether we could detect any genes from the Allergy module (L26) in purified CD4+ T cells, or in activated effector CD4+CD44hi versus CD4+CD44lo T cells from the blood, RNA-Seq was applied to flow-cytometry-purified blood populations and compared to the equivalent populations from the lungs of HDM-allergen-challenged mice. TH2-specific genes were detected at high levels in CD4+ T cells and activated effector CD4+CD44hi T cells from the lungs of HDM-allergen-challenged mice (Fig. 3e; Supplementary Data 11), and now also in the blood of HDM-allergen-challenged mice (Fig. 3e; Supplementary Data 11). These findings are in keeping with the cellular attribution of the TH2-type cytokine genes over-expressed within the lung Allergy module (L26) to αβ T cells and ILCs, with the majority of genes attributable to other cell types in the lung, perhaps explaining the absence of this module in the blood (Supplementary Fig. 9). Conservation of lung IFN-inducible signatures in blood Since the lung Type I IFN/Ifit/Oas (L5) and Ifng/Gbp/Antigen presentation (L7) modules were conserved in the blood (Fig. 3a, c; Supplementary Fig. 7), we examined the "hub" genes, i.e., the genes most representative of the transcriptional profile of the module and most connected with all other genes within each module, in lung and blood (Supplementary Data 12 and 13; Fig. 4). Over-abundance of both Type I IFN (L5)- and IFN-γ (L7)-inducible genes was observed in T. gondii, influenza, RSV and B. pseudomallei infected mice, correlating between lung and blood, but not in HDM-allergen-challenged or C. albicans infected mice. Strikingly, high expression of both modules was observed in the lungs of T. gondii infected mice, with all genes within these modules correlating highly across tissues (Fig. 4). Fig. 4 Gene networks of IFN-associated lung modules in lung and blood across diseases. Gene networks for the L5 (Type I IFN/Ifit/Oas) and L7 (Ifng/Gbp/Antigen presentation) modules depicting the "hub" genes, representing genes with high intramodular connectivity, i.e., genes most connected with all other genes within the module. For each module, a representative network is shown for Toxoplasma with gene names, followed by smaller gene networks for the 6 diseases.
Each gene is represented as a circular node, with edges representing correlation between the gene expression profiles of the two respective genes. Color of the node represents the log2 fold change of the gene for each disease compared to its respective controls, in the lung (left panel) and blood (right panel) samples for both modules. Pearson correlation coefficients (r) for fold changes for all genes (disease samples compared to respective controls) in the L5 and L7 modules between lung and blood are shown. Type I IFN and IFN-γ signaling in T. gondii infection Because of the striking preservation of the type I IFN and IFN-γ pathways in blood and lungs of mice infected with T. gondii, we decided to further investigate their role in the global immune response to this pathogen, as well as their contribution to disease. To this end, Wild type C57BL/6, Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice were infected with T. gondii, and tissues (lung, blood, liver and spleen) were harvested at peak disease for RNA-Seq and histology. PCA plots from the RNA-Seq data showed the largest separation between tissues, over and above the separation observed between uninfected and infected mice or between Wild type and the different IFN-receptor-deficient mice, with blood and spleen being the most closely associated (Fig. 5a). To reveal within-tissue differences, PCA plots were constructed for each tissue separately, where the uninfected mice of all genotypes separated from the T. gondii infected mice and accounted for the largest variation in the data (PC1) (Fig. 5b). The second largest variation (PC2) reflected Ifngr deficiency upon infection with T. gondii (Fig. 5b). To investigate the changes in the modular response that may result from each IFN-receptor deficiency upon infection, we assessed the previously defined modular lung (from Fig. 2a) and blood (from Fig. 2b) signatures across all tissues from these mice, compared to Wild type uninfected controls for each respective tissue (Fig. 5c). Fig. 5 Changes in the transcriptional profiles across tissues following T. gondii infection, in the absence of Type I and/or II IFN signaling. a, b Principal component analysis across all tissues (lung, blood, liver and spleen) (a) and individually in each tissue (b), depicting the variation in the global gene expression profiles across tissue type, disease and host genotype. Principal components 1 (PC1) and 2 (PC2), which describe the greatest variation in gene expression, are shown. Shape represents the different tissues; empty and filled symbols represent control and disease samples, respectively; and color represents host genotype: Wild type, Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice. c Modular transcriptional profiles of all tissues assessed using the lung modules in Wild type and IFN receptor knockout (KO) mice after infection. Red and blue circles indicate the cumulative over- or under-abundance of all genes within the module for each Wild type and IFN receptor KO disease group compared to the respective Wild type controls in each tissue. Color intensity of the dots represents the degree of perturbation, indicated by the color scale. Size of the dots represents the relative degree of perturbation, with the largest dot representing the highest degree of perturbation within the plot. Within each group, only modules with FDR p-value < 0.05 were considered significant and are depicted here.
GCC, glucocorticoid; K-channel, potassium channel; TM, transmembrane; Ubiq, ubiquitination. In T. gondii infected Ifnar−/− mice, decreased modular over-abundance was observed in the Type I IFN module (L5), including lower gene expression for Ifit, Oas and Irf7 (Supplementary Data 14–18), in the lungs, blood, liver and spleen, in comparison to Wild type mice, with the greatest contrast observed in the blood and spleen (Fig. 5c). Strikingly, Ifngr−/− and Ifnar−/− × Ifngr−/− mice showed a much greater decrease in abundance of Type I IFN (L5)-inducible genes, including classical ISGs such as Ifit, Oas and Irf7, across all tissues (Fig. 5c; Supplementary Data 14–18). There was also a marked decrease in abundance of IFN-γ-inducible genes within the L7 module in the Ifngr−/− and Ifnar−/− × Ifngr−/− mice across all tissues, but in Ifnar−/− mice this decreased abundance was only prominent in the spleens (Fig. 5c). An under-abundance of the NK cell module (L36) and the Cytotoxic/T cells/ILC/Tbx21/Eomes/B cells module (L35) was observed in the blood of Ifnar−/−, Ifngr−/− and Ifnar−/− × Ifngr−/− mice (Fig. 5c). The IL-17 pathway/granulocytes (L11), myeloid/granulocyte function (L10) and inflammation/IL-1 signaling/myeloid cells (L12) modules were more over-abundant in lungs and blood from Ifnar−/−, Ifngr−/− and Ifnar−/− × Ifngr−/− mice than in the Wild type comparison, with only small changes observed in liver and spleen (Fig. 5c). Additionally, increased abundance of the immunoglobulin module (L25) was observed in the spleen in the absence of IFN-γ signaling (Fig. 5c). Similar findings were observed when the blood-derived modular signature (from Fig. 2b) was assessed in all tissues from T. gondii infected Wild type or IFN-receptor-deficient mice. A decreased modular over-abundance was observed in the equivalent IFN blood modules (B11 and B14) in the Ifngr−/− and Ifnar−/− × Ifngr−/− mice across all tissues (Supplementary Fig. 10). Ifnar−/− mice again showed a slight decrease in the IFN-inducible modules (B11 and B14), including genes such as Oas and Mx1, as well as a decrease in the cytotoxic/T cells/ILC/Tbx21/Eomes module (B15) (Supplementary Fig. 10; Supplementary Data 14–18). In keeping with the results obtained from the lung-derived modules, the blood-derived modules also showed increased abundance, albeit weaker, of the granulocyte (B16) module in the Ifnar−/−, as well as the Ifngr−/− and Ifnar−/− × Ifngr−/−, mice, being more pronounced in the blood and lung (Supplementary Fig. 10). Further analysis of our data against the Interferome database (Supplementary Data 2, 3 and 14a, b) revealed that, although many modules contain genes associated with type I and type II IFN, the most significant enrichment was indeed found in the L5 (Type I IFN/Ifit/Oas) and L7 (Ifng/Gbp/Antigen presentation) modules (Supplementary Data 19), in keeping with our annotation. Our data demonstrate partial dependence of genes in the L5 module on type I IFN signaling, and complete dependence of all the genes in this module on IFNGR signaling during T. gondii infection (Fig. 5c). This may be explained, first, by our findings that type I and type II IFNs induce some genes in common (Supplementary Data 2, 3 and 14), and second, by our data showing that genes classically attributed to type I IFN-inducible signaling by the literature and the Interferome database are actually strongly dependent on IFNGR signaling (Fig. 5c).
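The modular abundance comparisons summarized in Fig. 5c rest on the QuSAGE enrichment test described in Methods. As an illustration, a minimal sketch of such a test in R is shown below; `expr` (a log2 expression matrix, genes × samples), `labels` (sample group assignments) and `modules` (a named list of module gene vectors) are hypothetical placeholder names, not objects from the study code.

```r
# Minimal sketch of a QuSAGE modular enrichment test (see Methods);
# all object names are hypothetical placeholders.
library(qusage)

labels <- c(rep("WT.ctrl", 5), rep("KO.inf", 5))       # example group labels per sample
qs  <- qusage(expr, labels, "KO.inf-WT.ctrl", modules) # per-module activity vs control
res <- qsTable(qs, number = length(modules))           # fold enrichment and p-values
res$FDR <- p.adjust(res$p.Value, method = "BH")        # Benjamini-Hochberg correction
res[res$FDR < 0.05, ]                                  # modules passing the stated threshold
```

Only modules passing the FDR < 0.05 threshold would be displayed, matching the criterion stated in the figure legends.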
The module L7 was not affected by abrogation of IFNAR signaling but was completely abrogated in the absence of IFNGR signaling (Fig. 5c), demonstrating its dependence on Type II (IFN-γ) signaling, in keeping with our annotation. A negative role for IFNAR signaling in T. gondii infected mice was observed, with an increase in module L11 (IL-17 pathway/granulocytes) (Fig. 5c), although a more profound negative effect was observed in the absence of IFNGR signaling, with increases in modules L10 (myeloid/granulocyte function) and L11 (IL-17 pathway/granulocytes), as well as in the L25 (immunoglobulin h/k enriched) module in the spleen only (Fig. 5c). Type I IFNs have been reported to be constitutively produced in low quantities in the absence of infectious insult and yet exert profound effects 49, with different sets of genes being affected under tonic or Type I IFN-stimulated conditions 22. To further investigate changes in tonic type I IFN signaling, we examined global and modular effects in uninfected Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice. Indeed, within the Type I IFN/Ifit/Oas module (L5), lungs from uninfected Ifnar−/− mice showed decreased expression, as compared to Wild type controls, of a subset of genes that included Ifit1, Ifit3, Oas1a, Oas2, Oas3, Irf7, Irf9 and Mx1 (Fig. 6a, b; Supplementary Fig. 11; Supplementary Data 15). Upon T. gondii infection, these genes were upregulated in the lungs of Ifnar−/− mice, similarly to Wild type mice, albeit to a lesser extent (Fig. 6b), in keeping with the modular analysis (Fig. 5c). These data suggest that other signaling pathways induced during infection can compensate for type I IFN in the induction of these genes. Strikingly, the upregulation of these type I IFN-inducible genes during infection was totally abrogated in the lungs of the Ifngr−/− and Ifnar−/− × Ifngr−/− mice, demonstrating the dependence of these genes, reported to be induced by type I IFN, on IFN-γ signaling during T. gondii infection (Fig. 6b). In the Ifng/Gbp/Antigen presentation module (L7), decreased expression of a subset of genes, including known IFN-γ-induced genes such as Gbps, Tap1 and MHC molecules, was observed in the lungs of uninfected Ifngr−/− mice and double Ifnar−/− × Ifngr−/− mice (Fig. 6a, b; Supplementary Fig. 11; Supplementary Data 15). Even upon T. gondii infection, these IFN-γ tonically regulated genes were not increased in the lungs of Ifngr−/− and Ifnar−/− × Ifngr−/− mice (Fig. 6a, b). Collectively, these data identify genes under tonic type I IFN and IFN-γ regulation in the lungs (Fig. 6a, b; Supplementary Fig. 11; Supplementary Data 15). Similar findings were observed in the blood, liver and spleen of uninfected Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− versus Wild type uninfected mice (Supplementary Figs. 12–17; Supplementary Data 16–18). Fig. 6 Tonic activity of Type I and/or II IFN signaling in lung within the IFN-associated lung modules. a Correlation of gene expression within the L5 (Type I IFN/Ifit/Oas) and L7 (Ifng/Gbp/Antigen presentation) modules in lung samples, in control mice (control Wild type group (x-axis) compared to each of the control IFN receptor knockout (KO) groups (y-axis)) and in T. gondii infected mice (disease Wild type group (x-axis) compared to each of the disease IFN receptor KO groups (y-axis)). Each dot represents the average log2 gene expression value for a gene.
Dashed gray line within each plot at the 45° slope represents identical expression levels between Wild type and IFN receptor KO groups, with genes above or below the line showing higher expression in the IFN receptor KO or Wild type group, respectively. Linear regression lines with 95% confidence intervals, and Pearson correlation coefficients, are shown for each plot. b Heatmap depicting the log2 gene expression values of the differentially expressed genes between the IFN receptor KO controls and the Wild type controls in lung (Supplementary Fig. 11), in the L5 and L7 modules. Gene expression values were averaged and scaled across the row to indicate the number of standard deviations above (red) or below (blue) the mean, denoted as row Z-score. Dendrogram shows unsupervised hierarchical clustering of genes, with distances calculated using Pearson correlation and clustered using complete linkage. Enrichment scores, calculated on a single-sample basis using GSVA, are shown below the heatmap for the differentially expressed genes in each module. Empty and filled circles represent control and disease samples, respectively, and the color of the circle represents Wild type and IFN receptor KO mice. Mean and standard deviation for each group are shown, with a dashed gray line indicating the mean of the Wild type control group. c Venn diagrams depicting the downregulated genes in each of the IFN receptor KO control groups compared to the Wild type control group, in lung (Supplementary Fig. 11), blood (Supplementary Fig. 12), liver (Supplementary Fig. 14) and spleen (Supplementary Fig. 15), highlighting commonalities in the genes involved in tonic IFN signaling across tissues in the L5 and L7 modules. The absence of type I IFN signaling in Ifnar−/− mice resulted in a robust but not absolute decrease in the induction of the hub genes in the Type I IFN/Ifit/Oas module (L5) across all tissues, with the greatest decrease observed in the liver and spleen upon T. gondii infection (Fig. 7). Induction of the genes in this network was abrogated in the absence of IFN-γ signaling during T. gondii infection in Ifngr−/− mice (Fig. 7), similarly to the findings reported above (Fig. 6). The largest decrease in genes in this network of the L5 module was observed in the double Ifnar−/− × Ifngr−/− mice, confirming the requirement for both Type I IFN and IFN-γ signaling in the induction of type I IFN-inducible genes (Fig. 7). Analysis of the network of hub genes in the IFN-γ-inducible genes module (L7) demonstrated an absolute requirement for IFN-γ signaling, but not type I IFN signaling, for the induction of these genes upon T. gondii infection (Supplementary Fig. 18). Fig. 7 Type I and/or II IFN signaling regulates the expression of genes within the lung Type I IFN/Ifit/Oas (L5) module following T. gondii infection. Gene networks depicting the "hub" genes in the lung L5 module, representing genes with high intramodular connectivity, i.e., genes most connected with all other genes within the module. Each gene is represented as a circular node, with edges representing correlation between the gene expression profiles of the two respective genes. Color of the node represents the log2 fold change of the gene for Wild type or IFN receptor knockout T.
gondii infected mice compared to Wild type controls, across lung, blood, liver and spleen. Type I IFN and IFN-γ control pathology during infection Consistent with differences at the transcriptional level, greater weight loss was observed during T. gondii infection in all IFN-receptor-deficient mice, as compared to infected Wild type mice (data not shown). Since the Ifnar−/− mice also showed signs of increased pathology upon T. gondii infection, despite the partial rescue of the Type I IFN-inducible response in these mice (Figs. 5c, 6 and 7), we investigated the modular response further to understand the mechanism underpinning the exacerbated disease in these mice. An increase in certain neutrophil-associated genes from the IL-17 pathway/granulocytes module (L11) was observed in all IFN-receptor-deficient mice upon infection (Clusters i and ii, Fig. 8a, b), although Il17a itself was not detectable in T. gondii infected tissues, except in the absence of IFN-γ signaling (Supplementary Data 14). In keeping with this, Il1b, contained in module L13, was also elevated in blood and other tissues of Ifnar−/− mice (Fig. 5c; Supplementary Data 14–18). On the other hand, genes in Cluster iii, such as Pf4 (reported as a chemokine for neutrophils and monocytes), were increased only in the lungs of Ifngr−/− and Ifnar−/− × Ifngr−/− T. gondii infected mice (Fig. 8a). In addition to the observed increase in neutrophil-associated genes (Figs. 5c and 8a, b), we show an increase in the proportion of granulocytes/neutrophils in the lungs and blood of Ifnar−/−, Ifngr−/− and Ifnar−/− × Ifngr−/− mice, as compared to Wild type mice infected with T. gondii, using cellular deconvolution analyses of our RNA-Seq data (Fig. 8c). Fig. 8 Type I and/or II IFN signaling differentially regulates granulocyte-associated genes and neutrophil recruitment during T. gondii infection. a, b Heatmaps depicting the log2 expression values of all genes within the L11 (IL-17 pathway/granulocytes) lung module in lung (a) and blood (b) samples, across Wild type and IFN receptor knockout (KO) mice, and control and disease samples. Gene expression values were averaged and scaled across the row to indicate the number of standard deviations above (red) or below (blue) the mean, denoted as row Z-score. Dendrograms show unsupervised hierarchical clustering of genes, with distances calculated using Pearson correlation and clustered using complete linkage. Four clusters of genes within the heatmaps are highlighted with roman numerals, showing distinct expression patterns across the groups in lung and blood samples. c Stacked bar plots depicting in silico immune cell composition of lung, blood, liver and spleen RNA-seq samples, derived using the CIBERSORT algorithm based on cellular signatures obtained from ImmuCC. Each bar represents percent fractions of 9 representative cell types for an individual mouse sample, with colors representing the different cell types. White and black bars at the bottom of each plot represent control and disease samples, respectively. ILC, innate lymphoid cells; NK cells, natural killer cells. d Representative immunofluorescence confocal micrographs of thick lung sections depicting MPO-positive neutrophils (MPO, cyan) in Wild type and IFN receptor KO mice upon T. gondii infection. Scale bar represents 50 μm.
Quantification of neutrophil numbers per field is shown, with each dot representing one field from one mouse (n = 4–5 fields from n = 4–5 mice per group), with median and 95% confidence interval indicated. Significance was calculated using an unpaired t-test for each IFN receptor KO compared to Wild type; ns, not significant. To verify the differences observed at the transcriptional level in Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice, as compared to Wild type mice infected with T. gondii, we examined the mice using histopathology. In the lungs of infected Wild type mice, mild histiocytic and neutrophilic interstitial pneumonia with mild necrosis of alveolar walls was generally detected (Supplementary Fig. 19a and b). In the lungs of all of the infected IFN-receptor-deficient mice, interstitial pneumonia ranged from mild to moderate, with frequently increased numbers of granulocytes (neutrophils and eosinophils) and/or mononuclear cells (Supplementary Fig. 19a and b). The livers of T. gondii infected Wild type mice presented with multifocal random necrotizing and pyogranulomatous hepatitis. Infected Ifngr−/− mice additionally contained frequent foci of hepatocellular lytic necrosis, whereas Ifnar−/− infected mice presented with multiple large thrombi occluding vessels, with associated hepatocellular coagulative necrosis resulting from ischemic injury, both of which could be associated with the increased neutrophilic activity observed (Supplementary Fig. 19a). Increased numbers of granulocytes were found in the absence of IFNR signaling in all tissues analyzed (lung, liver, spleen) from T. gondii infected mice (Supplementary Fig. 19a and b). This was confirmed by an increase in neutrophil numbers in the lungs of T. gondii infected mice, as shown by staining for myeloperoxidase (MPO) and quantified as the number of MPO-positive cells with neutrophil nuclear morphology, with the greatest effect seen in the absence of IFNGR signaling (Fig. 8d). Increased parasite loads, inferred from parasite RNA read counts, were observed especially in blood and liver in the absence of either IFNAR or IFNGR signaling, although this was more pronounced in the Ifngr−/− mice (Supplementary Fig. 19c), in keeping with our observed decrease in Nos2 gene expression (Supplementary Data 14). It is probable that, in the absence of IFNR signaling, macrophages are unable to control the infection, resulting in increased neutrophil recruitment, in keeping with the increased neutrophil levels seen in blood and lung by deconvolution analysis (Fig. 8c) and the increased granulocyte-associated gene expression (Fig. 5c; Supplementary Fig. 10). It should be noted that the Wild type control mice infected with T. gondii either at NIAID, NIH, or at The Francis Crick Institute showed a similar modular lung and blood transcriptional signature, demonstrating the robust nature of this global signature of infection, generated from a large number of high-quality samples (Supplementary Data 20) across different vivariums (Supplementary Fig. 20). Collectively, our transcriptome and histology data show that type I IFN and IFN-γ signaling are both involved in the control of neutrophilic inflammation during T. gondii infection, likely contributing to the increased pathology seen in the Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice, as compared to Wild type mice upon infection, over and above any type I IFN- and IFN-γ-induced pathways of microbial control.
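The parasite-load inference mentioned above reduces to a simple per-sample summation and rescaling of reads mapped to the T. gondii genome (detailed in Methods). A minimal sketch in R follows; `toxo_counts` (a gene-by-sample matrix of T. gondii-mapped read counts) and `size_factors` (the DESeq2 size factors from the mouse analysis) are hypothetical placeholder names.

```r
# Sketch of the parasite-load proxy: per-sample T. gondii read totals, scaled by
# the library-size factors from the mouse DESeq2 analysis (see Methods).
# `toxo_counts` and `size_factors` are hypothetical placeholders.
raw_lib  <- colSums(toxo_counts)                   # raw T. gondii library size per sample
norm_lib <- raw_lib * size_factors[names(raw_lib)] # normalized library sizes, as in Methods
sort(norm_lib, decreasing = TRUE)                  # samples ranked by inferred parasite load
```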
Discussion We have generated a comprehensive resource of modular transcriptional signatures from various infectious and inflammatory diseases to identify commonalities and differences in the immune response to specific infections or challenges, to aid the discovery of pathways in disease. We show that distinct immune responses in experimental models of TH1, type I IFN, TH17 and TH2 diseases could be determined from the whole transcriptional signature of unseparated lung cells. Type I IFN and IFN-γ pathways showed high expression in lungs of T. gondii, influenza A and RSV infected mice. Pathways driven by IL-17, including granulocyte/neutrophil-associated genes, were abundant in the lungs of mice infected with B. pseudomallei and C. albicans only, and a signature of mast cells and TH2-type cytokines was only abundant in the lungs of mice challenged with HDM allergen. IFN-inducible gene signatures were detectable in the blood-derived modules, similarly across the different diseases, with strongest preservation in T. gondii infected mice, but the IL-17 pathway module was not detectable. Testing the lung module in blood revealed some IL-17 pathway genes, but these did not include Il17 family members. The TH2 allergy module was not preserved in the blood of mice challenged with HDM allergen, but was only detectable in purified blood CD4+ effector T cells. Our observations that TH2-associated genes were only detectable in purified blood CD4+ effector T cells are similar to reported findings that certain disease-specific signatures were only detectable in T cells purified from blood and not from whole blood or PBMC 17, 18. Collectively, these findings may suggest that disease-specific signatures attributable to T cells, or to other cells of low abundance in the blood, may not be revealed in whole blood or PBMC but will require cell purification. However, a TH2-associated gene signature can be found in whole lung, where the immune response to the allergen is generated and there are sufficient effector T cell numbers. In contrast to the difficulty in detecting TH2 allergy and TH17 signatures in the blood, our findings suggest that cells contributing to IFN-inducible signatures are more abundant in the blood, in keeping with the numerous reports that IFN-inducible signatures are detectable in whole blood in human disease 6, 9, 10, 11, 12, 13, 14, 15, 16, 19. Type I IFN-inducible genes normally reflective of viral infections 16, 19 were herein additionally observed in the lungs and blood during bacterial infection with B. pseudomallei, as previously reported during tuberculosis disease 9, 11, 12, 15, 46, and now also, to a very high extent, during T. gondii infection, on a par with influenza infection. For example, we observe genes such as Oas, Mx1 and Ifit, as well as genes not regarded as classic IFN-inducible genes, including cytokines, chemokines, and signaling and cell membrane molecules, forming part of the IFN signature 22. Accompanying this response were IFN-γ-inducible genes such as Irf8, fundamental for an IL-12-driven TH1 response 29, and molecules involved in protection against intracellular bacteria and parasites, such as Gbps and Nos2. The co-existence of IFN-γ and TH17 responses that we observed during B. pseudomallei infection demonstrates the complex immune response to this pathogen, where IFN-γ is essential for protection 50 yet the response is accompanied by increased granulocytes/neutrophils. The absence of type I IFN and IFN-γ-inducible signatures during C.
albicans infection could indicate a lack of induction of the signaling pathways required to induce these molecules, or active inhibition by cytokines or other regulatory factors; for example, inhibition of type I IFN by IL-1 has been reported during Mycobacterium tuberculosis infection 51. Type I IFN blocks IFN-γ-mediated protective responses to intracellular pathogens 4, 51, 52, 53, 54, 55, and yet a number of pathogens elicit both type I IFN-induced gene expression and IFN-γ responses 3, 4, 9. However, it is unclear how type I IFN- and IFN-γ-inducible gene pathways co-operate or regulate each other, especially when expressed at very high levels, as we now show during T. gondii infection in vivo. To examine this relationship between type I IFN- and IFN-γ-mediated regulation of gene expression and function, we infected Ifnar−/−, Ifngr−/− and Ifnar−/− × Ifngr−/− mice with T. gondii. Classic IFN-γ-induced genes encoding antigen processing and presentation functions, such as MHC molecules and Tap1 and Tap2, found in lung module L7, as well as Nos2, were all abrogated in the absence of IFNGR signaling, reinforcing IFN-γ as the driver of this modular response. We demonstrate that induction of genes such as Oas, Mx1 and Ifit, reported to be regulated by type I IFN signaling, is totally dependent on IFN-γ signaling during infection with T. gondii. This may suggest a stronger contribution of IFN-γ signaling to the induction of genes known to be induced by type I IFN in a setting such as T. gondii infection, where Ifng levels are at least three-fold higher in lungs of mice infected with T. gondii as compared to influenza. The expression of IFN-γ-induced Nos2 was increased in livers and spleens of Ifnar−/− mice infected with T. gondii, but this increase was abrogated when IFN-γ signaling was also absent, in Ifnar−/− × Ifngr−/− mice, as seen in Ifngr−/− mice. This demonstrates in vivo blockade of IFNGR signaling by type I IFN during a parasite infection, as has been reported for infections with bacteria such as L. monocytogenes 52 or M. tuberculosis 53, 54, 56. Our findings collectively show the complexity of the relationship between type I IFN and IFN-γ signaling. Although, as previously reported, type I IFN can indeed negatively regulate IFN-γ-induced genes, type I IFN-induced gene expression appears to be only partially dependent on IFNAR signaling, while totally dependent on IFN-γ signaling, during infection with T. gondii. While IFN-γ has been shown to be a dominant cytokine in protection against T. gondii infection 29, 57, type I IFN has also been shown to offer protective effects 58, 59, with both of these IFNs exhibiting microbicidal effects against T. gondii in vitro 60. Mechanisms of protection by IFN-γ against T. gondii infection have been broadly described 61, but it is unclear how type I IFN controls disease. Supportive of a role for type I IFN in protection against T. gondii infection, enhancement of NK cell IFN-γ production by type I IFN has been reported 62. TLR-12 signaling, which promotes type I IFN production by plasmacytoid dendritic cells, has also been shown to increase IL-12 production 63. In both cases, type I IFN would therefore augment TH1 immunity, fundamental for protection against T. gondii infection 29. Here we show that the absence of type I IFN results in increased parasite loads in the blood and liver of T. gondii infected mice, accompanied by exacerbated pathology.
This correlated with increased transcriptional modules encoding granulocyte-related genes and higher neutrophil numbers. Although neutrophils have been shown to dampen T. gondii infection, uncontrolled and very high numbers of neutrophils may contribute to inflammation, further exacerbating pathology and pathogen load, as has been reported during M. tuberculosis infections 64, 65, 66, 67. Tonic activity of the type I IFN signaling pathway has been reported in the absence of infection 8, 22, 49, which we also show here; we additionally show that these genes, including classical ISGs, are highly upregulated during infection with T. gondii. We show here that IFN-γ can also mediate a tonic effect on a small number of genes, such as Gbps, Cxcl9 and genes of the MHC complex, distinct from the ISGs associated with type I IFN signaling. These IFN-γ-induced tonic genes were additionally upregulated upon infection with T. gondii and were totally dependent on IFNGR signaling. The observed tonic IFN-γ signaling may further impact the immune response to pathogens. In conclusion, using transcriptomic analyses of comprehensive datasets generated from in vivo models of infection and inflammation, we have captured a breadth of distinct immune responses. This spectrum consisted of distinct immune response patterns in the lung, ranging from very high IFN-γ expression and type I IFN-inducible gene expression to IL-17-induced neutrophil-dominated signatures and expression of Il4, Il5, Il13 and mast cell-associated genes. Type I IFN and IFN-γ signatures were found to be abundant across all the diseases, albeit at different levels, except during C. albicans infection. IFN-inducible signatures were preserved in blood, with strongest co-expression during T. gondii infection. Our unbiased transcriptomic analyses further revealed that, although genes known to be inducible by type I IFN were decreased in the absence of type I IFN signaling as expected, they were completely abrogated in the absence of IFN-γ signaling, revealing an additional layer of regulation in the IFN-γ-rich environment resulting from T. gondii infection. Additionally, type I IFN and IFN-γ signaling were each shown to play a major role in the regulation of granulocyte responses and in the control of parasite load and pathology during T. gondii infection. These findings, using transcriptomic analyses of blood and whole organ tissue to capture the cellular interactions contributing to changes in gene expression during infection and disease outcome, together with mice deficient in IFNAR and IFNGR signaling, provide a framework for the discovery of pathways of gene regulation in disease. Methods Experimental animals All mice were bred and maintained in specific pathogen-free conditions according to the Home Office UK Animals (Scientific Procedures) Act 1986, unless otherwise stated, and were used at 6–18 weeks of age. C57BL/6J Wild-type mice were bred at the MRC National Institute for Medical Research (NIMR) or The Francis Crick Institute unless otherwise stated. Ifnar−/− mice, originally provided by Matthew Albert (Institut Pasteur, France) 33, and Ifngr−/− mice 68, both on the C57BL/6 background, were inter-crossed to generate double Ifnar−/− × Ifngr−/− mice. All animal experiments were carried out in accordance with UK Home Office regulations unless otherwise stated, under the following project licences: T. gondii infection of wild type control, Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice (for Figs. 5–8 and Supplementary Figs.
10–19), 80/2616 (at The Francis Crick Institute); Influenza A infection, 70/7643 (MRC NIMR); RSV infection, 70/7554 (Imperial College London); B. pseudomallei infection, 70/6934 (London School of Hygiene and Tropical Medicine); C. albicans infection, 70/8811 (MRC NIMR); HDM allergy, 70/7643 (MRC NIMR), P5AF488B4 (The Francis Crick Institute) and 70/7463 (Imperial College London); Aspergillus fumigatus infection, 70/8811 (The Francis Crick Institute); P. chabaudi AS infection, 80/2358 (MRC NIMR) and 70/8326 (The Francis Crick Institute); MCMV infection, 30/2969 (Cardiff University); L. monocytogenes infection, 70/7643 (MRC NIMR); these were approved by the institutions' Ethical Review Panels unless otherwise stated. C57BL/6 mice for T. gondii infection used for module derivation and further analysis (Figs. 1–4 and Supplementary Figs. 1, 4–7) were maintained and infected at an American Association for the Accreditation of Laboratory Animal Care-accredited animal facility at the National Institute of Allergy and Infectious Diseases (NIAID) and housed in accordance with the procedures outlined in the Guide for the Care and Use of Laboratory Animals, under an animal study proposal approved by the NIAID Animal Care and Use Committee. Disease models For T. gondii infection, type II avirulent T. gondii strain ME-49 cysts were obtained from brains of chronically infected C57BL/6 mice (Taconic Biosciences, USA). Cyst preparations were pepsin treated to eliminate potential contamination with host cells, and female C57BL/6 mice were inoculated intraperitoneally (i.p.) with an average of 15 cysts in phosphate-buffered saline (PBS) as described 69. On day 7 post infection, blood and lung samples were collected from individual mice, using uninfected C57BL/6 mice as controls. T. gondii infection was similarly carried out in (1) C57BL/6 wild type mice at the NIAID, NIH (for Figs. 1–4 and Supplementary Figs. 1, 4–7) and (2) C57BL/6 wild type control, Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice (Figs. 5–8 and Supplementary Figs. 10–19) at The Francis Crick Institute, with blood, spleen, liver and lung samples harvested at days 6–7 from infected mice. Whole blood was mixed with Tempus reagent (Life Technologies) at a 1:2 ratio prior to freezing; lung samples were stored in RNAlater (Ambion) at −80 °C until RNA isolation. For Influenza A virus infection, the Influenza A/X-31 (H3N2) strain (a kind gift from Dr. J. Skehel, MRC NIMR) was grown in the allantoic cavity of 10-day-embryonated hen's eggs, stored at −80 °C and titrated on Madin-Darby Canine Kidney (MDCK) cells prior to infection. Female C57BL/6J mice (MRC NIMR) were infected intranasally (i.n.) with 8 × 10^3 TCID50 in 30 μl of PBS. Control uninfected mice received PBS only. Whole blood and lung samples were collected from individual infected and control treated mice on day 6 post infection. For RSV infection, plaque-purified human RSV A2 strain, originally obtained from the American Type Culture Collection (ATCC), was grown to high titer (≥10^7 focus-forming units (FFU) per ml) in Hep-2 cells, snap frozen, and assayed for infectivity prior to use. All virus preparations were free of mycoplasma (Gen-Probe, San Diego, CA). Female C57BL/6J mice (MRC NIMR) were infected i.n. with 1 × 10^6 FFU of RSV diluted in 100 μl PBS. Control uninfected mice received PBS only. Whole blood and lung samples were collected from individual mice on day 2 post infection. For B. pseudomallei infection, B.
pseudomallei strain 576, originally isolated from a melioidosis patient, was provided by Dr. T. Pitt (Health Protection Agency, London, UK) and cryopreserved as described 7. All procedures using live bacteria were performed under Advisory Committee on Dangerous Pathogens containment level 3 conditions. Female C57BL/6 mice (Harlan Laboratories, UK) were infected intranasally with 50 μl containing 2500 colony forming units (CFU) (acute model) or 100 CFU (chronic model) 70 of B. pseudomallei derived from cryopreserved stocks diluted in pyrogen-free saline. Control uninfected mice received 50 μl pyrogen-free saline only. Whole blood and lung samples were collected from individual infected and control treated mice on day 3 post infection (acute model) and on days 27, 39, 49, 65 and 90 (chronic model). For C. albicans infection, C. albicans (clinical isolate SC5314, a kind gift from A. Zychlinsky, Max Planck Institute for Infection Biology, Berlin, Germany) was cultured in yeast extract peptone dextrose medium at 37 °C overnight, subcultured for a further 4 h and resuspended in PBS immediately prior to infection. Female C57BL/6J mice (MRC NIMR) were infected intratracheally (i.t.) with 1 × 10^5 C. albicans diluted in 50 μl PBS. Whole blood and lung samples were collected from individual mice on day 1 post infection, using uninfected C57BL/6J mice as controls. For HDM allergy induction, female C57BL/6J mice (MRC NIMR) were sensitized with 10 μg HDM (Greer) and 2 mg Imject Alum (Thermo Scientific) in 200 μl PBS, or Alum alone as control, by i.p. injections on days 0 and 14, followed by i.t. challenge with 10 μg HDM in 20 μl of PBS, or PBS alone, on days 21 and 24 as described 71. Whole blood and lung samples were collected from individual HDM and PBS control treated mice on day 25. For fluorescence-activated cell sorting (FACS) of T cells, pooled blood from four or five individual HDM or PBS-treated mice was collected into heparin sodium (Wockhardt) at 10–30 international units per ml of blood. PBMCs were isolated by density separation with Lympholyte®-Mammal (Cedarlane). Lung CD4+ T cells were enriched by positive selection (Miltenyi Biotech) from a corresponding pool of lungs. Blood and lung cells were stained with CD3 (145-2C11) APC, CD4 (RM4-5) eFluor450 and CD44 (IM7) PE (all from eBioscience), and CD3+CD4+, CD3+CD4+CD44low and CD44high cells were sorted on MoFlo XDP (Beckman Coulter) and BD FACSAria™ Fusion (Becton Dickinson) flow cytometers, with 15,000 cells per population collected into TRI-Reagent LS (Sigma-Aldrich). Alternatively, for HDM allergy induction via mucosal exposure (test HDM allergy dataset), female C57BL/6J mice (Charles River Laboratories) were administered 25 μg HDM (Greer) in 25 μl PBS, or 25 μl PBS as control, i.n. under isoflurane anesthesia 5 days per week for 3 weeks. Whole blood and BAL samples were collected from individual HDM and PBS control treated mice 24 h after the final allergen challenge. For BAL, airways were flushed 3 times with 1 ml of chilled 5 mM EDTA (Invitrogen, Thermo Fisher) in PBS via a tracheal cannula, before resting the mice with 1 ml EDTA/PBS in the lungs for 5 min and lavaging a further 3 times with 1 ml EDTA/PBS. For P. chabaudi AS infection, the cloned P. chabaudi AS line was originally obtained from David Walliker, University of Edinburgh, UK, and subsequently cryopreserved as described 72. Female C57BL/6J mice (MRC NIMR) housed under reverse light conditions (light 19.00–07.00, dark 07.00–19.00 GMT) were infected by i.p.
injection of 10^5 infected red blood cells derived from cryopreserved stocks. Whole blood samples were collected from individual mice on day 6 post infection, using uninfected C57BL/6J mice as controls. For mCMV infection, Smith strain mCMV, originally obtained from the ATCC, was prepared in salivary glands from BALB/c mice (Harlan, UK) and purified over a sorbitol gradient. Female C57BL/6 mice (Harlan, UK) were infected i.p. with 3 × 10^4 plaque forming units (PFU) of mCMV 73. Whole blood samples were collected from individual mice on day 2 post infection, using uninfected C57BL/6 mice as controls. For L. monocytogenes infection, L. monocytogenes was originally obtained from Drs. H. Rogers, K. Murphy and E. Unanue, DNAX Research Institute, USA. Bacteria were grown in BHI broth (BD BBL) to mid-log phase, as determined by OD560, and cryopreserved in 20% glycerol/PBS at −80 °C. Female C57BL/6 mice were infected by intravenous (i.v.) injection of 5 × 10^3 CFU of L. monocytogenes derived from cryopreserved stocks diluted in 200 μl of PBS 8. Control uninfected mice received PBS only. Whole blood samples were collected from individual mice on day 3 post infection, using uninfected C57BL/6J mice as controls. RNA isolation Blood was collected in Tempus reagent (Life Technologies) at a 1:2 ratio. Total RNA was extracted using the PerfectPure RNA Blood Kit (5 PRIME). Globin RNA was depleted from total RNA (1.5–2 µg) using the Mouse GLOBINclear kit (Thermo Fisher Scientific). Tissues were collected in TRI-Reagent (Sigma-Aldrich). Total RNA was extracted using the RiboPure™ Kit (Ambion). FACS-sorted blood/lung cells were collected into TRI-Reagent LS (Sigma-Aldrich), and total RNA was extracted using the PureLink RNA Micro Kit (Thermo Fisher). BAL cell pellets were obtained from pooled lavage fluid from each mouse, washed once in PBS and lysed in 350 μl RLT buffer. Lysates were passed through QIAshredder columns (QIAGEN). RNA was extracted using the RNeasy mini kit as per the manufacturer's instructions, including on-column DNase I digestion (both QIAGEN). All RNA was stored at −80 °C until use. Quantity and quality of RNA samples Quantity was verified using NanoDrop™ 1000/8000 spectrophotometers (Thermo Fisher Scientific). Quality and integrity of the total and globin-reduced RNA were assessed with the HT RNA Assay Reagent kit (Perkin Elmer) using a LabChip GX bioanalyser (Caliper Life Sciences/Perkin Elmer), assigning an RNA Quality Score (RQS), or with the RNA 6000 Pico kit (Agilent) using a BioAnalyzer 2100 (Agilent), assigning an RNA Integrity Number (RIN). RNA with an RQS/RIN >6 was used to prepare samples for microarray or RNA-seq. Supplementary Data 20 provides details of each sample, including QC data such as the RIN, RNA concentration, 260/280 ratio, number of reads sequenced, number of reads aligned to the genome, and number of reads aligned to genes by HTSeq. Microarray cRNA was prepared from 200 ng globin-reduced blood RNA or 200 ng tissue total RNA using the Illumina TotalPrep RNA Amplification Kit (Ambion). Quality was checked with an RNA 6000 Nano kit (Agilent) using a BioAnalyzer 2100 (Agilent). Biotinylated cRNA samples were randomized; 1.5 µg cRNA was then hybridized to Mouse WG-6 v2.0 bead chips (Illumina) according to the manufacturer's protocols. RNA-seq cDNA library preparation: for blood and tissues, total/globin-reduced RNA (200 ng) was used to prepare cDNA libraries using the TruSeq Stranded mRNA HT Library Preparation Kit (Illumina).
For cDNA library preparation of FACS-sorted cells, total RNA (30–500 pg) was used to prepare cDNA libraries using the NEBNext® Single Cell/Low Input RNA Library Prep Kit with NEBNext® Multiplex Oligos for Illumina® #E6609 (New England BioLabs). Quality and integrity of the tagged libraries were initially assessed with the HT DNA HiSens Reagent kit (Perkin Elmer) using a LabChip GX bioanalyser (Caliper Life Sciences/Perkin Elmer). Tagged libraries were then sized and quantitated in duplicate (Agilent TapeStation system) using D1000 ScreenTape and reagents (Agilent). Libraries were normalized, pooled and then clustered using the HiSeq® 3000/4000 PE Cluster Kit (Illumina). The libraries were imaged and sequenced on an Illumina HiSeq 4000 sequencer using the HiSeq® 3000/4000 SBS kit (Illumina) at a minimum of 25 million paired-end reads (75 bp/100 bp) per sample. Histology Lung, liver and spleen tissues from T. gondii infected C57BL/6J, Ifnar−/−, Ifngr−/− and Ifnar−/− × Ifngr−/− mice were fixed in 10% neutral-buffered formalin followed by 70% ethanol, processed and embedded in paraffin, sectioned (lung and liver, single lobe; spleen, longitudinal or, in fewer cases, transverse sections) at 4 µm, and stained with hematoxylin and eosin (H&E). A single section from each tissue was viewed and scored as a consensus by two board-certified veterinary pathologists (E.W.H. and S.L.P.) blinded to the groups. A semi-quantitative scoring method was devised to assess the following histological features: inflammation (granulocytes and mononuclear cells), necrosis, and presence of thrombosis with coagulative necrosis (for liver only); 0 = no lesion present, 1 = mild changes, and 2 = moderate or marked changes. Microscopy for neutrophil quantification Lung sections from T. gondii infected C57BL/6J, Ifnar−/−, Ifngr−/− and double Ifnar−/− × Ifngr−/− mice were de-waxed, re-hydrated and treated with a standard antigen retrieval protocol (Target Retrieval Solution pH 9.0, Agilent Technologies; 97 °C for 45 min) before immunofluorescence staining. For neutrophil staining, sections were incubated with the primary antibody goat anti-human/mouse myeloperoxidase (AF3667, R&D), followed by Alexa Fluor 488-conjugated donkey anti-goat (A11055, Life Technologies) and DAPI. Stained lung tissues were mounted with ProLong Gold Antifade Mountant (Life Technologies) and examined by confocal microscopy. Image analysis was performed using ImageJ. For neutrophil quantitation, 4–5 non-overlapping fields per section were photographed at 40× magnification on a Leica SP5 microscope, and neutrophil numbers per field were counted based on myeloperoxidase (MPO) staining and neutrophil morphology (lobulated nuclei). Power calculation for modular derivation Mead's resource equation 74 was used for the a priori estimate of the number of mouse samples required for derivation of modules. An a priori statistical power analysis was not possible without information on the variability of transcriptomic experiments for all of the datasets, or on what magnitude of effect would be sufficiently significant. In addition, modular derivation is an exploratory approach that does not test any hypotheses.
Mead's resource equation: E = N − B − T, where N is the total number of mice in the study minus 1; B is the blocking component, the number of environmental effects allowed for in the design minus 1; T is the treatment component, the number of groups being used minus 1; and E is the degrees of freedom of the error component, which should be between 10 and 20. For our study, we used two study groups per dataset (T = 1) and no differences in environment between groups (B = 0). Using these values and setting E to a value between 10 and 20, N is determined to be a value between 11 and 21; therefore, we could have used between 12 and 22 animals for each dataset. A rounded number was chosen at the high end of the range, taking into consideration the large number of variables being measured. The solved equation: (10–20) = (11–21) − 0 − 1. RNA-seq data analysis Raw paired-end RNA-seq data were subjected to quality control using FastQC (Babraham Bioinformatics) and MultiQC 75. Trimmomatic 76 v0.36 was used to remove the adapters, filter out raw reads below 36 bases long, and trim leading and trailing bases below quality 25. The filtered reads were aligned to the Mus musculus genome Ensembl GRCm38 (release 86) using HISAT2 77 v2.0.4 with default settings and RF RNA-strandedness, including the unpaired reads resulting from Trimmomatic using option -U. The mapped and aligned reads were quantified to obtain gene-level counts using HTSeq 78 v0.6.1 with default settings and reverse strandedness. Raw counts were processed using the bioconductor package DESeq2 79 v1.12.4 in R v3.3.1, and normalized using the DESeq method to remove library-specific artefacts. Variance stabilizing transformation was applied to obtain normalized log2 gene expression values. Further quality control was performed using principal component analysis, boxplots, histograms and density plots. Differentially expressed genes were calculated using the Wald test in DESeq2 79. Genes with log2 fold change >1 or <−1 and false discovery rate (FDR) p-value < 0.05, corrected for multiple testing using the Benjamini–Hochberg (BH) method 80, were considered significant. For module generation and modular fold enrichment, only protein-coding genes were considered (Ensembl gene biotypes: protein coding; immunoglobulin genes IG-C, -D, -J, -LV and -V; and T cell receptor genes TR-C, -D, -J and -V). Microarray data analysis Microarray data were processed in GeneSpring GX v14.8 (Agilent Technologies). Flags were used to filter out probe sets that did not result in a "present" call in at least 10% of the samples, with a "present" lower cut-off of 0.99. Signal values were then set to a threshold level of 10, log2 transformed, and per-chip normalized using the 75th percentile shift algorithm. Per-gene normalization was then applied by dividing each messenger RNA transcript by the median intensity of all the samples. Transcripts were then filtered to select the most variable probes: those with a minimum of 1.5-fold expression change compared with the median intensity across all samples, in greater than 10% of all samples. For modular fold enrichment analysis, Illumina IDs were converted to Ensembl IDs using the annotation file available from Illumina, retaining IDs with one-to-one mapping. Cellular deconvolution Deconvolution analysis for quantification of relative levels of distinct cell types on a per-sample basis was carried out on normalized counts using CIBERSORT 39.
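A minimal sketch of such a run is shown below. It assumes the CIBERSORT R source file distributed by the CIBERSORT authors, which defines a CIBERSORT(sig_matrix, mixture_file, perm, QN) entry point; the tab-delimited file names are hypothetical placeholders.

```r
# Sketch of a CIBERSORT deconvolution run; assumes the authors' CIBERSORT.R
# source and its CIBERSORT(sig_matrix, mixture_file, perm, QN) entry point.
# File names are hypothetical placeholders.
source("CIBERSORT.R")
fractions <- CIBERSORT(sig_matrix   = "ImmuCC_signature.txt",   # cell-type signature matrix
                       mixture_file = "normalized_counts.txt",  # per-sample expression
                       perm = 100,                              # permutations for p-values
                       QN   = FALSE)                            # skip quantile normalization for RNA-seq
head(fractions)  # per-sample cell-type fractions (plus p-value, correlation, RMSE)
```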
CIBERSORT estimates the relative subsets of RNA transcripts using linear support vector regression. Mouse cell signatures for 25 cell types were obtained using ImmuCC 40 and grouped into 9 representative cell types, based on the application of ImmuCC cellular deconvolution analysis to the sorted-cell RNA-seq samples from the ImmGen ULI RNA-seq dataset (Supplementary Fig. 1). Module generation Weighted gene co-expression network analysis was performed to identify lung and blood modules using the package WGCNA 42 in R. Modules were constructed independently in lung and blood samples, across all control and disease samples from the 7 mouse models of infectious and inflammatory diseases, using log2 RNA-seq expression values. The lung modules were constructed using the 10,000 genes with the highest covariance across all lung samples, and the blood modules were constructed using the 10,000 genes with the highest covariance across all blood samples. The same parameters were used to construct the lung and blood modules in independent analyses. A signed weighted correlation matrix containing pairwise Pearson correlations between all the genes across all the samples was computed using a soft threshold of β = 22 to reach a scale-free topology. Using this adjacency matrix, the topological overlap measure (TOM) was calculated, which measures the network interconnectedness 81 and is used as input to group highly correlated genes together using average linkage hierarchical clustering. The WGCNA dynamic hybrid tree-cut algorithm 82 was used to detect the network modules of co-expressed genes, with a minimum module size of 20 and deep split = 2. Lung modules were numbered L1–L38 and blood modules were numbered B1–B41; an additional "grey" module was identified among both the lung modules (Supplementary Data 2, module titled NA) and the blood modules (Supplementary Data 3, module titled NA), consisting of genes that were not co-expressed with any other genes. These grey modules were not considered in any further analysis. To create gene interaction networks, hub genes with high intramodular connectivity and a minimum correlation of 0.75 were calculated, with a cut-off of 50 hub genes, and exported into Cytoscape v3.4.0 for visualization. Modular annotation Lung and blood modules were enriched for biological pathways and processes using IPA (QIAGEN Bioinformatics), Metacore (Thomson Reuters) and the GO database. Significantly enriched canonical pathways and upstream regulators were obtained from IPA (top 5). GO analysis was performed for the biological processes ontology domain, using the bioconductor package clusterProfiler 83 v3.0.5 in R. Over-representation analysis was performed using the BH method, with pvalueCutoff = 0.01 and qvalueCutoff = 0.05. Redundant GO terms were removed using the simplify function in the clusterProfiler package, using the Wang similarity measure and a similarity cut-off of 0.7, and the top 10 terms were considered. Modules were assigned names based on representative biological processes from the pathways and processes reported by all three tools (Supplementary Data 4 and 5).
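Returning to the module-generation step above, a minimal sketch of the signed WGCNA construction with the stated parameters is shown below; `datExpr` (a samples × genes matrix of log2 expression values for the 10,000 selected genes) is a hypothetical placeholder.

```r
# Minimal sketch of signed WGCNA module construction with the stated parameters;
# `datExpr` (samples x genes, log2 expression) is a hypothetical placeholder.
library(WGCNA)

adj      <- adjacency(datExpr, power = 22, type = "signed")   # soft threshold beta = 22
dissTOM  <- 1 - TOMsimilarity(adj)                            # topological overlap dissimilarity
geneTree <- hclust(as.dist(dissTOM), method = "average")      # average linkage clustering
mods     <- cutreeDynamic(dendro = geneTree, distM = dissTOM, # dynamic hybrid tree cut
                          deepSplit = 2, minClusterSize = 20)
table(mods)  # module sizes; label 0 corresponds to the unassigned "grey" module
```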
Module preservation analysis Modular preservation of the lung modules in blood, and of the blood modules in lung, was assessed using the modulePreservation function in the WGCNA package in R, to test whether the density (how tight the interconnections among genes in a module are), overlap in module membership, and connectivity patterns of individual modules defined in a reference dataset are preserved in a test dataset 84. The modulePreservation function performs a permutation test (n = 30 permutations) to generate a composite Zsummary preservation statistic, which summarizes the evidence that the network connections of the module are more significantly preserved than those of a random set of genes of equal size. Modules with a Zsummary score >10 are considered strongly preserved, a Zsummary score between 2 and 10 indicates weak to moderate preservation, and modules with Zsummary scores <2 are considered not preserved 84. Chord diagrams to visualize module membership of the genes between the lung and the blood modules were constructed using the package circlize 85 v0.4.3 in R, for the 6,999 genes in common between the 10,000 genes used to construct the lung modules and the 10,000 genes used to construct the blood modules. Module enrichment analysis Fold enrichment for the WGCNA modules was calculated using quantitative set analysis for gene expression (QuSAGE) 43 with the bioconductor package qusage v2.4.0 in R, to identify modules of genes over- or under-abundant in a dataset compared to the respective control group, using log2 expression values. The qusage function was used with the default n.points parameter (2^12), except when the analysis was performed in groups with smaller sample sizes (n ≤ 5) (test HDM allergy dataset; Toxoplasma WT and IFN KO dataset across all tissues; Supplementary Data 14), where the n.points parameter was set to 2^16. Only modules with enrichment scores with FDR p-value < 0.05 were considered significant and were plotted using the ggcorrplot function in R. Single sample enrichment analysis Enrichment of modules, and of subsets of genes within modules, on a single-sample basis was carried out using gene set variation analysis (GSVA) with the bioconductor package gsva in R 86. The enrichment scores obtained are similar to those from Gene Set Enrichment Analysis (GSEA), but are based on absolute expression, quantifying the degree to which a gene set is over-represented in a particular sample, rather than on differential expression between two groups. Cell-type-specific signatures and enrichment Raw RNA-seq counts for separated cells, representing 10 distinct cell-type populations, were downloaded from the ImmGen ULI RNA-seq dataset in the Gene Expression Omnibus (GEO) database (GEO accession: GSE109125). Raw counts were processed, as described above, using the bioconductor package DESeq2 79 v1.12.4 in R, and normalized using the DESeq method to remove library-specific artefacts. Variance stabilizing transformation was applied to obtain normalized log2 gene expression values. Differentially expressed genes were obtained from the 5,000 genes with the highest variance across all samples by comparing each cell type against all other cell types, using the bioconductor package limma 87 v3.28.21 in R. Only upregulated genes with log2 fold change >1 and FDR p-value < 0.05 were considered cell-type specific. Cell-type enrichment analysis to identify over-represented cell types in the lung and blood modules was performed using a hypergeometric test, via the phyper function in R.
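A minimal sketch of this hypergeometric test, together with the multiple-testing correction described next, is shown below; the vectors `module`, `signature` and `universe` (and the collected `p_values`) are hypothetical placeholders for a module's genes, a cell type's signature genes, all genes considered, and the p-values across all module-cell-type pairs.

```r
# Sketch of the hypergeometric cell-type enrichment test via phyper;
# all vector names are hypothetical placeholders.
overlap <- length(intersect(module, signature))
p <- phyper(overlap - 1,                            # upper tail: P(X >= overlap)
            m = length(signature),                  # signature genes in the universe
            n = length(universe) - length(signature),
            k = length(module),                     # genes drawn (module size)
            lower.tail = FALSE)
fdr <- p.adjust(p_values, method = "BH")            # across all module-cell-type tests
```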
In vitro-derived T helper cell signature generation Raw single-end fastq files were downloaded from the GEO database (GEO accession: GSE106464) for TH1 + IL-27, TH2 and TH17 cells at 6 h. Fastq files were processed as described above, but using unstranded mapping options, to obtain raw counts. Raw counts were processed, as described above, using the bioconductor package DESeq2 79 v1.12.4 in R, and normalized using the DESeq method to remove library-specific artefacts. Variance stabilizing transformation was applied to obtain normalized log2 gene expression values. Differentially expressed genes were obtained by comparing each cell type against all other cell types, using the bioconductor package limma 87 v3.28.21 in R. Only upregulated genes with log2 fold change > 1 and FDR p-value < 0.05 were considered T helper cell-type specific. Mapping mouse samples to the Toxoplasma gondii genome The filtered RNA-seq fastq files from the mouse samples of the Toxoplasma WT and IFN receptor KO dataset across all tissues (obtained as described above) were aligned to the T. gondii genome ToxoDB 7.1 Ensembl GRCm38 (release 35) using HISAT2 77 v2.0.4 with default settings and RF rna-strandedness, including the unpaired reads resulting from Trimmomatic using option -U. The mapped and aligned reads were quantified to obtain gene-level counts using HtSeq 78 v0.6.1 with default settings and reverse strandedness. Raw library sizes were calculated for each sample as the sum of read counts for all genes in that sample. Normalization factors, calculated from the original normalization analysis of the Toxoplasma WT and IFN receptor KO dataset across all tissues (using the M. musculus genome) with the estimateSizeFactors function from the DESeq2 package in R, were multiplied by the raw library sizes to obtain normalized library sizes, used to quantify the amount of T. gondii present in the mouse lung, blood, liver and spleen samples of wildtype and IFN receptor KO mice. Interferome database analysis IFN response genes (type I, type II, and type I and II) listed in the Interferome database 44 (release v2.01; accessed December 2019) were cross-referenced with the blood and lung modules from Fig. 2 (Supplementary Data 2 and 3).
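The cell-type-specific and T helper signatures above follow the same one-versus-rest limma design; a minimal sketch is given below, assuming expr (a genes × samples matrix of log2 expression values) and cell_type (a factor of sample labels), with the cell-type names in the contrast used only as placeholders.

  library(limma)
  design <- model.matrix(~ 0 + cell_type)              # one column per cell type
  colnames(design) <- levels(cell_type)
  fit <- lmFit(expr, design)
  # e.g. B cells versus the average of all other populations (placeholder names)
  cont <- makeContrasts(Bcell - (Tcell + NK + Mono)/3, levels = design)
  fit2 <- eBayes(contrasts.fit(fit, cont))
  tab  <- topTable(fit2, number = Inf)
  signature <- rownames(tab)[tab$logFC > 1 & tab$adj.P.Val < 0.05]  # upregulated genes only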
Method for use of online WebApp An online webapp accompanies the manuscript to visualize the findings of the study. The app is subdivided into 5 distinct pages that can be accessed through the tabs displayed at the top of the page, with a customized sidebar for user input on each page. Tab 1: “Gene expression” allows the user to input a gene of interest (either a gene symbol or a mouse Ensembl ID) to visualize its expression (either as counts or log2 expression values) across 5 different datasets consisting of mouse models of infection and inflammation, as described in the manuscript. Each dot represents an individual sample, grouped together as controls and disease samples across the different mouse models. Tab 2: “Gene lookup in modules” allows the user to input a gene of interest (either a gene symbol or a mouse Ensembl ID) and find out which lung and blood module it belongs to. Thirty-eight lung and forty-one blood modules were derived as part of the study from lung and blood samples obtained from six mouse models of infection and inflammation. Tab 3: “Lung modules” allows the user to visualize the expression of each lung module (L1–L38) across lung samples obtained from six mouse models of infection and inflammation. The enrichment score between −1 and 1 represents the combined overall expression of all genes within the module for each sample. A table below the plot displays all genes present within that module. Tab 4: “Blood modules” allows the user to visualize the expression of each blood module (B1–B41) across blood samples obtained from either the six mouse models used for module derivation, or the four distinct mouse models used for validation. The enrichment score between −1 and 1 represents the combined overall expression of all genes within the module for each sample. A table below the plot displays all genes present within that module. Tab 5: “Download data” allows the user to download the genes present in all lung and blood modules, and the biological annotation of these modules. For all plots in the app, the user can manually set the width and height of the plot and download the plots as PNG files. Additionally, the user can interact with the plots by hovering over the data points to obtain detailed information for each sample point, as well as summary statistics for each group. Data availability The materials, data, code, and any associated protocols that support the findings of this study are available from the corresponding author upon request. The microarray and RNA-seq datasets have been deposited in the NCBI Gene Expression Omnibus (GEO) database with the primary accession number GSE119856. Publicly available datasets used in this study include GSE109125 (sorted cells from the Immunological Genome Project), GSE106464 (in vitro differentiated T helper cells), and GSE61106 ( Burkholderia pseudomallei (acute) microarray). | A comprehensive database of gene activity in mice across ten disease models, which could significantly reduce animal use worldwide, has been developed by scientists at the Francis Crick Institute. It gives a full picture of the immune response to different pathogens. The data, published in Nature Communications and available through an online app, shows the activity of every mouse gene (more than 45,000 genes) in the blood of mice with ten different diseases. For the six diseases that involve the lung, samples from the lung were also examined. Previously, researchers would have to generate, infect and cull mice, obtain samples, and extract and sequence the RNA to study the genes they were interested in. Using the new app, which the lab created for this study, researchers will be able to check the activity of any gene across a range of diseases without needing their own mice. This could prevent thousands of mice from being used in individual experiments. The research team, led by Crick group leader Anne O'Garra and co-ordinated by Christine Graham, worked with many collaborators from the Crick, the UK and the USA. They used next-generation sequencing technology, 'RNA-seq', to measure gene activity across the different diseases. As genes need to transcribe their DNA into RNA in order to function, analysing the RNA reveals how active each gene is, in this case after infection or allergen challenge. "Gene activity can show us how the body responds to infections and allergens," explains Anne. "There are thousands of genes involved in any immune response, so Akul Singhania, a bioinformatics postdoc in our lab, used advanced bioinformatics approaches to cluster the genes into modules.
These modules represent clusters of genes that are co-regulated and can often be annotated to determine their function and known physiological roles. For example, of the 38 lung modules, one module of over 100 genes is associated with allergy and is seen only in the allergy model, and another module of over 200 genes is associated with T cells." "By sequencing both lung tissue and blood, we can also see how the immune response in the blood reflects the local response in the lung, and vice versa. This will help us to understand what we can learn from genetic signatures in the blood, since for most diseases doctors can't realistically get lung samples from patients." A panoply of pathogens Using the new app, researchers anywhere in the world can look up gene activity in the lungs and blood of mice infected with a range of pathogens: the parasite Toxoplasma gondii, influenza virus and Respiratory Syncytial Virus (RSV), the bacterium Burkholderia pseudomallei, the fungus Candida albicans, or the allergen house dust mite. They can also see gene activity in the blood of mice with listeria, murine cytomegalovirus, the malaria parasite Plasmodium chabaudi chabaudi, or a chronic Burkholderia pseudomallei infection. In the study, the research team analysed the genetic signatures associated with these diseases to help understand the immune response. They discovered a broad range of immune responses in the lung, where discrete modules were dominated by genes associated with Type I or Type II interferons, IL-17 or allergy-type responses. Type I interferons are known to be released in response to viruses, while Type II interferon (IFN-γ) activates phagocytes to kill intracellular pathogens, and IL-17 attracts neutrophils, causing early inflammatory immune responses. Interestingly, interferon gene signatures were present in blood modules as in the lung, but IL-17 and allergy responses were not. Surprisingly, genes associated with type I interferon were highly active in both the lungs and blood of mice infected with the Toxoplasma gondii parasite, and were also seen in response to the Burkholderia pseudomallei bacterium, albeit to a lesser extent. This challenges the view that type I interferon-associated genes are necessarily indicative of viral infections, as the lab had previously shown in tuberculosis. "We found that mice without functioning interferon pathways were less able to fight off Toxoplasma infection. This was true for both Type I and Type II interferons, which have a complex relationship with each other. We found that both play a key role in protection against the parasite, in part by controlling the neutrophils in the blood, which in high numbers can cause damage to the host." From obsolescence to opportunity The research project began in 2009, using a technique known as microarray to detect gene activity in lung and blood samples, and was almost complete and ready to be analysed by 2015. Microarray was then a well-established technique, but the necessary reagents were suddenly discontinued by the manufacturer before the final samples had been processed. Without the equipment to finish the sequencing, the project was in trouble. With the microarray technology no longer available, the team needed a different approach. At this time, a technique called RNA-seq had come onto the market, offering a better way to quantify gene activity.
Following negotiations between Anne and the manufacturer, her team was offered cutting-edge RNA-seq reagents free of charge to reprocess the samples, starting in late 2016. They were also provided with storage space for the huge amounts of data generated. As the tissue and blood samples from the microarray experiments were all frozen in storage, Christine Graham in Anne's lab was able to go back to the same samples and heroically process them again, this time for RNA sequencing. Thanks to the excellent storage of the samples, this was possible without the use of additional animals. Although it was time-consuming and a huge task for Christine, by 2018 the team had all the sequencing data they needed. With a huge amount of data to process, Akul Singhania set about making sense of it all. Using advanced bioinformatics techniques, he clustered the thousands of genes and millions of data points into a meaningful and visual form, the modules referred to above, and created the app to make the data accessible to anyone. "Ten years since the project began, we now have an open access resource of gene expression that anyone in the world can use to look up their favourite genes, and also see if they are regulated by type I or type II interferon signalling," says Anne. "Nobody said science was easy, but it's certainly worthwhile." | 10.1038/s41467-019-10601-6
Medicine | Cancer cells resist chemotherapy by helping their neighbors cheat death | Florian J. Bock et al, Apoptotic stress-induced FGF signalling promotes non-cell autonomous resistance to cell death, Nature Communications (2021). DOI: 10.1038/s41467-021-26613-0 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-26613-0 | https://medicalxpress.com/news/2021-11-cancer-cells-resist-chemotherapy-neighbors.html | Abstract Damaged or superfluous cells are typically eliminated by apoptosis. Although apoptosis is a cell-autonomous process, apoptotic cells communicate with their environment in different ways. Here we describe a mechanism whereby cells under apoptotic stress can promote survival of neighbouring cells. We find that upon apoptotic stress, cells release the growth factor FGF2, leading to MEK-ERK-dependent transcriptional upregulation of pro-survival BCL-2 proteins in a non-cell autonomous manner. This transient upregulation of pro-survival BCL-2 proteins protects neighbouring cells from apoptosis. Accordingly, we find in certain cancer types a correlation between FGF-signalling, BCL-2 expression and worse prognosis. In vivo, upregulation of MCL-1 occurs in an FGF-dependent manner during skin repair, which regulates healing dynamics. Importantly, either co-treatment with FGF-receptor inhibitors or removal of apoptotic stress restores apoptotic sensitivity to cytotoxic therapy and delays wound healing. These data reveal a pathway by which cells under apoptotic stress can increase resistance to cell death in surrounding cells. Beyond mediating cytotoxic drug resistance, this process also provides a potential link between tissue damage and repair. Introduction The cellular decision to live or die is fundamentally important in biology. Inappropriate cell survival has been causally linked to various diseases including cancer and autoimmunity 1 . In cancer, many therapies act by engaging apoptosis, and the degree of apoptotic sensitivity or apoptotic priming often correlates with therapeutic efficacy 1,2 . Therefore, understanding how cancer cells survive therapy should provide new ways to circumvent this and improve tumour cell elimination. Mitochondrial apoptosis represents a major form of regulated cell death 3 . During apoptosis, mitochondria are permeabilised through a process called mitochondrial outer membrane permeabilisation, or MOMP. Widespread MOMP effectively acts as a cellular death sentence by releasing mitochondrial proteins, such as cytochrome c, which activate caspase proteases, leading to rapid apoptosis 3 . Even in the absence of caspases, MOMP typically commits a cell to death, and is thus considered a point-of-no-return. Consequently, mitochondrial outer membrane integrity is tightly regulated by pro- and anti-apoptotic BCL-2 family proteins. Anti-apoptotic BCL-2 proteins prevent apoptosis by binding pro-apoptotic BAX, BAK and BH3-only proteins. During apoptosis, BH3-only proteins activate BAX and BAK, which subsequently promote MOMP. This process is exploited by BH3-mimetics, a new class of anti-cancer drugs 1 . By binding anti-apoptotic BCL-2 proteins, BH3-mimetics antagonise BCL-2 pro-survival function, sensitising cells to apoptosis 4 . Various BH3-mimetics have been developed that target select or multiple anti-apoptotic BCL-2 family members.
Amongst them, the BCL-2-specific BH3-mimetic venetoclax 5 shows considerable clinical promise and is approved for the treatment of chronic lymphocytic leukaemia (CLL) 6 and, in combination therapy, for acute myeloid leukaemia (AML) 7,8 . However, in solid tumours, BH3-mimetics are typically less effective, implying that additional survival mechanisms must be targeted in order to maximise their potential. We set out to identify mechanisms of apoptotic resistance using BH3-mimetics as tool compounds. Selecting for cells surviving venetoclax treatment, we found that resistance was associated with increased anti-apoptotic BCL-2 and MCL-1 expression. Surprisingly, resistance occurred in a non-cell autonomous manner. We find that under apoptotic stress, cells can release FGF2. In turn, FGF2 triggers MEK-ERK signalling, resulting in increased anti-apoptotic BCL-2 and MCL-1 protein expression and apoptotic resistance. In certain cancer types, we found a correlation between FGF-signalling, BCL-2 and MCL-1 expression and poorer patient prognosis. Furthermore, we find that FGF-dependent signalling results in upregulation of MCL-1 during wound healing and promotes tissue repair. Together, these findings unveil a non-cell autonomous mechanism of apoptotic resistance, where apoptotic stress, via FGF signalling, promotes cell survival. As we discuss, this process may have wide-ranging roles in health and disease. Results BH3-mimetics and BH3-only proteins upregulate BCL-2 and MCL-1 causing apoptotic resistance We initially sought to define mechanisms of cell death resistance using BCL-2-targeting BH3-mimetics. For this purpose, we used our recently developed method called mito-priming 9 . In this system, cells co-express a pro-apoptotic BH3-only protein and an anti-apoptotic BCL-2 family member at equimolar levels and are therefore highly sensitive to BCL-2-targeting BH3-mimetic drugs (Fig. 1a). HeLa cells were used that stably express tBID together with BCL-2 (HeLa tBID-2A-BCL-2, hereafter called HeLa tBID2A). Cell viability was determined by live-cell imaging following venetoclax treatment, using Syto21 to label all cells and propidium iodide to label dead cells. As expected, the majority of cells died rapidly following venetoclax treatment; nevertheless, some cells failed to die (Fig. 1b). To investigate the mechanisms of venetoclax resistance, HeLa tBID2A cells were cultured continuously in venetoclax to select for resistant cells. Increased expression of pro-survival BCL-2 family proteins is a common means of apoptotic resistance 10 . Indeed, cells that were continuously or intermittently cultured in venetoclax displayed higher expression of anti-apoptotic BCL-2 and MCL-1 (Supplementary Fig. 1a). Surprisingly, following culture in regular medium post-venetoclax treatment, the resistant cells became sensitive to venetoclax again over time (Supplementary Fig. 1b). This loss of resistance was accompanied by a decrease in BCL-2 and MCL-1 expression back to basal levels (Supplementary Fig. 1c). We next investigated whether short-term treatment with venetoclax was sufficient to promote BCL-2 and MCL-1 upregulation. Indeed, 3 h of venetoclax treatment led to increased levels of BCL-2 and MCL-1 (Fig. 1c). As before, BCL-2 and MCL-1 levels decreased following removal of venetoclax, demonstrating a reversible upregulation (Fig. 1d).
This effect was not restricted to venetoclax, because an increase in BCL-2 and MCL-1 expression was also observed following treatment with other BH3-mimetics (navitoclax and ABT-737) (Supplementary Fig. 1d). Given that venetoclax-induced upregulation of BCL-2 and MCL-1 is reversible, it is unlikely to be genetically based. We noted an initial resistance of HeLa tBID2A cells cultured in venetoclax to re-treatment with venetoclax (Supplementary Fig. 1b), presumably due to increased levels of BCL-2 and MCL-1. Since increased levels of BCL-2 and MCL-1 were also observed in response to acute treatment, we investigated whether this was also sufficient to protect from apoptosis. Indeed, treatment with venetoclax for 48 h could protect from re-treatment with venetoclax and S63845, a specific inhibitor of MCL-1 11 (Fig. 1e). This protection was dependent on the increased levels of BCL-2 and MCL-1, because increasing the dose of venetoclax and S63845 could overcome the resistance (Fig. 1e). The pro-apoptotic proteins BAX and BAK are essential for mitochondrial outer membrane permeabilization (MOMP) during apoptosis 12 . To determine the role of apoptosis in the upregulation of BCL-2 and MCL-1, we generated HeLa tBID2A cells deficient in BAX and BAK using CRISPR-Cas9 genome editing. As expected, BAX BAK-deleted HeLa tBID2A cells were completely protected from mitochondrial apoptosis and caspase activation in response to venetoclax treatment (Supplementary Fig. 1e, f). Nevertheless, despite an inability to undergo apoptosis, BAX BAK-deleted HeLa tBID2A cells still upregulated BCL-2 and MCL-1 following venetoclax treatment (Fig. 1f, Supplementary Fig. 1g). Similarly, upregulation of BCL-2 and MCL-1 was also observed when caspase activity was blocked using the caspase inhibitor qVD-OPh (Fig. 1f). These data demonstrate that while the upregulation of BCL-2 and MCL-1 requires BH3-mimetic treatment, it occurs irrespective of apoptosis. Finally, to determine whether upregulation of BCL-2 and MCL-1 was specific to BH3-mimetics, we examined if a comparable effect could be observed by expressing BH3-only proteins. Control or BAX and BAK-deleted HeLa cells were transfected with BH3-only proteins (tBID, PUMA, tBID (BIM BH3, with the BID BH3 domain replaced with the BIM BH3 domain)) and analysed for MCL-1 expression by western blot (Fig. 1g, Supplementary Fig. 1h). In all cases, MCL-1 expression was upregulated, indicating that BH3-only proteins can have similar effects to BH3-mimetics. Collectively, these data demonstrate that BH3-mimetics and BH3-only proteins can promote apoptotic resistance by increasing pro-survival BCL-2 protein expression. Fig. 1: BH3-mimetics and BH3-only proteins upregulate BCL-2 and MCL-1 causing apoptotic resistance. a Schematic model of the mito-primed system. b HeLa tBID2A cells were treated with venetoclax (ven) and imaged over time. Percentage of dead cells was determined by staining all cells with Syto 21 and dead cells with propidium iodide. n = 3 independent experiments; mean values ± s.e.m.; unpaired, two-sided t-test at 24 h. c HeLa tBID2A cells were treated with 500 nM venetoclax, harvested at the indicated time points and protein expression was analysed by western blot (representative blot of three independent repeats). d HeLa tBID2A cells were treated for 48 h with 500 nM venetoclax followed by replacement with regular growth medium (washout).
At the indicated times post medium change, cells were harvested and protein expression was analysed by western blot (representative blot of three independent repeats). e HeLa tBID2A cells were treated with or without venetoclax as indicated for 48 h, followed by treatment with venetoclax and the MCL-1 inhibitor S63845. Cell death was then monitored by Sytox Green staining and Incucyte imaging. n = 3 independent experiments; mean values ± s.e.m. f Control or BAX BAK CRISPR HeLa tBID2A cells were treated with 500 nM venetoclax in combination with 10 μM qVD-OPh as indicated for 48 h, harvested and protein expression was analysed by western blot (representative blot of three independent repeats). g Control or BAX BAK CRISPR HeLa cells were transfected with tBID-GFP plasmid, harvested after the indicated times and protein expression was analysed by western blot. Fold change normalised to loading control is stated below (representative blot of three independent repeats). BH3-mimetics and BH3-only proteins can upregulate anti-apoptotic BCL-2 proteins in a non-cell autonomous manner To understand how venetoclax treatment causes upregulation of BCL-2 and MCL-1, we determined whether increases in protein or mRNA stability might contribute. HeLa tBID2A cells were treated with venetoclax for 24 h, after which inhibitors of protein synthesis (cycloheximide) or transcription (actinomycin D) were added for varying times. Neither MCL-1 nor BCL-2 protein or mRNA stability was increased after venetoclax treatment (Supplementary Fig. 2a, b), indicating that mechanisms besides protein or mRNA stability are likely responsible. Given these results, we investigated if venetoclax might upregulate BCL-2 and MCL-1 in a non-cell autonomous manner. Control or BAX BAK-deleted HeLa tBID2A cells were treated for 3 h with 500 nM venetoclax, followed by exchange to regular medium for 45 h. Media from treated cells were then transferred to recipient cells, which were examined for BCL-2 and MCL-1 expression after 48 h (Fig. 2a). Importantly, media from venetoclax-treated HeLa tBID2A cells promoted upregulation of BCL-2 and MCL-1 in recipient cells (Fig. 2b). Similarly, supernatant from BAX and BAK-deficient cells also promoted MCL-1 and BCL-2 upregulation, supporting earlier findings that cell death is not required for this effect (Fig. 2b). Supernatant from venetoclax-treated cells failed to induce apoptosis in recipient cells, demonstrating the absence of any residual venetoclax (Supplementary Fig. 2c). Additionally, media from HeLa tBID2A cells treated with a different BCL-2 inhibitor, S55746 13 , also upregulated MCL-1 and BCL-2 in recipient cells (Supplementary Fig. 2d). To investigate the mechanism of this non-cell autonomous effect, we first characterised the signal causing upregulation of anti-apoptotic BCL-2 proteins. Supernatant from venetoclax-treated HeLa tBID2A cells was subjected to centrifugal filtration using a filter with a 3 kDa cut-off. Flow-through and concentrate were added to recipient cells for 48 h, and MCL-1 and BCL-2 expression was determined by western blot (Supplementary Fig. 2e). Only the concentrate (containing molecules above 3 kDa) was capable of increasing BCL-2 and MCL-1 expression, suggesting that small molecules such as metabolites and lipids are not responsible. Importantly, Proteinase K treatment of supernatant from BH3-mimetic-treated cells abolished the ability to upregulate MCL-1 and BCL-2, consistent with the factor(s) being proteinaceous (Fig. 2c).
Finally, we investigated whether BH3-only proteins can also have a similar non-cell autonomous effect. HeLa or 293T cells were transfected with tBID and the supernatant was transferred onto recipient cells. Consistent with earlier results, supernatant transferred from tBID-transfected cells also caused an upregulation of BCL-2 and MCL-1 expression (Fig. 2d, Supplementary Fig. 2f). Together, these data demonstrate that BH3-mimetics and BH3-only proteins can upregulate BCL-2 and MCL-1 expression in a non-cell autonomous manner. Fig. 2: BH3-mimetics and BH3-only proteins can upregulate anti-apoptotic BCL-2 proteins in a non-cell autonomous manner. a Schematic of supernatant transfer experiments: HeLa tBID2A cells were treated with 500 nM venetoclax for 3 h, followed by exchange to regular growth medium for 45 h. Supernatant was harvested, filtered and added onto recipient cells before analysis. b HeLa tBID2A cells were treated directly with venetoclax or with supernatant from untreated or venetoclax-treated control or BAX BAK CRISPR cells as described in a. After 48 h, cells were harvested and protein expression was analysed by western blot (representative blot of three independent repeats). c Supernatant from control or venetoclax-treated cells was digested with Proteinase K before addition onto recipient cells, and protein expression was analysed by western blot after 48 h (representative blot of three independent repeats). d Supernatant from HeLa cells transfected with tBID-GFP was collected after 48 h and transferred onto recipient HeLa cells. After 48 h, recipient cells were harvested and protein expression analysed by western blot. Fold change normalised to loading control is stated below (representative blot of two independent repeats). Non-cell autonomous upregulation of anti-apoptotic BCL-2 proteins requires MEK-ERK signalling We sought to define the non-cell autonomous mechanism causing anti-apoptotic BCL-2 protein upregulation. For this purpose, HeLa tBID2A cells were treated with venetoclax together with inhibitors targeting pathways previously implicated in anti-apoptotic BCL-2 regulation 14,15,16,17 . After co-treatment for 48 h, cell lysates were probed for BCL-2 and MCL-1 expression by western blot. Of all the tested inhibitors, only trametinib (a MEK kinase inhibitor 18 ) potently blocked venetoclax-induced BCL-2 and MCL-1 expression (Fig. 3a, b). The decrease in phosphorylation of ERK1/2, a direct target of MEK 19 , validated trametinib activity (Fig. 3b). Upregulation of BCL-2 and MCL-1 was transcriptional, because venetoclax treatment increased RNA levels, which could be inhibited by trametinib co-treatment (Fig. 3c). The upregulation of MCL-1 by BH3-mimetics was not limited to HeLa cells, since it was also observed in IMR90 lung fibroblasts and CWR-R1 prostate cancer cells (Supplementary Fig. 3a, b). We next investigated whether non-cell autonomous upregulation of BCL-2 and MCL-1 required MEK-ERK signalling. Trametinib was added to supernatant from venetoclax-treated HeLa tBID2A or CWR-R1 cells before the supernatant was added to recipient cells. BCL-2 and MCL-1 upregulation was effectively blocked by trametinib (Fig. 3d, Supplementary Fig. 3c). Again, this effect was transcriptional, because upregulation of MCL-1 RNA in recipient cells was inhibited by trametinib addition to the supernatant (Supplementary Fig. 3d).
Furthermore, supernatant of venetoclax-treated cells could directly stimulate MEK activity in recipient cells, as determined by increased pERK1/2 levels (Fig. 3e). To further investigate these findings, we generated ERK1/2-deficient HeLa tBID2A cells by CRISPR-Cas9 genome editing, hereafter called ERK1/2 CRISPR cells (Fig. 3f). Media from control or ERK1/2 CRISPR cells following venetoclax treatment were transferred onto control or ERK1/2 CRISPR cells, and after 48 h BCL-2 and MCL-1 expression was determined by western blot. BCL-2 and MCL-1 expression increased following incubation with media from venetoclax-treated cells, or after direct treatment with venetoclax, in control cells, but was severely attenuated in ERK1/2 CRISPR cells (Fig. 3f). Finally, we investigated whether MEK signalling, by enabling BCL-2 and MCL-1 upregulation, contributed to venetoclax resistance. HeLa tBID2A cells were incubated with venetoclax ± trametinib for 48 h, after which they were treated with venetoclax and S63845 and cell viability was determined by Sytox Green exclusion and Incucyte live-cell imaging (Supplementary Fig. 3e). Whereas venetoclax pre-treated cells were resistant, trametinib co-treatment completely abolished this resistance, supporting a functional role for MEK signalling in mediating apoptotic resistance via BCL-2 and MCL-1 upregulation. Similarly, transfer of venetoclax-treated supernatant conferred resistance to the recipient cells. The resistance was dependent on BCL-2 and MCL-1 upregulation, because supplementing the venetoclax-treated supernatant with trametinib before addition to recipient cells re-sensitised those cells to the cytotoxic treatment (Fig. 3g). We next investigated if non-cell autonomous upregulation of BCL-2 and MCL-1 was specific to BH3-mimetics or could also be triggered by conventional chemotherapies. Similar to previous experiments, we treated HeLa cells with different chemotherapeutic drugs (etoposide, doxorubicin and paclitaxel) before transferring the supernatant to recipient cells. All three drugs promoted BCL-2 and MCL-1 upregulation, suggesting that apoptosis-inducing stresses can generally induce this effect (Fig. 3h). Supplementing the supernatant with trametinib before addition to recipient cells prevented upregulation, demonstrating that MEK-ERK signalling is essential, consistent with our earlier data. Collectively, these data show that apoptosis-inducing stresses activate MEK-ERK signalling, causing upregulation of BCL-2 and MCL-1 and apoptotic resistance in a non-cell autonomous manner. Fig. 3: Non-cell autonomous upregulation of anti-apoptotic BCL-2 proteins requires MEK-ERK signalling. a HeLa tBID2A cells were untreated or treated with 500 nM venetoclax in combination with the indicated inhibitors for 48 h, harvested and protein expression was analysed by western blot (representative blot of two independent repeats). b HeLa tBID2A cells were untreated or treated with 500 nM venetoclax in combination with 500 nM trametinib for 48 h, harvested and protein expression was analysed by western blot (representative blot of three independent repeats). c HeLa tBID2A cells were untreated or treated with 500 nM venetoclax in combination with 500 nM trametinib, harvested and RNA expression was analysed by RT-qPCR. n = 3 independent experiments; mean values ± s.e.m.; Tukey-corrected one-way ANOVA.
d Supernatant from untreated or venetoclax-treated HeLa tBID2A cells was supplemented with 500 nM trametinib before addition onto recipient cells. After 48 h of incubation, recipient cells were harvested and protein expression analysed by western blot (representative blot of three independent repeats). e Supernatant from untreated or venetoclax-treated HeLa tBID2A cells was added onto recipient cells. After the indicated times, recipient cells were harvested and protein expression analysed by western blot. Treatment with EGF served as a positive control (representative blot of two independent repeats). f Control or ERK1/2 CRISPR HeLa tBID2A cells were treated directly with venetoclax, or with supernatant from control or ERK1/2 CRISPR HeLa tBID2A cells as indicated, before harvesting and western blot analysis for protein expression (representative blot of three independent repeats). g Supernatant from control or venetoclax-treated HeLa tBID2A cells was supplemented with 500 nM trametinib as indicated before addition onto recipient cells. After 48 h of incubation, the recipient cells were treated with 500 nM venetoclax and 500 nM S63845 and survival was monitored by Sytox Green exclusion and live-cell imaging. n = 3 independent experiments; mean values ± s.e.m.; Tukey-corrected one-way ANOVA. h HeLa cells were treated with the indicated drugs (etoposide 50 µM, doxorubicin 2 µM, paclitaxel 1 µM) for 3 h, followed by 45 h incubation in regular medium. Then the supernatant was harvested, filtered and supplemented or not with trametinib (500 nM) before addition onto recipient cells. After 48 h, cells were harvested and protein expression analysed by western blot. Fold change normalised to loading control is stated below (representative blot of three independent repeats). FGF signalling mediates non-cell autonomous upregulation of BCL-2 proteins Various ligands can bind to receptors that signal through MEK-ERK, with receptor tyrosine kinases (RTKs) being prominent activators of this pathway. Therefore, to determine the paracrine mediator(s) causing BCL-2 and MCL-1 upregulation, we initially focussed on RTK pathways. To identify potential ligands present after BH3-mimetic treatment, supernatant from control or venetoclax-treated HeLa tBID2A cells was incubated with a human growth factor antibody array enabling detection of 41 different growth factors (Fig. 4a). Of the growth factors present on the array, FGF2 was increased following venetoclax treatment (Fig. 4a). Upregulation of FGF2 in the supernatant was confirmed by subsequent ELISA analysis (Fig. 4b). Addition of recombinant FGF2 to cells, at a concentration similar to what we measured in venetoclax-treated supernatant, was sufficient to upregulate BCL-2 and MCL-1 expression (Fig. 4c). To directly test the importance of FGF2 in mediating paracrine upregulation of BCL-2 proteins, we generated FGF2-deficient cell lines by CRISPR-Cas9 (Supplementary Fig. 4a). Loss of FGF2 completely suppressed the ability of venetoclax to upregulate BCL-2 and MCL-1 in a paracrine manner, supporting a key role for FGF2 (Fig. 4d). Consistent with activation of FGF-signalling, known target genes of FGF receptors (CDX2 20 , DUSP6 21 and SPRY4 22 ) were also upregulated in response to supernatant from venetoclax-treated cells (Fig. 4e). This upregulation could be blocked by adding trametinib to the supernatant before addition to recipient cells.
To investigate the requirement of FGF receptors for upregulation of BCL-2 and MCL-1, we used two different FGFR inhibitors (AZD4547 and PRN1371 23,24 ). Co-treatment of HeLa tBID2A cells with either inhibitor and venetoclax prevented BCL-2 and MCL-1 upregulation (Supplementary Fig. 4b). Similarly, supplementing supernatant from venetoclax-treated HeLa tBID2A cells with FGFR inhibitors prevented upregulation of BCL-2 and MCL-1 in recipient cells (Fig. 4f). Reduced upregulation of MCL-1 was also observed upon co-treatment of MRC-5 lung fibroblast cells with venetoclax and PRN1371 (Supplementary Fig. 4c). The FGF receptor family is composed of four different receptors 25 ; RNA-seq analysis showed that FGFR2 was barely expressed in the HeLa cells used here (Supplementary Fig. 4d). To determine which receptors were responsible for signalling the upregulation of BCL-2 and MCL-1 by FGF2, we used RNAi to knock down their expression individually or in combination. Knocking down FGFR1 and FGFR3, either individually or in combination, prevented upregulation of BCL-2 and MCL-1, either after direct treatment (Fig. 4g, Supplementary Fig. 4e) or with venetoclax-treated supernatant (Fig. 4h). In contrast, knockdown of FGFR4 failed to affect BCL-2 and MCL-1 expression (Supplementary Fig. 4f–h). We next aimed to understand how FGF2 is regulated in response to BH3-mimetic treatment. A minor yet significant increase in FGF2 mRNA was detected in both wild-type and BAX BAK-deficient HeLa tBID2A cells following venetoclax treatment (Fig. 5a); however, there was a decrease in FGF2 protein levels over time (Fig. 5b). Given the lack of increase in FGF2 protein level, we investigated whether inhibiting transcription in donor cells affected the upregulation of BCL-2 and MCL-1 in recipient cells. Supernatant from cells co-treated with venetoclax and actinomycin D still led to upregulation of MCL-1 and BCL-2 after transfer (Fig. 5c), suggesting that transcriptional regulation is not necessary. In contrast, actinomycin D prevented upregulation of MCL-1 in supernatant-treated recipient cells (Fig. 5d), corroborating the transcriptional upregulation of MCL-1 in recipient cells. These experiments demonstrate that the activation of FGF2 in response to apoptotic stress is independent of induced expression, but instead may be due to increased release from the cell. Collectively, these data demonstrate that FGF-signalling, triggered by release of FGF2 from BH3-mimetic-treated cells, is required and sufficient for non-cell autonomous upregulation of anti-apoptotic BCL-2 proteins. Fig. 4: FGF signalling mediates non-cell autonomous upregulation of BCL-2 proteins. a A growth factor membrane ligand array was probed with supernatant from control or venetoclax-treated HeLa tBID2A cells. Spots for FGF2 are indicated. b Levels of FGF2 were determined in supernatant from control or venetoclax-treated HeLa tBID2A cells by ELISA. n = 3 independent experiments; mean values ± s.e.m.; unpaired, two-sided t-test. c HeLa tBID2A cells were treated with recombinant FGF2, harvested after 6 h and protein expression analysed by western blot (representative blot of three independent repeats). d Control or FGF2 CRISPR HeLa tBID2A cells were directly treated with venetoclax, or with the respective supernatant, for 48 h as indicated before harvesting and analysis of protein expression by western blot (representative blot of three independent repeats).
e Supernatant from untreated or venetoclax-treated HeLa tBID2A cells was supplemented with 500 nM trametinib before addition onto recipient cells. After 3 h, recipient cells were harvested and RNA expression of FGF target genes analysed by RT-qPCR. n = 4 independent experiments; mean values ± s.e.m.; * p < 0.0001 compared to SN Ven; Dunnett's corrected one-way ANOVA. f Supernatant from control or venetoclax-treated HeLa tBID2A cells was supplemented with decreasing doses of FGFR inhibitors as indicated (AZD4547: 5 µM, 2.5 µM, 1.25 µM; PRN1371: 10 µM, 5 µM, 2.5 µM) before addition onto recipient cells for 48 h and analysis of protein expression by western blot (representative blot of three independent repeats). g HeLa tBID2A cells were transfected with siRNA targeting FGFR1 and FGFR3, alone or in combination, for 24 h before addition of 500 nM venetoclax, harvesting after 48 h and analysis of protein expression by western blot (representative blot of three independent repeats). h HeLa tBID2A cells were transfected with siRNA targeting FGFR1 and FGFR3, alone or in combination, for 24 h before addition of control or venetoclax-treated supernatant from control cells, harvesting after 48 h and analysis of protein expression by western blot (representative blot of three independent repeats). Fig. 5: Non-cell autonomous upregulation of BCL-2 proteins is independent of transcription. a HeLa tBID2A cells were treated with 500 nM venetoclax for the indicated time, RNA was harvested and expression of FGF2 mRNA analysed by qPCR. n = 4 independent experiments; mean values ± s.e.m.; * p < 0.0001; unpaired, two-sided t-test. b HeLa tBID2A cells were treated with 500 nM venetoclax for the indicated time, harvested and protein expression analysed by western blot. Fold change normalised to loading control is stated below (representative blot of three independent repeats). c HeLa tBID2A cells were treated with venetoclax ± actinomycin D for 3 h, followed by 45 h incubation in regular medium. Then the supernatant was harvested, filtered and added onto recipient cells. After 48 h, recipient cells were harvested and protein expression analysed by western blot (representative blot of three independent repeats). d HeLa tBID2A cells were treated with venetoclax for 3 h, followed by 45 h incubation in regular medium. Then the supernatant was harvested, filtered, supplemented with actinomycin D as indicated and added onto recipient cells. After 3 h, recipient cells were harvested and protein expression analysed by western blot (representative blot of three independent repeats). FGF signalling is essential for non-cell autonomous apoptotic resistance We next investigated the contribution of FGF-mediated BCL-2 and MCL-1 upregulation to apoptosis resistance. First, HeLa tBID2A cells were co-treated with venetoclax and inhibitors of FGF signalling (PRN1371, AZD4547) or MEK (trametinib). Whereas the venetoclax-treated cells were resistant to apoptosis induced by re-treatment with venetoclax and S63845, cells co-treated with FGF signalling inhibitors were re-sensitised to apoptosis (Fig. 6a, Supplementary Fig. 5a). As observed previously, increasing the dose of venetoclax and S63845 could restore sensitivity to venetoclax pre-treated cells. Next, the potential of FGF signalling to promote non-cell autonomous apoptotic resistance was investigated.
Supernatant was harvested from venetoclax-treated HeLa tBID2A cells and supplemented with inhibitors of the FGF signalling pathway before addition onto recipient cells. Consistent with our earlier data, supernatant from venetoclax-treated cells conferred apoptotic resistance to recipient cells (Fig. 6b, Supplementary Fig. 5b). Crucially, apoptotic sensitivity was restored upon inhibition of either FGF or MEK signalling. Again, increasing the dose of venetoclax and S63845 could kill the venetoclax pre-treated resistant cells. To investigate this further, we assessed effects on long-term clonogenic survival. Supporting earlier data, supernatant from venetoclax-treated cells could confer long-term protection from apoptosis, and sensitivity was restored by supplementation with FGF signalling inhibitors (Fig. 6c). To further investigate these findings, we transferred supernatant from venetoclax-treated control or FGF2-deficient cells (Supplementary Fig. 4a) and tested survival in response to venetoclax and S63845. While the supernatant from the wildtype cells protected from apoptosis, this protection was attenuated in cells treated with supernatant from FGF2-deficient cells (Fig. 6d). Increasing the concentration of venetoclax and S63845 could overcome the protective effect of supernatant from venetoclax-treated cells. These data support a model whereby, following apoptotic stress, cells can signal non-cell autonomous apoptotic resistance via FGF signalling, dependent on the upregulation of BCL-2 and MCL-1 (Fig. 6e). Finally, we investigated whether a similar mechanism may be evident in cancer. Interrogating a total of 25 different TCGA cancer datasets, we first removed patients with mutations in components of the FGFR signalling pathway, because these patients might show altered activation of the FGF signalling pathway independent of the non-cell autonomous pathway described in this work. We next calculated an FGF pathway activation score, defined as the mean expression of ten FGF target genes, as a proxy for activation of FGF signalling. The FGF score was then correlated with MCL-1 and BCL-2 expression to determine if increased activation of FGF signalling correlates with increased levels of pro-survival BCL-2 family gene expression. Using this approach, we identified several cancer types that displayed a correlation between FGF activation and BCL-2 and/or MCL-1 expression (Fig. 6f, Supplementary Fig. 6a, b). Next, we determined whether this correlation had an influence on disease progression by stratifying patients into groups based on FGF score and BCL-2 or MCL-1 expression. Strikingly, in three out of 25 tested cancer types, survival of the high-scoring group (high activation of FGF signalling and high expression of BCL-2 or MCL-1) was significantly decreased when compared to the low-scoring group (Fig. 6f, Supplementary Fig. 6a, b). These data suggest that FGF-mediated resistance might have a protective effect on cancer cell survival and therefore worsen prognosis.
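The stratification can be illustrated with a short R sketch; this is an assumed reconstruction, not the study's code, taking expr as a genes × patients matrix of log2 expression values, clin as a data frame of survival times and censoring status, a median split as the grouping rule, and only the three FGF target genes named earlier in the text standing in for the full set of ten.

  library(survival)
  fgf_targets <- c("CDX2", "DUSP6", "SPRY4")      # placeholders; the study used ten target genes
  fgf_score   <- colMeans(expr[fgf_targets, ])    # mean target-gene expression = FGF score
  cor.test(fgf_score, expr["MCL1", ])             # Pearson correlation with MCL-1 expression
  high <- fgf_score > median(fgf_score) & expr["MCL1", ] > median(expr["MCL1", ])
  clin$grp <- ifelse(high, "high", "low")         # high FGF score AND high MCL-1 vs. the rest
  survdiff(Surv(time, status) ~ grp, data = clin) # log-rank test
  plot(survfit(Surv(time, status) ~ grp, data = clin))  # Kaplan-Meier curves by group

Note that the median split above is only one possible cutpoint; the text specifies only that patients were stratified by FGF score and BCL-2 or MCL-1 expression.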
Fig. 6: FGF signalling is essential for non-cell autonomous apoptotic resistance. a HeLa tBID2A cells were treated with 500 nM venetoclax in combination with RTK pathway inhibitors (AZD4547 (5 μM), PRN1371 (10 μM) or trametinib (500 nM)) as indicated for 48 h. Then the cells were treated with the indicated concentrations of venetoclax + S63845 and cell survival was monitored by Incucyte. Heatmap colours show cell death at 24 h (mean from n = 3 independent experiments); corresponding survival curves are shown in Supplementary Fig. 5a. b HeLa tBID2A cells were incubated with supernatant from 500 nM venetoclax-treated cells, supplemented with RTK pathway inhibitors (AZD4547 (5 μM), PRN1371 (10 μM) or trametinib (500 nM)) as indicated, for 48 h. Then the cells were treated with the indicated concentrations of venetoclax + S63845 and cell survival was monitored by Incucyte. Heatmap colours show cell death at 24 h (mean from n = 3 independent experiments); corresponding survival curves are shown in Supplementary Fig. 5b. c HeLa tBID2A cells were plated at low density (1,000 cells per 6-well) and incubated with 500 nM venetoclax-treated supernatant supplemented with RTK pathway inhibitors (AZD4547 (5 μM), PRN1371 (10 μM) or trametinib (500 nM)) as indicated for 48 h. Cells were then treated with 2.5 μM venetoclax + S63845 for another 48 h before replacement with regular growth medium. After an additional 5 days, colonies were visualised by crystal violet staining and quantified. n ≥ 3 independent experiments; mean values ± s.e.m.; * p < 0.05 compared to all other treatments; Tukey-corrected one-way ANOVA. d Control or FGF2-deficient cells were treated for 3 h with 500 nM venetoclax followed by 45 h regular medium. Then the supernatant was harvested and added to control cells for 48 h. After that, the cells were treated with venetoclax + S63845 at the indicated doses and survival monitored by Incucyte. n = 3 independent experiments; mean values ± s.e.m.; * p < 0.05 compared to venetoclax treatment at 24 h; Tukey-corrected one-way ANOVA. e Working model. f Pearson correlation between FGF score and BCL-2 (upper left) or MCL-1 (lower left) in the TCGA Thymoma dataset (± s.e.m., 105 patients). FGF score, FGF receptor target gene and BCL-2 (upper middle) or MCL-1 (lower middle) expression in the TCGA Thymoma dataset. Survival of TCGA Thymoma patients stratified by FGF score and BCL-2 (upper right) or MCL-1 (lower right) expression (p-value calculated with a log-rank test). FGF signalling induces MCL-1 in the skin and regulates wound healing dynamics Oncogenic processes often represent subverted homoeostatic mechanisms. We therefore sought to define physiological roles for FGF-signalling-induced anti-apoptotic BCL-2 expression. Because our data are consistent with a model in which stressed cells can alert the microenvironment to increase their threshold against apoptosis, we decided to test this hypothesis in a physiologically relevant and experimentally tractable model. In the epidermis, apoptosis has been found to play an important role in skin repair 26,27 . Notably, re-epithelization of the skin relies heavily on FGF signalling 28 . We therefore set out to examine whether FGF signal transduction might also modulate MCL-1 expression in the skin upon wound infliction. We hypothesised that an injured tissue might increase its apoptotic threshold to limit damage due to cell death in areas that require regeneration. This increased resistance to apoptosis could potentially protect against excessive cell death, which could otherwise hinder tissue regeneration. Indeed, subjecting the dorsal skin of mice to 1 cm² full-thickness excisional wounds promoted upregulation of MCL-1 in keratinocytes in the vicinity of the wound (Fig. 7a). We next examined whether inhibition of FGF signalling affected MCL-1 expression by utilising the FGFR inhibitor AZD4547 and the MEK inhibitor trametinib.
We administered the inhibitors for four consecutive days by subcutaneous injection prior to infliction of the wound and harvested the skin three days post wound infliction (PWI). Our results indicate that administration of either inhibitor dramatically decreased the number of MCL-1+ cells following injury (Fig. 7b, c), suggesting that upregulation of MCL-1 in response to skin injury is dependent on FGF signalling. Fig. 7: FGFR and MEK inhibition decreases MCL-1 expression and hinders wound repair. a H & E and immunostaining of MCL-1 in unwounded skins and 3 or 7 days post wounding (PWI). Representative image of n = 15 mice; scale bar: 200 µm, inset 50 µm. The wound edge is indicated with a black broken line. b H & E and immunostaining of MCL-1 in skins 3 days post wounding in DMSO- or inhibitor-treated mice as indicated. Representative image of n = 15 mice per group; scale bar: 200 µm. The wound edge is indicated with a black broken line. c Quantification of MCL-1-positive cells in the area surrounding the wound site. n = 15 mice per group, each dot represents an individual mouse, mean values ± s.e.m.; * p < 0.0001 compared to DMSO; Tukey-corrected one-way ANOVA. d Quantification of dorsal wound coverage in DMSO- or inhibitor-treated mice over a 7-day period. n ≥ 2 mice per group, each dot represents an individual mouse, mean values ± s.e.m.; Tukey-corrected one-way ANOVA at day 7. To investigate the physiological role of FGF-mediated MCL-1 upregulation, we monitored wound closure dynamics. To this aim, we inflicted full-excisional wounds on dorsal skins of control or inhibitor-treated mice and evaluated wound coverage at specific time points PWI (Supplementary Fig. 7a). In control mice, the wound size was reduced by ~40% after just one day, whereas in inhibitor-treated mice no coverage was seen at this time point (Fig. 7d). This delay was accompanied by a decreased area of re-epithelization (Fig. 7d, Supplementary Fig. 7b, c). To examine the contribution of epidermal keratinocytes to the observed phenotypes, we next evaluated proliferation in the wound bed. We harvested wounds at three and seven days PWI. FGFR and MEK inhibition resulted in decreased proliferation, as evident by Mcm2 and Ki67 immunostaining (Supplementary Fig. 7b, d, e). The attenuation in wound closure phenotypes seen upon FGFR inhibition could potentially be mediated by decreased basal cell expansion and suprabasal migration capacity. Elegant studies have shown that in early stages of wound healing the leading edge of the wound is mostly composed of non-proliferative migratory cells that can drive rapid re-epithelialization 29,30 . Our analysis revealed that FGFR-inhibited wounds displayed a less pronounced leading edge by seven days PWI, when compared to control (Supplementary Fig. 7b, c). These findings suggest that suprabasal keratinocytes may also contribute to the attenuated wound healing phenotypes seen upon inhibition of FGF receptors and MEK. Our data indicate that FGF signalling induces the expression of MCL-1 in the skin and affects the contribution of both basal and suprabasal keratinocytes to the wound repair process. Overall, our results suggest a mechanism in which cells protect their tissue integrity by increasing the apoptotic threshold in response to stress, via FGF2-mediated upregulation of pro-survival proteins. Discussion Innate or acquired resistance to cell death is of fundamental importance in health and disease.
For instance, evasion of apoptosis can both promote cancer and inhibit treatment response, leading to tumour relapse 31 . To understand how cells can resist cell death, we used BCL-2-targeting BH3-mimetics as tool compounds. Unexpectedly, we uncovered a non-cell autonomous mechanism that enables apoptotic resistance. We found that upon apoptotic stress, cells can release the growth factor FGF2. By activating MEK-ERK signalling, FGF2 upregulates anti-apoptotic BCL-2 protein expression 32,33 in neighbouring cells, protecting against apoptosis. Importantly, resistance can be overcome by co-treatment of BH3-mimetics with FGFR inhibitors, demonstrating the functional significance of the pathway. Accordingly, in an in vivo injury model, inhibition of FGFR signalling prevents MCL-1 upregulation and apoptotic resistance, successively delaying wound healing. Finally, we describe a correlation between increased FGF signalling, anti-apoptotic BCL-2 protein expression and poor patient prognosis, suggesting its relevance in vivo. As we discuss further, the process we describe here, where cell death promotes life, may have various pathophysiological functions. Most cancer therapies work by killing tumour cells; consequently, resistance to cell death profoundly impacts therapeutic efficacy 31 . Typically, cancer cells can evade apoptosis through inactivating mutations in pathways that sense damage or activate cell death 31 . For instance, in chronic lymphocytic leukaemia (CLL), BCL-2 mutations have recently been reported that render BCL-2 unable to effectively bind the BH3-mimetic venetoclax, causing apoptotic resistance 34,35,36,37 . The resistance mechanism we report here is not mutation-based, but instead is due to dying cells releasing FGF2 that causes transient apoptotic resistance in surrounding cells. This mechanism fits the concept of persistence, which has emerged to explain transient, non-heritable resistance of tumour cells to therapy 38 . Cancer cells that transiently evade cell death, called persister cells, blunt the effectiveness of chemotherapy and provide a cell pool from which drug-resistant tumours may arise. The mechanism we describe herein represents a non-cell autonomous way of generating persistence (via FGF2 release) that requires activation of the core apoptotic pathway. Suggestive of its relevance in vivo, we find that in certain cancer types there is a correlation between FGF-signalling, anti-apoptotic BCL-2 protein expression and poor prognosis. Importantly, we find that inhibition of FGF receptor signalling, or of downstream MEK-ERK signalling, greatly impedes the ability of stressed cells to promote survival in a non-cell autonomous manner. Our data show that apoptotic resistance is transient, lasting multiple days, after which cells become sensitised again to BH3-mimetics. Extrapolating these findings to a clinical setting, one possibility may be to combine apoptosis-inducing cancer therapy with FGFR inhibitors. Additionally, our data suggest that intermittent dosing of apoptosis-inducing therapies, employing so-called drug holidays, may help circumvent apoptotic resistance. Some molecular and cellular mechanisms are known to be shared by both wounds and cancers, bringing forward the concept of tumours as over-healing wounds 39 . Tissue integrity is essential for the proper functioning of any multicellular organism. To maintain this integrity, breaches have to be refilled, for instance by increased migration and proliferation.
Epithelial tissue such as skin is often subjected to injury, and efficient repair is necessary to close the wound and to restore proper barrier functions. Wound repair is facilitated by growth factors such as FGF2, which promote cell proliferation and angiogenesis 40,41 . Interestingly, secretion of FGF2 was shown to be induced by shear stress 42 , and activation of ERK signalling is observed in several instances of tissue regeneration 43,44 . Pro-survival ERK signalling engaged by EGF released by apoptotic cells has recently been found to promote tissue integrity 45,46 . Importantly, while we find that FGF2 can be released from viable cells under conditions of apoptotic stress, EGF release required apoptosis. This highlights that pro-apoptotic signalling can engage multiple pro-survival mechanisms. The role of BCL-2 proteins in skin injury has not been extensively studied; however, MCL-1 seems to be involved in keratinocyte survival and differentiation 47 . Our data now provide a potential connection between FGF2 and MCL-1 in wound healing: in response to cell stress due to injury, cells secrete FGF2 to induce pro-survival MCL-1 expression in neighbouring cells and to increase their apoptotic threshold. Increasing resistance to death in this setting might serve several purposes, for example limiting sustained damage by preventing extensive cell death, or promoting regeneration of the tissue by protecting heavily proliferating cells from death. Inhibiting FGF signalling during wounding, or decreasing the apoptotic threshold with BH3-mimetics, therefore delays wound healing. An intriguing avenue for the future would be to examine how signals released from apoptotic cells can be harnessed to facilitate tissue regeneration. A remaining question is how apoptotic stress leads to FGF2 release in dying cells. FGF2 is secreted from cells in a non-conventional manner that remains to be fully elucidated 48 . Importantly, while FGF2 release occurs during caspase-dependent apoptosis, it is not dependent on apoptosis: neither inhibition of caspase function nor blockade of MOMP prevents FGF2 release. This suggests that neutralisation of anti-apoptotic BCL-2 proteins exerts a non-lethal signalling function, leading to FGF2 release. Indeed, a variety of non-apoptotic functions for BCL-2 have been reported previously, for instance roles in calcium signalling or metabolism 49 . Alternatively, BH3-only proteins may be responsible for this FGF2 release. FGF2 is secreted in a non-canonical manner, which involves its binding to PI(4,5)P2 at the plasma membrane followed by insertion of FGF2 oligomers through the membrane 50 . Although tBID has been described to interact with various membranes such as artificial liposomes 51 , lysosomes 52 or mitochondria 53 , it remains to be determined if this function of tBID can be extended to FGF2 secretion. Our data further emphasise that cell death exerts a plethora of non-cell autonomous effects. These include context-dependent pro-proliferative, inflammatory and apoptotic activities 54,55,56 . The mechanism we describe here represents an FGF-driven pro-survival effect of apoptotic stress. As discussed, this effect may have important implications for dictating the therapeutic efficacy of apoptosis-inducing cancer therapy. Beyond this, we show that apoptotic stress-induced survival signalling may also have a physiological role linking tissue stress to tissue repair.
Methods Cell lines and chemicals HeLa and 293T cells were purchased from ATCC, HeLa tBID2A BCL-2 cells were previously described 9 , MRC5 and IMR90 cells were a gift from Peter Adams, Beatson Institute, and CWR-R1 cells were a gift from Arnaud Blomme, Beatson Institute. Cell lines were not authenticated. Cells were regularly tested negative for mycoplasma. HeLa, HeLa tBID2A BCL-2 cells 9 , IMR90, MRC5 and 293T cells were cultured in DMEM high‐glucose medium supplemented with 10% FCS and 2 mM glutamine. CWR-R1 cells were cultured in RPMI high‐glucose medium supplemented with 10% FCS and 2 mM glutamine. MRC5 and IMR90 cells were cultured in 3% oxygen. To select for venetoclax-resistant cells, HeLa tBID2A BCL-2 cells were cultured continuously in the indicated dose of venetoclax for 14 days, or cultured for 8 h in venetoclax followed by 16 h in normal medium daily. The following drugs and chemicals were used: ABT-199/venetoclax (AdooQ BioScience, A12500-50), ABT-263/Navitoclax (ApexBio, A3007), ABT-737 (ApexBio, A8193), Actinomycin D (Calbiochem, 114666), AZD4547 (Selleck, S2801), Chir99021 (GSK3 inhibitor, final concentration 3 µM, gift from D. Murphy), Cycloheximide (Sigma, 1810), Doxorubicin (Sigma, D1515), EGF (Sigma, E4127), Etoposide (Sigma, E1383), FGF2 (Thermo, PHG0263), GSK690693 (AKT inhibitor, final concentration 1 µM, gift from Daniel Murphy, University of Glasgow), Paclitaxel (Sigma, T7191), PRN1371 (Selleck, S8578), Propidium iodide (Sigma, P4170), Proteinase K (Thermo, 25530049), Pyr41 (E1 Ubiquitin ligase inhibitor, final concentration 50 µM, Sigma, N2915), qVD-OPh (AdooQ BioScience, A14915-25), Rapamycin (mTOR inhibitor, final concentration 1 µM, Santa Cruz, sc-3504), S55746 (ProbeChem, PC-63502), S63845 (Chemgood, C-1370), Sytox Green (Thermo, S7020), Syto 21 (Sigma, S7556) and trametinib (MEK inhibitor, Cambridge Bioscience, HY-10999). Lentiviral transduction CRISPR-Cas9‐based genome editing was performed using LentiCRISPRv2‐puro (Addgene #52961) or LentiCRISPRv2‐blasti 9 using the following guide sequences: hBAX, 5′-AGTAGAAAAGGGCGACAACC-3′; hBAK, 5′-GCCATGCTGGTAGACGTGTA-3′; hERK1, 5′-CAGAATATGTGGCCACACGT-3′; hERK2, 5′-AGTAGGTCTGATGTTCGAAG-3′; hFGF2.1, 5′-TATGCAAGTCCAACGCACTG-3′ and hFGF2.2, 5′-CGAGCTACTTAGACGCACCC-3′. For stable cell line generation, 5 × 10⁶ 293FT cells were transfected in 10 cm dishes using 4 μg polyethylenimine (PEI, Polysciences) per μg plasmid DNA, with lentiCRISPRv2 (Addgene, 52961), gag/pol (Addgene, 14887) and pUVSVG (Addgene, 8454) at a 4:2:1 ratio. At 48 and 72 h after transfection, virus-containing supernatant was filtered, supplemented with 1 μg/ml polybrene (Sigma) and added to 50,000 recipient cells in a 6-well plate. Selection with appropriate antibiotics (1 μg/ml puromycin (Sigma) or 10 μg/ml blasticidin (InvivoGen)) was started 24 h after the last infection and continued for one week 57 . Supernatant assays Generally, cells were treated for 3 h with 500 nM venetoclax, then the medium was replaced with regular growth medium. After an additional 45 h, the supernatant was harvested, filtered and added onto recipient cells. For Proteinase K treatment, supernatant was treated with 200 μg/ml Proteinase K for 60 min at 50 °C, followed by 5 min at 95 °C. After cooling down, the treated supernatant was added to recipient cells. For centrifugal filtration, supernatant was filtered and added into an Amicon Ultra 15 ml tube (Merck) with a 3 kDa cut-off.
After spinning at 4,000g for 60 min, the concentrate was diluted with regular growth medium to its original volume, and the concentrate or the flowthrough was added onto recipient cells. The FGF2 ELISA was performed using the Human FGF-basic ELISA MAX Deluxe (Biolegend) according to the manufacturer’s instructions after 50× concentration of the supernatant by centrifugal filtration (see above). The final FGF2 concentration was determined using a standard of recombinant FGF2, taking into account the concentration step. Plasmid and siRNA transfection For plasmid transfection, Lipofectamine 2000 was used according to the manufacturer’s instructions. Transfection of siRNA was performed using Oligofectamine according to the manufacturer’s instructions. The following siGENOME SMARTpool siRNAs from Dharmacon were used: Non-targeting, D0012061305; hFGFR1, M-003131-03-0005; hFGFR3, M-003133-01-0005 and hFGFR4, M-003134-02-0005. Western blotting Cell lysates were prepared using lysis buffer (1% NP-40, 0.1% SDS, 1 mM EDTA, 150 mM NaCl, 50 mM Tris pH 7.5, supplemented with Complete Protease Inhibitors (Roche) and PhosSTOP (Roche)). Protein concentration was determined with Bio-Rad Protein Assay Dye (5000006), and lysates were separated by SDS-PAGE followed by blotting onto nitrocellulose membranes and incubation with primary antibody (1:1000 in 5% milk) overnight. After washing in TBS/T and incubation with secondary antibody (Li-Cor IRDye 800CW donkey anti-rabbit, 926-32213, dilution 1:20,000), blots were developed on a Li-Cor Odyssey CLx system and acquired using Imagestudio (Li-Cor). The following primary antibodies were used: Actin (A4700, Sigma), BAK (12105, Cell Signaling), BAX (2772, Cell Signaling), BCL-2 (4223, Cell Signaling), ERK1/2 (4695, Cell Signaling), basic FGF (20102, Cell Signaling), FGFR1 (9740, Cell Signaling), FGFR3 (4574, Cell Signaling), FGFR4 (8562, Cell Signaling), GFP (in house), HSP60 (4870, Cell Signaling), MCL-1 (5453, Cell Signaling), pERK1/2 (4370, Cell Signaling), Caspase 3 (9662, Cell Signaling), Caspase 9 (9502, Cell Signaling), PARP1 (9532, Cell Signaling), alpha-Tubulin (T5168, Sigma) and cleaved Caspase 3 (9664, Cell Signaling). Quantitative RT-PCR RNA from cultured cells was isolated with the GeneJET RNA purification kit according to the manufacturer’s instructions. cDNA synthesis was performed according to the manufacturer’s instructions using the High Capacity cDNA Reverse Transcription Kit (Thermo), and qPCR was performed with the Brilliant III Ultra‐Fast SYBR Green qPCR Master Mix (Agilent Technologies) on a QuantStudio 3 machine (Applied Biosystems) with the following programme: 3 min at 95 °C; 40 cycles of 20 s at 95 °C, 30 s at 57 °C and 30 s at 72 °C; and a final 5 min at 72 °C. Results were analysed using the 2^−ΔΔCt method. Primer sequences can be found in Supplementary Table 1 . Cell death assays Short-term cell death was determined with an Incucyte FLR or Zoom imaging system (Essen Bioscience) 58 . Cells were treated as indicated in the figure legend together with 30 nM Sytox Green and imaged every 1 or 2 h. Analysis was performed using the Incucyte software, and the number of dead (Sytox Green-positive) cells was normalised to the confluency at t = 0. Alternatively, cells were pre-treated for 1 h with 50 nM Syto 21, followed by cytotoxic treatments as indicated, together with 5 μg/ml propidium iodide. Long-term colony formation assays were performed by plating 1,000 cells per well of a 6-well plate and treatment as described in the figure legend.
After 48 h of treatment with supernatant, the medium was changed to 2.5 μM venetoclax/S63845. Medium was changed to regular growth medium after an additional 48 h, and resulting colonies were stained with crystal violet after an additional 5 days. Cell death analysis via FACS was performed using Annexin V/propidium iodide staining 59 . In short, treated cells were harvested and stained with 5 μg/ml propidium iodide and Annexin V (Biolegend) in Annexin V-binding buffer (10 mM Hepes pH 7.4, 140 mM NaCl, 2.5 mM CaCl 2 ) for 15 min. Flow cytometry was conducted on a BD FACSCalibur machine with BD CellQuest software and analysed using Flowing software; cells negative for propidium iodide and Annexin V were considered alive. Membrane ligand array Supernatant was harvested from HeLa tBID2A cells as described above and used to probe a Human Growth Factor Antibody Array (Abcam, ab134002) according to the manufacturer’s instructions. Bioinformatic analysis Mutation, survival and gene expression data (Thymoma TCGA Firehose Legacy) were downloaded from cBioPortal ( ). Samples with alterations (deletions, mutations or amplifications) in components of the FGFR signalling pathway (FGFR1, FGFR3, GRB2, FRS2, SOS1, HRAS, RAF1, MAPK1, MAPK3) were removed. The FGF score was determined by averaging the expression of the FGF2-induced gene set (CDH2, CDX4, FES, FGF2, FRS2, FYN, HOXA10, INHBA, MAP2K1, MAPK8, MMP7, PF4, RUNX2, SERPINF1) from the Harmonizome database ( ). Pearson correlation was calculated between the FGF score and BCL-2 or MCL-1 expression. Samples were stratified into thirds based on FGF score, BCL-2 or MCL-1 expression. Survival was analysed by comparing samples with a high FGF score and high expression of BCL-2 or MCL-1 against samples with a low FGF score and low expression of BCL-2 and MCL-1. The code used is available as a supplementary software file (Supplementary Software 1 ). Wound healing experiments For all wound repair experiments, C57BL/6J mice were sex- and age-matched (8 weeks old) and randomly assigned to different treatment groups. Mice were first shaved, and full-thickness 1 cm 2 excision wounds were made on the dorsal skin. Following wound infliction, mice were euthanized with CO 2 at various time points, and wounded skins were embedded in OCT for analysis. Housing, care and wounding experiments were approved by the ethical committee of the Technion - Israel Institute of Technology. Immunofluorescence and Hematoxylin and Eosin staining Skins frozen in OCT were sectioned at 12 µm and fixed in 4% paraformaldehyde for 20 min at room temperature. Samples were then blocked for 2 h, followed by incubation with primary antibodies diluted in blocking buffer overnight at 4 °C. Following washing, samples were incubated with secondary antibodies (Alexa Fluor 488/546/633, Thermo) for 1 h at room temperature. The following primary antibodies were used: MCL-1 (1:800, 5453, Cell Signaling), Ki67 (1:100, 9882, eBioscience), Mcm2 (1:500, 4461, Abcam), active ItgB1 (1:100, BD, 550531) and Alexa Fluor 568 phalloidin. Sample analysis was performed on a Zeiss LSM 880 confocal microscope and analysed using Zen software. Sections were first treated with Hematoxylin, followed by H 2 O, differentiator (0.3% alcoholic HCl), H 2 O, 95% EtOH, Eosin, 95% EtOH, twice in 100% EtOH and three times in xylene, before mounting in xylene-based mounting medium. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article.
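To make the stratification procedure in the bioinformatic analysis above concrete, the following sketch re-expresses it in Python (the published analysis was carried out in R; see Code availability below). The expression-matrix layout and gene-symbol column names (for example, BCL2 and MCL1 for the BCL-2 and MCL-1 transcripts) are illustrative assumptions, not the authors' code.

```python
# Illustrative Python re-expression of the FGF-score analysis described above.
# Assumes `expr` is a samples x genes expression matrix (e.g. from cBioPortal);
# the column names used here are assumptions for illustration.
import pandas as pd
from scipy.stats import pearsonr

FGF2_INDUCED = ["CDH2", "CDX4", "FES", "FGF2", "FRS2", "FYN", "HOXA10",
                "INHBA", "MAP2K1", "MAPK8", "MMP7", "PF4", "RUNX2", "SERPINF1"]

def fgf_score(expr: pd.DataFrame) -> pd.Series:
    """FGF score = mean expression of the FGF2-induced gene set per sample."""
    genes = [g for g in FGF2_INDUCED if g in expr.columns]
    return expr[genes].mean(axis=1)

def analyse(expr: pd.DataFrame) -> None:
    score = fgf_score(expr)
    for target in ("BCL2", "MCL1"):  # gene symbols for BCL-2 / MCL-1
        r, p = pearsonr(score, expr[target])
        print(f"FGF score vs {target}: r = {r:.2f}, p = {p:.2e}")
    # Stratify samples into thirds by FGF score, as described in the text.
    tertile = pd.qcut(score, 3, labels=["low", "mid", "high"])
    print(tertile.value_counts())
```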
Data availability Source data are provided with this paper. The TCGA data used in this study are available in the cBioPortal database under [ ], [ ] and [ ]. The FGF score was based on the fgf2induced dataset from the Harmonizome [ ]. The remaining data are available within the Article and Supplementary Information . Code availability Bioinformatic analysis was conducted in R; the code used is available as a supplementary software file. | Glasgow scientists have uncovered important information about how cancer cells become resistant to certain treatments, according to research published in Nature Communications. Scientists at the University of Glasgow and Cancer Research UK Beatson Institute have found that dying cancer cells release a protein called FGF2, which helps neighboring cancer cells temporarily cheat death and gives them the opportunity to regrow into new drug-resistant tumors. The scientists believe that this crucial information could help make chemotherapy more effective, reducing the chances of relapse in cancer patients. In the lab, the scientists tested cancer cells with a drug called venetoclax, which stops cancer from growing by activating the cell's self-destruct mechanism, known as programmed cell death. Venetoclax is commonly used to treat chronic lymphocytic leukemia (CLL) and acute myeloid leukemia (AML). The scientists found that adding dead cancer cells, which had been given venetoclax, to live cancer cells resulted in those live cancer cells becoming resistant to the drug. They identified that FGF2 was responsible for protecting the live cancer cells from death caused by venetoclax. Further experiments showed that removing FGF2 made the cells susceptible to the drug again. Professor Stephen Tait from the Cancer Research UK Beatson Institute, who led the research, said: "Drug resistance is a major complication in many different types of cancer. Our study has shown for the first time how cancer cells can become resistant to treatment and how we can overcome that resistance. "Cancer will take every opportunity it can find to cheat death, but we believe we can stop it from escaping by using existing treatments more effectively." Interestingly, when the scientists looked at tissue samples from cancer patients and examined the levels of FGF activation in those samples, they found that patients with higher levels of FGF activation were given a poorer prognosis by their doctors. This finding suggests a potential link between the cancer's ability to cheat death and the patient's chances of long-term survival. Professor Tait said: "Understanding how drug resistance builds in cancer will allow us to use existing treatments more effectively, as well as helping us to develop new treatments that minimize the chance of cancer coming back. "Further research is needed to confirm the clinical significance of our findings, but we believe our research could help us target treatments more effectively to the characteristics of each patient's tumor." Leukemia is the 12th most common cancer in the UK. There are around 9,907 new cases in the UK each year. Graeme Sneddon, Cancer Research UK spokesperson for Scotland, said: "This research increases our understanding of how drug resistance develops in cancer at a cellular level. "It shows for the first time how cancer cells can temporarily cheat death and regroup, with significant implications for how we use chemotherapy. 
With this vital information, we can stop cancer from fighting back by smarter use of existing treatments. "Further studies are now needed in patients to see which drug combinations and doses work best to overcome potential resistance and help more people survive cancer." | 10.1038/s41467-021-26613-0 |
Medicine | Ingestible sensor could help doctors pinpoint gastrointestinal difficulties | Saransh Sharma, Location-aware ingestible microdevices for wireless monitoring of gastrointestinal dynamics, Nature Electronics (2023). DOI: 10.1038/s41928-023-00916-0. www.nature.com/articles/s41928-023-00916-0 Journal information: Nature Electronics | https://dx.doi.org/10.1038/s41928-023-00916-0 | https://medicalxpress.com/news/2023-02-ingestible-sensor-doctors-gastrointestinal-difficulties.html | Abstract Localization and tracking of ingestible microdevices in the gastrointestinal (GI) tract is valuable for the diagnosis and treatment of GI disorders. Such systems require a large field-of-view of tracking, high spatiotemporal resolution, wirelessly operated microdevices and a non-obstructive field generator that is safe to use in practical settings. However, the capabilities of current systems remain limited. Here, we report three dimensional (3D) localization and tracking of wireless ingestible microdevices in the GI tract of large animals in real time and with millimetre-scale resolution. This is achieved by generating 3D magnetic field gradients in the GI field-of-view using high-efficiency planar electromagnetic coils that encode each spatial point with a distinct magnetic field magnitude. The field magnitude is measured and transmitted by the miniaturized, low-power and wireless microdevices to decode their location as they travel through the GI tract. This system could be useful for quantitative assessment of the GI transit-time, precision targeting of therapeutic interventions and minimally invasive procedures. Main Localization and tracking of wireless microdevices in the gastrointestinal (GI) tract with high spatiotemporal accuracy is of high clinical value 1 . It can enable the continuous monitoring and transit-time evaluation of the GI tract, which is essential for accurate diagnosis, treatment and management of GI motility disorders such as gastroparesis, ileus and constipation 2 , 3 . GI motility disorders are also increasingly associated with a variety of metabolic and inflammatory disorders such as diabetes mellitus and inflammatory bowel disease. Together, these GI disorders affect more than one-third of the population globally and impose a considerable burden on healthcare systems 3 . High resolution and real-time tracking of wireless microdevices in the GI tract can also benefit anatomically targeted sensing and therapy, localized drug delivery, medication adherence monitoring, selective electrical stimulation, disease localization for surgery, three-dimensional (3D) mapping of the GI anatomy for pre-operative planning and minimally invasive GI procedures 1 , 2 , 3 , 4 . The current gold-standard solutions for these procedures include invasive techniques such as endoscopy and manometry, or procedures that require repeated use of potentially harmful X-ray radiation such as computerized tomography (CT) and scintigraphy 1 , 2 , 3 , 4 , 5 . These techniques also require repeated evaluation in a hospital setting, which can confound observations given the recognized variability in motility and activity. Ideally, GI monitoring would be carried out in real-world ambulatory settings through portable and non-invasive procedures without causing patient discomfort. Alternative approaches—including video capsule endoscopy (VCE) and wireless motility capsules—allow monitoring of the GI tract in real-world settings without interruption to daily activities 6 , 7 , 8 , 9 , 10 , 11 . 
Wireless motility capsules are orally administered and track the pH, pressure and temperature along the GI tract, whereas VCE can augment the measurements by also acquiring video. However, these methods lack direct measurement of the capsule’s location in the GI tract, which needs to be inferred from the acquired data, thus allowing only large-scale organ mapping 6 , 7 , 8 . X-ray radiographs, in comparison, can measure the ingested capsule’s real-time location with an accuracy of around 500 µm 12 . Another drawback of VCE is the limited acquisition time of 12 h, which is much shorter than the total GI transit time (around 24–72 h) 6 , 7 , 8 . Electromagnetic (EM)-field-based tracking approaches have also been reported for the localization of sensors and devices in vivo. Two-dimensional (2D) localization of a magnetic sensor in a field-of-view (FOV) spanning less than 2 cm has been achieved using magnetic field gradients generated by bulky permanent magnets 13 . However, the sub-Tesla-level magnetic field produced by permanent magnets in this approach poses a high safety risk when magnetic materials are used in the vicinity. Alternative approaches that localize a magnet moving through the GI tract using an external array of magnetic sensors have also been explored 14 , 15 , 16 , 17 , 18 but have insufficient FOV and poor spatial resolution, which degrades sharply when multiple magnets are used. Commercial systems using EM-based tracking of sensors have also been developed 19 , 20 . However, the simultaneous requirements of high FOV, planarity and efficiency of the field generator, safety with magnetic materials and metals, fully wireless operation and miniaturization of the devices, high spatiotemporal resolution, automated and real-time data analysis and system scalability with the number of devices have not been met by existing systems in the context of GI monitoring. In this Article, we report a platform for localizing and tracking wireless microdevices inside the GI tract in real time and in non-clinical settings, with millimetre-scale spatial resolution and without using any X-ray radiation. This is achieved by creating 3D magnetic field gradients in the desired FOV using high-efficiency planar coils, which uniquely encode each spatial point. It is challenging to generate 3D field gradients using planar EM coils in the absence of a strong background field. We overcome this by employing gradients in the total magnitude of the magnetic field instead of only the Z component, and by using a combination of gradient fields to produce monotonically varying field magnitudes in a large and scalable FOV. We design miniaturized and wireless devices—termed ingestible microdevices for the anatomical mapping of the GI tract (iMAG)—to sense and transmit their local magnetic field to an external receiver. The receiver maps the field data to the corresponding spatial location, allowing real-time position tracking of the iMAG devices as they move through the GI tract. Although the concept of frequency encoding similar to magnetic resonance imaging (MRI) has been previously explored 13 , we use direct spatial encoding with magnetic field gradients to create a more accurate and energy-efficient system (Supplementary Fig. 1 ). System concept Our system uses high-efficiency planar EM coils to generate 3D magnetic field gradients in an FOV spanning the entire GI tract (Fig. 1 ). 
The field gradients are generated in a time-multiplexed sequence such that at any given time, the principal magnetic-field gradient occurs along a single axis. Using the field measurements along three orthogonal axes, the 3D position of the device can be unambiguously decoded. The complete iMAG system can be readily deployed in various non-clinical settings such as smart toilets, wearable jackets or portable backpacks, thus allowing real-time GI tract monitoring without disrupting the daily activities of the patient. A prototype animal chute with gradient coils is designed in this work for evaluation in large animal models, as discussed later. Another prototype with the gradient coils attached to a toilet seat is designed for an at-home system (Extended Data Fig. 6 ), demonstrating the use of our technology for chronic and non-invasive human applications. Fig. 1: System overview. Overview of the complete magnetic-field-gradient-based tracking system is shown. A wireless iMAG is shown inside the patient. An external smartphone/receiver sends a wireless ping signal to the iMAG to measure its local magnetic field. The measured field value is transmitted by the iMAG to the receiver, which maps it to the corresponding spatial coordinates and displays the 3D location in real time. The magnetic field is generated by planar EM coils placed behind the patient’s back, which can be customized to form a wearable jacket, put into a backpack with batteries or attached to a toilet seat for continuous GI tract monitoring. The field generated by the electromagnets is strictly monotonic in nature, resulting in a magnetic field gradient that uniquely encodes each spatial point. Full size image The spatial localization resolution obtained by our system in each dimension (Δ x ) is given by $$\Delta x = \Delta B/G,$$ (1) where Δ B is the iMAG’s field-measurement resolution and G is the applied magnetic field gradient along the corresponding axis. The goal for the iMAG is to have Δ B = 3 µT and G > 3 mT m –1 across the entire FOV to achieve a localization resolution of 1 mm. To localize the devices along each axis (Fig. 2a ), a monotonically varying magnetic field is generated that has a gradient in its total magnitude along the same axis. A simplified view for the X axis is shown in Fig. 2b that results in equation ( 2 ). The three orthogonal components of the field ( \({\hat {x}}B_X,{\hat {y}}B_X,{\hat {z}}B_X\) ) measured by each device (Fig. 2b ) are used for computing the field magnitude at the device’s location, as described in equation ( 3 ). The field magnitude can then be mapped to the corresponding spatial coordinate. $$\left\| {B_{X1}} \right\| < \left\| {B_{X2}} \right\| < \left\| {B_{X3}} \right\|,$$ (2) $$\left\| {B_{Xi,\,i = 1,2,3}} \right\| = \sqrt {({\hat {x}}B_{Xi})^2 + ({\hat {y}}B_{Xi})^2 + ({\hat {z}}B_{Xi})^2}$$ (3) Fig. 2: iMAG device architecture and characterization. a , The iMAG device is shown inside the patient’s GI tract with magnetic field gradients present along the three axes (shown only along the X and Y axes for simplicity). b , Conceptual overview of the 3D localization of magnetic sensing devices D 1 −D 3 . A monotonically varying magnetic field is generated to result in a field gradient along the X axis. Each device measures the total field magnitude at its location, which is unique for each point along the X axis, thereby allowing one-to-one mapping from field to position. This process is repeated for localization along the Y and Z axes. 
c , The iMAG consists of a 3D magnetic sensor, a BLE microprocessor to communicate with the sensor, an antenna to communicate with the external receiver and coin-cell batteries for power. d , e , Top ( d ) and bottom ( e ) views of the iMAG PCB showing the placement of critical circuit blocks. f , The iMAG is encapsulated to form a cylindrical pill measuring 20 mm in length and 8 mm in diameter. g – i , Complete communication protocol between the smartphone, receiver board, iMAG devices and gradient coils is illustrated. j – m , In vitro characterization of the communication range between the iMAG and receiver board. The range was measured for 250 ml HCl with varying pH ( j ), 500 ml NaCl solution with varying salt concentrations ( k ), and SGF ( l ) and SIF ( m ) with varying quantities. All range values greater than 1.0 m are labelled as 1.1 m, since a range of 1 m is sufficient here. Full size image Since the magnetic field B X has a net gradient in its magnitude along the X axis, the X gradient G X is defined as $$G_X = \partial \left\| {B_X} \right\|/\partial x.$$ (4) The process is then repeated for the Y and Z axes to localize the devices along them. By employing gradients in the magnitude of the field along each axis, our localization technique is immune to potential inaccuracies caused by device misalignment relative to any coordinate system. iMAG design and characterization We designed the 3D magnetic-field-sensing iMAG devices with the following specifications: high-resolution field measurement (3 µT per least significant bit, or LSB); wireless operation at the 2.4 GHz Bluetooth frequency; ultralow power for prolonged battery life (2–4 weeks); small form factor (20 mm length and 8 mm diameter); and biocompatibility. The iMAG device (Fig. 2c–e ) consists of a 3D magnetic sensor that can measure and digitize magnetic field values to a 16-bit digital vector with 3 µT accuracy ( Methods ). A Bluetooth low-energy (BLE) microprocessor interfaces with the sensor over the inter-integrated circuit (I2C) protocol. The digitized field vectors received by the microprocessor are sent to a 2.4 GHz Bluetooth antenna for wireless transmission to the external receiver. Coin-cell batteries are used for power. The iMAG is fabricated by assembling these low-cost and commercially available components on a custom-designed printed circuit board (PCB) and packaged into a biocompatible polydimethylsiloxane mould with the size of an ingestible 000 capsule (Fig. 2f ). Supplementary Fig. 15 summarizes the power consumption of the iMAG under different operating modes. The transmit power was set to 4 dBm to ensure maximum connectivity with the receiver. An advertising interval of 2.5 s and a connection interval of 50 ms were chosen as a compromise among connection speed, stability and battery life. All non-essential peripherals were deactivated to ensure that the maximum continuous current was within the discharge limits of the battery. To measure the 3D location of the iMAG, an external smartphone sends a wireless ping signal to an nRF52 development board, used as the receiver in this work (Fig. 2g ). The ping signal is relayed by the receiver to the iMAG devices to trigger magnetic field measurements at their appropriate times (Fig. 2h ). With its input/output pins connected to the gradient coils’ ENA (enable) switches, the receiver also activates the required sequence of coil combinations to generate the magnetic field gradients.
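As a rough illustration of this coil-sequencing logic, the sketch below walks through one time-multiplexed measurement cycle (the phase timing is detailed in Fig. 3e, discussed later; the Z coil stays on during the X and Y phases). The helper functions for driving the ENA switches and pinging the device are hypothetical placeholders, not the actual receiver firmware.

```python
# Minimal sketch of one time-multiplexed measurement cycle (cf. Fig. 3e).
# enable_coils, ping_imag and read_field are hypothetical stand-ins for the
# receiver's GPIO/BLE routines.
import time

RISE_TIME_S = 0.010  # 10 ms gradient rise time quoted in the text

def measure_cycle(enable_coils, ping_imag, read_field):
    """Returns the three-axis field readings taken during the X, Y and Z phases."""
    fields = {}
    # The Z coil is kept on during the X and Y phases so that the combined
    # field magnitude varies monotonically along each axis.
    for phase, coils in (("X", ("X", "Z")), ("Y", ("Y", "Z")), ("Z", ("Z",))):
        enable_coils(coils)           # close the ENA switches for this phase
        time.sleep(RISE_TIME_S)       # wait for the gradient to settle
        ping_imag()                   # trigger a sensor measurement over BLE
        fields[phase] = read_field()  # 3-axis reading; magnitude taken later
    enable_coils(())                  # all coils off between cycles
    return fields
```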
On reception of the measured magnetic-field data values from the connected iMAGs (Fig. 2i ), the receiver displays them on a computer screen using the universal asynchronous receiver/transmitter (UART) protocol. The receiver concurrently runs a search algorithm to retrieve the 3D spatial coordinates corresponding to the magnetic field data ( Methods ). The magnetic field value at each point in the FOV is measured and stored in a look-up table (LUT) during a prior characterization phase ( Methods ). The search algorithm uses the LUT from the characterization phase for position retrieval. The communication range is defined as the longest distance between the iMAG and receiver before losing connection (when received signal strength at the receiver is less than −96 dBm), and is evaluated under various in vitro settings. First, the iMAG was submerged in an HCl solution to mimic the acidic and fluid-filled gastric cavity. For a pH of 2–6, the range was >1 m (Fig. 2j ). For a typical gastric pH from 1.2 to 3.5, the range varied from 30 cm to >1 m, respectively. At lower pH values, the concentration of the freely dissociated H + and Cl − ions increases exponentially in the solution. These ions absorb the 2.4 GHz radio-frequency (RF) signal, with the absorption being proportional to the ionic concentration, and lead to a lower signal strength at the receiver 21 . Intestinal pH is 4.5–6.5, for which the range was >1 m. Second, the range was tested in different concentrations of NaCl solution (saline). For an NaCl concentration of 0.2% w/v (similar to gastric fluid), the range was >1 m (Fig. 2k ). For NaCl concentrations from 0.6 to 1.0% w/v (similar to intestinal fluid), the range varied from 60 cm to >1 m, respectively. Third, the range obtained in simulated gastric fluid (SGF) was >50 cm up to 250 ml SGF (Fig. 2l ). Finally, the range in simulated intestinal fluid (SIF) was >1 m for up to 250 ml SIF (Fig. 2m ) ( Methods ). Here 250 ml SGF and SIF are chosen as they represent the mean GI fluid volume under fasting and fed conditions 22 , 23 . We could similarly communicate with the iMAG over 1 m away when placed in 250 ml porcine gastric juice. The next step is to evaluate the communication range under in vivo settings in the presence of several layers of body tissue—skin, fat, muscle and organs—which not only cause the absorption of RF signal by the ionic and dipole concentrations in these layers but also result in multipath reflections due to the heterogeneous nature of tissue 24 . When the iMAG was endoscopically placed in the gastric cavity of an anaesthetized pig, the in vivo range was >1 m. Thus, the iMAG achieves a sufficient communication range for evaluation in a large animal model. For evaluating the communication time, the iMAG was submerged in an HCl solution (pH 1.5) for two weeks, which is towards the higher end of GI transit time in pigs, and was communicated with every few hours ( Methods ). EM coils for 3D magnetic field gradients We designed EM coils to generate the gradients G X , G Y and G Z to be ≥3 mT m –1 across the entire FOV (40 × 40 × 20 cm 3 ; Fig. 3a ) to ensure a resolution of 1 mm across all the three spatial dimensions. To create a Z -axis gradient in the magnetic field magnitude, a planar circular coil is designed to carry current in the counterclockwise direction (Fig. 3a , Z coil), producing a monotonically decaying magnetic field magnitude as the Z distance from the coil is increased (Fig. 3b ). The d.c. 
current is chosen as 15 A to get G Z > 3 mT m –1 at all the boundary planes of the FOV. Fig. 3: Magnetic-field-gradient generation and characterization. a , Magnetic-field-gradient-generating coils for X , Y and Z are shown. The Z coil consists of a single spiral carrying current in one direction. The X coil consists of two elongated spirals carrying currents in opposite directions. The magnitude function causes both spirals to produce identical field values. The Y coil is identical and orthogonal to X . All three coils are stacked together concentrically, resulting in a planar coil structure with an effective FOV of 40 × 40 × 20 cm 3 . b , Magnetic field profile produced by the Z coil, plotted along the Z axis as the X coordinate is varied (at Y = 20 cm). Identical plots are obtained as the Y coordinate is varied since the field is symmetric about the X and Y directions. c , X -axis magnetic field profile, plotted for varying Z coordinates (at Y = 20cm), when both X and Z coils are on together. d , X -axis field profile as the Y coordinate is varied (at Z = 10 cm). Similar plots are obtained along the Y axis when both Y and Z coils are on together. e , Global timing diagram showing the measurement phases with the on/off times of the gradient coils. The Z coil is kept on during the X and Y measurements to produce monotonically varying magnetic field magnitudes along the respective axes. f , Fully assembled X , Y and Z gradient coils using 50/32 AWG Litz wire. g , Peak d| B |/d t values plotted for the entire measurement phase. All the values are considerably lower than the recommended safety threshold of 20 T s –1 . h , Error in the decoded position of a single iMAG localized with respect to the global origin (0,0,0 on the coils) at n = 30 different locations chosen uniformly in the FOV. The error as mean ± standard deviation (s.d.): 1.07 ± 1.44 mm ( X ), 0.77 ± 1.07 mm ( Y ), 1.13 ± 1.20 mm ( Z ). i , Error in the decoded position of an iMAG localized with respect to another iMAG at a known location serving as a relative reference, eliminating the need for a global reference. Here n = 30 different locations were uniformly chosen in the FOV. Error as mean ± s.d.: 1.34 ± 1.68 mm ( X ), 1.13 ± 1.38 mm ( Y ), 0.97 ± 1.55 mm ( Z ). Full size image The X coil (Fig. 3a ) consists of two halves carrying currents in opposite directions to produce a magnetic field B X that points into (right, negative) and out of (left, positive) the plane. As the magnitude of B X is computed, the sign information is lost and results in identical values from both the halves, shown below the X coil in Fig. 3a , making 75% of the coil area unusable. Since the Z coil produces an always-positive field (Fig. 3b ), it can be used to correct for the non-monotonicity in the X -coil’s field 25 ( Methods ). With both coils simultaneously powered during the X -measurement phase, the resultant magnetic field along the X axis is strictly monotonic over the entire X FOV (Fig. 3c,d ). Variations in the X gradient in the FOV (Fig. 3c,d ) are explained in Methods . The Y coil is identical to the X coil (Fig. 3a ), except for a 90° rotation. During the Y -measurement phase, both Y and Z coils are simultaneously powered (Fig. 3e ). Figure 3f shows the fully assembled gradient coils. The Z coil consists of two layers, each with 80 turns, and each elongated half of the X and Y coils consists of two layers, with 68 turns per layer (Extended Data Fig. 1 , Embodiment-1). 
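The field-to-position decoding mentioned earlier (the search over the characterization-phase look-up table) amounts to reducing the three-axis readings to magnitudes, as in equation (3), and finding the nearest entry in the table. A minimal sketch follows; the LUT layout is an assumption for illustration, and the actual search algorithm is described in Methods.

```python
# Sketch of look-up-table (LUT) position decoding. `lut` is assumed to hold,
# for each grid point of the FOV, its coordinates and the field magnitudes
# recorded during the X, Y and Z phases of the characterization run
# (shape N x 6: x, y, z, |B_X|, |B_Y|, |B_Z|); this layout is an assumption.
import numpy as np

def magnitude(b_xyz) -> float:
    """Field magnitude from a 3-axis sensor reading (equation (3))."""
    return float(np.linalg.norm(b_xyz))

def decode_position(lut: np.ndarray, meas: dict) -> np.ndarray:
    """Nearest-neighbour search: measured phase magnitudes -> (x, y, z)."""
    target = np.array([magnitude(meas["X"]),
                       magnitude(meas["Y"]),
                       magnitude(meas["Z"])])
    err = np.linalg.norm(lut[:, 3:6] - target, axis=1)
    return lut[np.argmin(err), 0:3]
```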
A major safety consideration of our system is the potential for peripheral nerve stimulation (PNS) resulting from gradient switching by the coils. The 10 ms rise time for the gradients (Fig. 3e ) used in this work is much slower than the 0.1–1.0 ms rise time used in fast MRI scanners. The PNS threshold is commonly defined as the peak d| B |/d t value, reported to be 43.0–57.0 T s –1 (ref. 26 ), which is more than an order of magnitude higher than our peak value of 1.5 T s –1 (Fig. 3g ). In addition, the PNS effects are considered negligible when the switching time is >5 ms and | B | < 100 mT (ref. 27 ). Both these metrics are satisfied by our system. Considering the rise time along with the peak d| B |/d t value, the International Electrotechnical Commission thresholds for PNS and cardiac stimulations have a common asymptotic value of 20.0 T s –1 at long rise times (>5 ms) (ref. 28 ), which is much higher than the 1.5 T s –1 gradient switching employed here. Additionally, no mechanical movement was observed in any of the magnetic equipment placed in the vicinity of the coils, ensuring minimal risk due to the induced magnetic force and torque. Another safety consideration for the gradient coils is the heat generated during measurements. From Fig. 3e , it is evident that a single measurement cycle lasts less than 300 ms. Given the several-hour-long transit through the stomach, small intestine (SI) and colon 2 , 3 , 7 , 8 , and 1–2 contractions per minute in each of these organs on average 8 , one measurement per minute is sufficient for the accurate monitoring of transit time and motility. For such sparse measurements, the average heat produced in the coils is only 3.3 W, which is easily dissipated by the large coil area. The power, heat, weight and other specifications of the coils are listed under Embodiment-1 (Extended Data Fig. 1d ). Although the heat generated is negligible, the transient power during each measurement is 800 W, which is not suitable for portable prototypes. Another challenge for portability is the high weight of the coils. To circumvent these, a more portable-friendly prototype (Extended Data Fig. 1 , Embodiment-2) can be fabricated. Using a copper wire with 0.25 mm diameter, almost nine times more turns can be fitted into the same footprint as the current prototype. The large number of turns considerably relax the current requirement, with only 350 mA d.c. current and 0.25 W heat for Embodiment-2, which leads to <0.1 °C rise in the surface temperature during the measurements. However, the mean position resolution is lowered to 7.5 mm, which is still acceptable for localization in the GI tract. For stationary prototypes used in walls or toilet seats, 15 A current-carrying Embodiment-1 can be used for higher accuracy of localization. The coils can also be operated at the theoretically maximum sampling frequency of 3.3 Hz (1/300 ms) for applications requiring a higher refresh rate, and the heat generated can be alleviated by using thermal insulators around the coils. Spatiotemporal resolution and system characterization The value of Δ B found experimentally using the 3D magnetic sensor is 15 µT, which corresponds to Δ x ranging from <1 mm (when G > 15 mT m –1 ) to 5 mm (when G = 3 mT m –1 ). The iMAG’s lowest resolution of 5.0 mm occurs only at the boundary planes of the FOV and is 1.0–2.0 mm elsewhere, resulting in a volume-averaged mean value of 1.5 mm (Extended Data Fig. 2 ). 
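The headline figures above can be sanity-checked with a few lines of arithmetic. The sketch below applies equation (1) to the experimentally found field-measurement resolution and estimates the average coil heating from the quoted transient power and duty cycle; the heating figure is an upper bound, since the coils are energized for only part of each 300 ms cycle.

```python
# Back-of-the-envelope check of the resolution and duty-cycle figures above.
DELTA_B = 15e-6          # T, experimentally found field-measurement resolution

for G in (3e-3, 15e-3):  # T/m, gradient at the FOV boundary vs interior
    dx = DELTA_B / G     # equation (1): spatial resolution along one axis
    print(f"G = {G*1e3:.0f} mT/m -> dx = {dx*1e3:.1f} mm")
# -> 5.0 mm at the FOV boundary, 1.0 mm where the gradient is strongest

P_TRANSIENT = 800.0              # W, transient coil power during a measurement
CYCLE_S, PERIOD_S = 0.3, 60.0    # <=300 ms cycle, one measurement per minute
print(f"average heat <= {P_TRANSIENT * CYCLE_S / PERIOD_S:.1f} W")
# -> <=4.0 W upper bound; the stated 3.3 W follows because the coils are on
#    for only part of the 300 ms cycle (Fig. 3e)
```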
The precision and repeatability of the gradients play an important role in the error performance. We achieved <5% error in the gradients during each measurement by custom designing a coil controller (Supplementary Fig. 13 ) and extensive characterization ( Methods ). For applications requiring a higher spatial resolution, several measurements at a single location can be averaged to reduce the effect of sensor noise 25 , 29 . The same can be achieved by increasing the gradient G at the cost of higher current and/or more layers of coils. For GI monitoring applications, sub-centimetre resolution is acceptable since the GI system exhibits centimetre-scale relative motion even under still external conditions 30 . The temporal resolution of iMAG is dictated by the total delay between sending a wireless ping and completion of 3D position decoding, which is <300 ms (Fig. 3e ), providing sufficiently real-time position update for applications in this work. We first tested the system in vitro to demonstrate functionality and verify the theoretical localization resolution. The 3D position of a single iMAG submerged in a saline tank was found with respect to the global origin of the coils (0,0,0; Fig. 3a ). The error is defined as the difference between the actual and decoded position. The absolute peak and mean error magnitudes in X , Y and Z (Fig. 3h ) were ≤5.0 mm and ≤1.2 mm, respectively, as the location of the iMAG was uniformly varied in the FOV. To eliminate the fixed reference (global origin), we added another iMAG at a known location in the tank (Supplementary Fig. 2 ) to serve as a relative reference for the main iMAG. The peak and mean errors in the decoded distance between the main and reference iMAG were ≤5.0 mm and ≤1.4 mm, respectively, at all the locations (Fig. 3i ). We performed in vivo testing and characterization in porcine models as they represent a reliable analogue for human application, given their anatomy and size 31 . A custom wooden chute was designed with two sets of gradient coils, each comprising the X , Y and Z coils (Fig. 4a and Supplementary Figs. 11 and 12 ). The two coil sets are needed to generate 40 × 40 × 40 cm 3 of FOV spanning the porcine GI tract. Unwanted interference between the fields produced by the two coil sets was eliminated by sequential powering. We tested the system’s accuracy in vivo using a test fixture (Fig. 4b ) with two iMAG devices positioned a fixed distance apart (81.12 mm). The fixture was endoscopically deployed into the gastric cavity (Fig. 4c ). The decoded distance between the two devices was 83.6 ± 0.7 mm, falling within our desired error margin of 5.0 mm (Fig. 4d,e and Supplementary Fig. 3 ). Fig. 4: In vivo localization of iMAG in the GI tract under acute and chronic conditions. a , Custom-designed wooden chute with two sets of gradient coils on each side to provide a 40 × 40 × 40 cm 3 FOV with adjustable height. b , Test fixture with two iMAG devices positioned a fixed distance apart and lodged into the gastric cavity of the pig. c , X-ray scan of the animal showing the position of the test fixture. d , Decoded interdevice distance from the iMAG is 83.60 ± 0.70 mm (mean ± s.d.), plotted alongside the ground-truth distance of 81.12 mm. e , Individual errors in the X , Y and Z components of the decoded distance are plotted. f , For chronic studies, two iMAG devices are used, with one serving as the ingested test iMAG and the other attached externally on the skin in the abdominal region to serve as the reference iMAG. 
Localization of the two devices is performed on each test day (Monday/Wednesday/Friday). g , To avoid the animal chewing on the test iMAG, it is endoscopically administered. h , Signal strength (plotted as mean ± s.d.) detected by the receiver as the ingested iMAG is localized on different test days. When located in the stomach and SI, the signal strength is –80 to –100 dBm, which increases to –70 dBm or higher when located in the colon or rectum. i – k , To compare the decoded distance of the ingested iMAG with the reference, X-ray scans were conducted when the iMAG was located in the stomach ( i ), colon ( j ) and rectum ( k ). Position of the external reference iMAG is shown in each scan. l – q ,The error between the distance given by the X-ray and iMAG is found to be (reported as mean ± s.d.) 0.54 ± 0.21 mm for the stomach ( l and m ), 6.04 ± 3.70 mm for the colon ( n and o ) and 1.97 ± 1.30 mm for the rectum ( p and q ). The iMAG devices remained fully functional on excretion, confirming their long-term viability in a chronic setting. All the iMAG-decoded distance bars are plotted as mean ± s.d. Also, n = 3 for all the distance and error measurements. Furthermore, n = 4 for all the signal strength measurements. Full size image In vivo evaluation We first sought to emulate a real-world setting where an iMAG would be ingested and its position would be tracked relative to a reference iMAG located externally on the skin of the ambulatory animal. The iMAG was endoscopically administered (Fig. 4f ) and evaluated on passage to assess the electrical and mechanical integrity. The signal strength detected by the receiver as the ingested iMAG is localized is plotted in Fig. 4g . The signal strength when the iMAG is located in the stomach and SI is –80 to –100 dBm, approaching the noise floor of the receiver when less than –95 dBm. The signal strength increased to over –70 dBm when located in the colon or rectum. The ingested iMAG was localized in different regions of the GI tract: (1) stomach (Fig. 4h ), (2) colon (Fig. 4i ) and (3) rectum (Fig. 4j ). The error in the decoded distance between the ingested and reference iMAG devices, compared with the distance obtained from the X-ray scans, was found to be <5 mm for the stomach (Fig. 4k,l ) and rectum (Fig. 4o,p ) and <10 mm for the colon (Fig. 4m,n ). The in vivo error values reported here are overestimations ( Methods ). The ingested iMAG devices remained functional on excretion (signal strength of more than −60 dBm), thus confirming their applicability for chronic use (Supplementary Figs. 4 – 6 ). We next investigated the utility of our system in a faecal incontinence (FI) model 32 . To monitor the movement of faeces in the distal colon, our FI model comprised a freely moving iMAG in the lumen of the distal colon and a reference iMAG located on the skin surface near the anal sphincter. The objective was to detect the presence of the moving iMAG when within a specific distance (chosen here as 10 cm) of the anal sphincter. We placed an iMAG 16 cm proximal to the anal sphincter in the colon and fixed two reference iMAG devices externally (Fig. 5a ). The iMAG was pulled out in increments of 5 mm, with a measurement being made at every step and the reconstructed trajectory shown in Fig. 5b,c . The consecutive X-ray scans performed during the measurements are shown in Extended Data Fig. 3 . 
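In software terms, the defecation indicator described above reduces to a proximity test between the decoded positions of the moving device and the external reference devices. A minimal sketch, using the 10 cm threshold chosen in our FI model, is:

```python
# Minimal sketch of the defecation-indicator logic in the FI model: flag the
# moving iMAG once it comes within a set distance of the external reference
# device(s) near the anal sphincter. Positions are decoded (x, y, z) in metres.
import numpy as np

THRESHOLD_M = 0.10  # 10 cm, the specific distance chosen in the text

def imminent_defecation(moving_pos, reference_positions) -> bool:
    dists = [np.linalg.norm(np.asarray(moving_pos) - np.asarray(r))
             for r in reference_positions]
    return min(dists) < THRESHOLD_M
```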
When the iMAG was 10 cm inside, the error in the distance between the reference and moving iMAG was <3 mm, which validates the functionality of the system as an accurate (>97%) indicator of defecation. We were also able to map the iMAG trajectory to successfully reconstruct colonic anatomy (Fig. 5b–g and Extended Data Fig. 5 ) across multiple animals. Fig. 5: Application of iMAG in FI and magnetic label tracking. a , The iMAG was placed 16 cm proximal to the anal verge of the pig using a catheter, with two reference iMAG devices fixed externally on the skin, for the evaluation of our FI model. The iMAG was pulled out in increments of 5 mm and a measurement was made at every step to reconstruct the trajectory. b , c , Top view ( b ) and side view ( c ) of the reconstructed 3D trajectory, which was successfully mapped to the colonic anatomy. On comparing the decoded distance traversed by the iMAG with the actual distance moved by the catheter, our system serves as an accurate (>97%) indicator of defecation. d , Colonic passage study of the iMAG in the presence of magnetic barium beads to evaluate our system’s specificity to magnetic labels. e , Placement of barium beads inside a specific location in the colon and their relative position when the iMAG comes close to them during passage. f , g , Top view ( f ) and side view ( g ) of the iMAG’s trajectory in the absence of barium beads, showing close resemblance to the reconstructed colonic anatomy in b and c for a different pig. h , i , Top view ( h ) and side view ( i ) of the trajectory in the presence of barium beads, distinctly showing the errors in the decoded position due to the distortion of the magnetic field when the beads are in close proximity (<5 cm) to the iMAG. This demonstrates that our system is not only sensitive to the presence of magnetic labels but also immune to their presence when located sufficiently far away (>5 cm in this case) from the iMAG being localized. Full size image Finally, we evaluated the use of iMAG as an in vivo sensor of pre-labelled locations within the GI tract. We placed magnetic barium beads at a specific location in the colon (Fig. 5d ) and used our system to sense when the ingested iMAG passed this location (Fig. 5e ). We then mapped the iMAG’s trajectory without (Fig. 5f,g ) and with (Fig. 5h,i ) the barium beads. The highly magnetic beads interfere with the local magnetic field, thus impacting the magnetic field readings by the iMAG. The error in the decoded position was appreciable (>1 cm, Fig. 5h,i ) when the iMAG was within 5 cm of the beads, thus demonstrating our system’s specificity to magnetic labels. The consecutive X-ray scans performed during the measurements are shown in Extended Data Fig. 4 . In a clinical scenario where the location of such labels is unknown, the iMAG can make an additional measurement each time before the gradient coils are switched on. The magnetic field produced by the labels can be sensed by the iMAG and distinguished from the relatively low background Earth’s field (<60 µT). A prior anatomy map obtained from an existing imaging modality (X-ray/MRI/CT) that shows the relative location of the labels can be used as an additional reference. We have also evaluated the effect of interference using non-magnetic polyethylene beads as a negative control and compared it with the interference caused by the magnetic barium beads (Supplementary Fig. 10 ). Clinical applications The real-time and millimetre-scale localization resolution of iMAG holds the potential for important clinical benefit. 
A quantitative assessment of the GI transit time is vital in the diagnosis and treatment of pathologies related to delayed or accelerated motility such as gastroparesis, Crohn’s disease, functional dyspepsia, regurgitation, constipation and incontinence 1 , 2 , 3 , 4 . Other applications that could benefit from the high spatiotemporal accuracy of our system are therapeutic interventions such as anatomical targeting for drug delivery, monitoring of medication adherence, and delivery of macromolecules and electrical stimulation to specific regions of the GI tract in a wireless fashion 11 , 33 , 34 , 35 . We have shown our system to be a highly accurate indicator of defecation, which is of clinical significance for patients suffering from FI. An iMAG pill could be ingested with each meal to track its progress through the GI tract in patients with FI. The reference iMAG could be incorporated into ‘smart’ clothing for monitoring bowel movements. The successful reconstruction of colonic anatomy (Extended Data Fig. 5 ) also shows that the iMAG can delineate the complex and curved trajectories of the retroperitoneally fixed parts of the GI tract, which are hard to acquire through other imaging modalities (X-ray/CT). To localize the moving iMAG with respect to the patient’s anatomical features (such as bones, muscles or other relatively fixed internal body parts), a prior scan of the patient with the reference iMAG attached at a known external location can be conducted using existing imaging modalities such as MRI, CT or X-ray. Using this one-time scan, the location of internal organs with respect to the reference iMAG can be known. The reference iMAG can be attached to the patient’s skin using a Tegaderm patch (or other adhesives) anywhere in the abdominal region. The water-resistant patch will ensure that the reference iMAG stays at its location for the duration of the internal iMAG’s passage. We have also demonstrated the iMAG’s usage as an in vivo sensor to detect the location of magnetic particles and beads. This approach can be used to label injection sites, polyps, fistulas, stomas or strictures requiring localized therapy, using anatomical markers such as magnetic beads or staples. Additional capabilities could be added to the iMAG devices, enabling them to measure and report pH, temperature and pressure, deliver drug payloads and perform electrical/mechanical stimulation 1 , 4 , 36 , 37 . The stimulation/actuation can be performed after localization of the ingested iMAG in the vicinity of the marker. The in vivo experiment using magnetic beads also demonstrates our system’s robustness to field distortion caused by magnetic objects when located sufficiently far away (>5 cm in this case) from the devices being localized (Supplementary Fig. 10 ). Sensing capabilities could enable the iMAG to generate a spatiotemporal map for comprehensive patient diagnosis and to assist further in anatomical targeting using the pH and pressure profiles. From a consumer electronics standpoint, the iMAG offers the potential for non-invasive and location-specific measurement of physiological markers and vital parameters along the gut, which could be of interest in the fields of fitness and smart medicine 36 . Conclusions Our system offers a high FOV, high spatial resolution in three dimensions and fully wireless operation of the ingestible microdevices (Table 1 ). It also supports concurrent multidevice usage. 
We use safe magnetic fields generated by non-obstructive planar electromagnets and have demonstrated system functionality in large animals, illustrating its potential for use in non-clinical settings without the need for harmful radiation. Table 1 Comparison with existing EM-based tracking methods Full size table Prior magnetic tracking systems that localize a moving magnet inside the GI tract use an external array of magnetic sensors to reconstruct the magnet’s position 14 , 15 , 16 , 17 , 18 , 38 . Since the reconstruction is based on the received magnetic field strength and direction at each sensor’s location, it is susceptible to field distortions produced by nearby magnetic materials. As a result, these approaches lack scalability in the number of magnets, owing to the increasingly distorted field produced by each moving magnet when multiple magnets are used simultaneously. For single-magnet localization, the mean spatial resolution is around 5 mm, which rapidly degrades to more than 1 cm as the distance from the sensor array is increased to more than 15 cm, thus limiting the effective FOV 39 , 40 . Furthermore, the size of the magnet required to achieve centimetre-scale precision in the GI FOV approaches that of a 000 capsule and is larger than the FDA-approved daily-dosed osmotic-controlled release oral delivery system, leaving little room for additional components (for actuation or stimulation) that can be fitted in an ingestible pill with minimal risk of obstruction 41 , 42 . An EM-induction-based system has been previously reported to excite wireless LC coils (used as markers), using an externally located pickup coil array to track the markers 43 . A single marker is composed of three LC coils (each measuring 15 × ϕ 4 mm 3 ) arranged in a ring fashion to achieve six-dimensional (6D) tracking, rendering a size that is unsuitable for an ingestible device. When a single LC coil is used as a marker to fit into an ingestible footprint, it achieves only five-dimensional (5D) tracking and suffers from an inherent dead-angle problem 43 , which makes tracking reliability uncertain. In contrast, the iMAG is capable of achieving 6D tracking with the current footprint 25 . The system discussed in that work 43 also lacks in vivo evaluation and achieves a lower penetration depth than our work. Furthermore, RF-based localization methods have an order of magnitude lower resolution than our approach due to heavy dependence on body tissue and multipath effects 44 . Their penetration depth is also limited due to heavy attenuation of RF signals by body tissue. A current limitation of our system is the achievable distance of the receiver board (≤50 cm) when the iMAG is located deep inside the GI tract. This is due to the thick gastric and intestinal walls that cause excessive loss in the 2.4 GHz RF signal strength. Future devices could use a lower frequency of 401–406 MHz (Medical Implant Communication System band) or 915 MHz (Industrial, Scientific and Medical radio band) for communication to achieve a longer distance from the receiver board to the devices. Tissue absorption is weaker at lower frequencies, leading to a higher signal strength at the receiver. The overall size of the iMAG could also be reduced by using a custom-designed application-specific integrated circuit (ASIC) that has 3D magnetic sensing and wireless communication capabilities. 
Such ASICs could be used to create highly miniaturized and low-power devices, which can exploit energy harvesting from GI fluids and eliminate the need for batteries for power 45 , 46 , 47 . Future successful translation of our system will require extensive safety studies in large animal models to enable human trials. From a manufacturing perspective, the iMAG can be mass-produced at a low cost per device since all the components are off the shelf and inexpensive. With an ASIC implementation of the iMAG, the cost can be further reduced. The iMAG’s four-week battery life provides sufficient time for evaluation in chronic settings and can be further extended by using higher-energy-density batteries. The EM coils for gradient generation incur a one-time manufacturing cost and can be repeatedly used for iMAG monitoring. The FOV produced by the coils is scalable with the coil size, number of layers and d.c. current used. The coils can also be customized for various patient-specific requirements (Fig. 1 ). For instance, a conformal coil structure can be made into a jacket or incorporated into a backpack and powered with batteries (Extended Data Fig. 1 ). The coils can also be attached to a toilet seat (Extended Data Fig. 6 ) or mounted on a rigid wall for regular motility monitoring of patients not comfortable wearing or carrying the coils. This is especially useful for patients with pre-existing disabilities impeding movement and locomotion. Owing to their complete planarity, the coils can easily slide beneath the bed for GI monitoring during sleep 10 . The iMAG technology could, thus, be used to advance current capabilities in GI tract monitoring, diagnosis and treatment. Methods iMAG assembly The PCB for the iMAG was fabricated on a standard four-layer 0.062″ FR4 substrate, measuring 14.0 mm × 7.2 mm. The nRF BLE module (nRF52832-CIAA-R), matching-network circuit, 2.4 GHz antenna (2450AT18B100) and 32 MHz crystal (CX2016DB32000D0WZRC1) were soldered on top, whereas the 3D magnetic sensor (AK09970N) and the 11 mF storage capacitor (CPH3225A) were soldered on the bottom of the PCB. Two coin-cell batteries (SR626W) are stacked together in series and attached to one end of the PCB (closer to the antenna and away from the sensor). The matching network for interfacing between the nRF module and antenna was designed using simulations in Microwave Office software (V13-03-8415-1-64bit) and later tuned in real time on the fabricated board (due to manufacturing variations) using a microwave vector network analyser (N9918A, Keysight). The final matching-network circuit for the nRF comprises a 1 pF shunt capacitor and a 3.3 nH series inductor, and the matching-network circuit for the antenna comprises a 1 pF shunt capacitor and a 4.7 nH series inductor (Fig. 2c,d ). Since transmission through the antenna is the most power-hungry phase of the entire operation, the entire matching-network circuit must be very accurately tuned to avoid power losses from the nRF to the antenna (Supplementary Figs. 7 – 9 ). The AK09970N 3D magnetic sensor is based on the Hall effect; it provides 16-bit data output for each of the three magnetic-field components, high sensitivity (1.1–3.1 μT per LSB), a wide measurement range (±36 mT) and a footprint of 3.00 × 3.00 × 0.75 mm 3 . It communicates with the nRF module over the I2C protocol. The sensor consumes 2.2 mA of current for 850 μs in the low-noise mode (1.1 µT per LSB) and 1.5 mA for 250 μs in the low-power mode (3.1 μT per LSB). 
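For a feel of what these two sensor modes cost per reading, the following back-of-the-envelope sketch compares the charge drawn per measurement, using the current and duration values quoted above:

```python
# Charge drawn per field measurement in the two AK09970N modes quoted above.
modes = {
    "low-noise (1.1 uT/LSB)": (2.2e-3, 850e-6),  # (current in A, duration in s)
    "low-power (3.1 uT/LSB)": (1.5e-3, 250e-6),
}
for name, (i, t) in modes.items():
    q_uc = i * t * 1e6  # microcoulombs per measurement
    print(f"{name}: {q_uc:.2f} uC per measurement")
# The low-power mode draws roughly 5x less charge per reading
# (0.38 uC vs 1.87 uC), motivating the mode choice described next.
```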
For the iMAG device, the sensor is operated in the low-power mode. Given the high current consumption by the sensor during field measurements, it is imperative to use a high current drive battery and a large storage capacitor, as done for the iMAG. With future implementations of a low-power complementary-metal–oxide–semiconductor-integrated 3D magnetic sensor, both power and area of the iMAG pill can be drastically reduced. The SR626W silver oxide batteries were chosen because of their compact size (6.8 mm × 2.6 mm) and high current drive capacity (28 mAh), which is sufficient to power the devices for two continuous weeks. For chronic tests, since we wanted to extend the battery life to four weeks, we used SR927W instead, which measures 9.5 mm × 2.7 mm and has a current drive capacity of 60 mAh (more than twice that of SR626W). Additionally, the BLE antenna 2450AT18B100 (Johanson Technology) was replaced by 0479480001 (Molex) due to the latter’s higher gain to help achieve a higher signal strength during the chronic experiments. The resultant increase in the device size to 20 mm × 12 mm was still acceptable for an ingestible electronic. iMAG configuration The BLE software for the device was implemented as an event-driven application, with both nRF and magnetic sensor operated in the ultralow-power modes at all times, except for the advertising and magnetic-field measurement phases. A BLE custom service application for the sensor was developed to configure iMAG as a peripheral BLE server. The custom application initializes and instantiates all the necessary BLE service modules and advertising schemes, and reports four data values in a single notification-enabled Generic Attribute Profile (GATT) characteristic: (1) field data measured during the X gradient, (2) field data measured during the Y gradient, (3) field data measured during the Z gradient and (4) temperature value measured by the internal temperature sensor of the nRF integrated circuit. An nRF52 development kit was used to program the iMAG before encapsulation. The sensor was configured via the I2C interface to operate in the single-measurement mode, with measurement events processed and verified through the interrupt pin (ODINT). Furthermore, the random-access memory of the iMAG’s nRF chip was redefined to take into account the added GATT characteristics and services. The low-frequency oscillator configuration was also redefined to redirect all the low-frequency operations to the internal 32.768 kHz RC oscillator instead of the external 32.768 kHz oscillator, which was omitted from the iMAG PCB to conserve space. For debugging, the sensor’s measurements were logged through a UART terminal program via the UART connection on the nRF52 development kit, which was externally controlled from a smartphone. The debugging interface was disabled in the iMAG before encapsulation with polydimethylsiloxane. For a completely encapsulated iMAG, the peripheral BLE application first initializes the GATT interface, generic access profile parameters, BLE stack and custom service. It then initializes the 400 kHz I2C interface with the sensor, applies a reset and begins advertising. The transmit power was set to 4 dBm to ensure maximum connectivity with the external client (receiver board). To get a robust system, the speed and stability of the connection between the client and iMAG are of paramount importance. 
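The four-value GATT notification described above can be illustrated with a short serialization sketch. The byte layout is an assumption made for illustration (the text states that one characteristic carries the three gradient-field readings plus a temperature value but does not specify the encoding), and the helper names are invented.

```python
import struct

# Hypothetical byte layout for the iMAG's notification payload: little-endian
# signed 16-bit integers are an assumption, chosen to match the sensor's
# 16-bit-per-axis output; the actual encoding is not given in the text.
FMT = "<hhhh"  # field during X gradient, Y gradient, Z gradient, temperature

def pack_notification(bx, by, bz, temp_raw):
    """Server (iMAG) side: serialize one measurement into 8 bytes."""
    return struct.pack(FMT, bx, by, bz, temp_raw)

def unpack_notification(payload, lsb_uT=3.1):
    """Client side: decode raw LSB counts using the low-power-mode sensitivity."""
    bx, by, bz, temp_raw = struct.unpack(FMT, payload)
    return bx * lsb_uT, by * lsb_uT, bz * lsb_uT, temp_raw

payload = pack_notification(120, -340, 2150, 25)
print(len(payload), unpack_notification(payload))
```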
However, the high attenuation of 2.4 GHz BLE signals by body tissue demands a high advertising rate and a fast connection interval to maximize the connection probability, resulting in higher power consumption. For the batteries used with the current iMAG (SR626W), an advertising interval of 2.5 s and a connection interval of 50 ms were used to achieve two weeks of battery life. Once a BLE connection is established, the iMAG remains in the low-power mode until an external event/notification triggering a measurement is received. Power optimization for iMAG Power measurements indicate that the iMAG consumed (1) 10 μA in the idle mode and (2) a maximum spike of 15 mA during the advertising mode, with an average of 440 μA across all three BLE channel transmissions. When connected to the receiver board, the iMAG consumed 120 μA in the standby mode, and an average of 250 μA when requesting a measurement from the magnetic sensor. We minimized the power consumed by the iMAG to ensure that the maximum continuous current was within the discharge limits of the battery. All unnecessary peripherals were deactivated, including the BLE general-purpose input/output pins, all interrupt pins and the general board support package module. Furthermore, d.c./d.c.-mode power management was enabled on the iMAG’s nRF to make it switch automatically between the on-chip d.c./d.c. regulator and the low-dropout regulator, depending on the instantaneous load. Such a regulation scheme is more power efficient, particularly in the presence of high-power transmit radio spikes. The 10 μH and 15 nH inductors needed for the d.c./d.c.-mode power regulation mechanism were soldered onto the PCB in accordance with the regulator’s impedance requirements. This resulted in notable power saving (Supplementary Table 1 ). With the above configuration, the iMAG consumed (1) 8 μA in the idle mode and (2) a maximum spike of 8 mA during the advertising mode, with an average of 230 μA across all three BLE channel transmissions. When connected to the receiver board, the iMAG consumed 80 μA in the standby mode and an average of 180 μA when requesting a measurement from the sensor. Supplementary Table 1 summarizes all the power-related results. Receiver board configuration An nRF52 development board (receiver board) is configured as a BLE client that scans, detects and requests data from the target server (iMAG). On detecting the vendor-specific universally unique identifier service address associated with an iMAG, the central client automatically assigns all the handles representing the X , Y and Z fields, as well as the temperature GATT characteristics, and initializes all the notification-based procedures. Furthermore, the central client application initializes simultaneous advertising events such that an external smartphone can connect to it and remotely activate event notifications (called a ‘ping’). The ping signal is relayed by the central client board to the iMAG devices to trigger the magnetic field measurements at their appropriate times (Fig. 2g–i and Supplementary Fig. 15 ). With its general-purpose input/output pins connected to the gradient coils’ ENA (enable) switches, the central client activates the required sequence of coil combinations (Fig. 3e ) to generate the magnetic field gradients. The receiver board is connected to a laptop via USB; using a simple UART protocol, the board displays the received field-data values in real time as they are streamed from the connected iMAGs.
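As a sanity check on the two-week figure, the optimized currents reported above can be folded into a simple duty-cycle estimate. The operating-time fractions in this sketch are illustrative assumptions, not measured values.

```python
# Duty-cycle sanity check of the two-week battery-life figure, using the
# optimized currents reported above. The time fractions are assumptions.

CAPACITY_mAh = 28.0  # SR626W coin cell

profile_uA = [          # (current in uA, assumed fraction of time)
    (80.0, 0.95),       # standby while connected to the receiver board
    (180.0, 0.04),      # servicing sensor-measurement requests
    (230.0, 0.01),      # advertising / reconnection bursts
]

avg_uA = sum(i * f for i, f in profile_uA)
life_days = CAPACITY_mAh * 1000.0 / avg_uA / 24.0
print(f"average current {avg_uA:.1f} uA -> about {life_days:.0f} days")
# ~86 uA average -> roughly two weeks, consistent with the reported lifetime.
```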
All the GATT characteristic tables and byte-array sizes were matched between the server and client to ensure the accurate transmission of all data values. The central client is capable of requesting and receiving data from multiple iMAGs, each at a distinct location in the FOV. Spatial resolution The complete definition of the spatial localization resolution (Δ x ) obtained by our system, in each of the three dimensions, is given by equation ( 5 ) 25 : $$\Delta x = \frac{{\Delta B_{\mathrm{eff}}}}{G}\left\{ {1 + \sqrt {\left( {\frac{{\delta G_\mathrm{i}}}{G}} \right)^2 + \left( {\frac{{\delta G_\mathrm{s}}}{G}} \right)^2} } \right\},$$ (5) $$\Delta B_{\mathrm{eff}} = \sqrt {(\Delta B_i)^2 + (\Delta B_j)^2 + (\Delta B_k)^2}.$$ (6) Here Δ B eff is the effective resolution that the sensor can achieve as a magnetic field measurement is performed. Also, G is the applied magnetic field gradient, which is determined by the current in the electromagnets and their geometrical structure. Two major noise sources have been identified in G : (1) ∂ G i , the error caused by field interpolation; (2) ∂ G s , the error caused by variations in supply current 25 . The goal is to keep the contributions by these two error sources below 5% so that the right-hand side in equation ( 5 ) reduces to ≈Δ B eff / G . Gradient coils’ design and assembly The gradient required along each axis is described by equation ( 4 ). Using the coil combinations described in Fig. 3 for the X , Y and Z gradients, monotonically varying magnetic field magnitudes are generated in the FOV 25 , 48 , 49 , 50 , 51 , 52 . A single field measurement takes <1 ms. However, to ensure that the ping signal is received by the iMAG device and the measured data are transmitted to the receiver over the Bluetooth protocol, a 50 ms time window is required (Fig. 3e ). The 10 ms rise and fall times are due to the L / R time constant of the coils and the switching requirement of the d.c. power supplies. The three gradient coils are assembled using a 50-stranded 32 AWG Litz wire (Fig. 3f ). The Z coil consists of two layers, each with 80 turns. Each elongated half of the X and Y coils consists of two layers, with 68 turns per layer. Finally, the three coils are stacked together concentrically to give a single planar structure measuring 60 × 60 × 2 cm 3 . For applications requiring a bigger FOV, the physical dimensions can be correspondingly scaled for all the coils. More layers can be added to generate a proportionately higher FOV and/or gradient strength. The d.c. current is another parameter for vertically scaling the FOV. As gradient G increases, the position resolution given in equation ( 1 ) improves. Gradient coil’s characterization The coils in this work were characterized using a setup comprising linear actuators that move in the X , Y and Z directions and by measuring the magnetic field at every 1 cm step (Supplementary Fig. 11 ). Points between the 1 cm steps are interpolated in MATLAB (R2020a). This results in a finely characterized FOV with steps of 1 mm in X , Y and Z , such that the interpolation error of ∂ G i (equation ( 5 )) causes a <5% variation in G . To reduce the effect of sensor noise from 15 μT pp (measured in the lab) to ≤1 μT pp , the sensor averages 200 measurements at each location. The Earth’s ambient magnetic field is also measured at each location and subtracted from the gradient coil’s field. The corrected field values are then stored in the LUT. 
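Equations (5) and (6), together with the LUT just described, translate directly into code. The sketch below is a simplified, assumption-laden version: it treats localization as a nearest-neighbour search over a synthetic LUT and uses illustrative noise and gradient values (the published search algorithm is detailed in ref. 25).

```python
import numpy as np

def delta_B_eff(dBi, dBj, dBk):
    """Equation (6): effective three-axis field-measurement resolution."""
    return np.sqrt(dBi**2 + dBj**2 + dBk**2)

def spatial_resolution(dB_eff, G, dGi_frac=0.05, dGs_frac=0.05):
    """Equation (5): per-axis localization resolution for a gradient G (T/m).
    dGi_frac and dGs_frac are the interpolation and supply-current error
    terms expressed as fractions of G (kept below 5% in the text)."""
    return (dB_eff / G) * (1.0 + np.sqrt(dGi_frac**2 + dGs_frac**2))

# ~1 uT per-axis noise after averaging and a 14 mT/m gradient (illustrative).
res_m = spatial_resolution(delta_B_eff(1e-6, 1e-6, 1e-6), 14e-3)
print(f"best-case resolution ~ {res_m * 1e3:.2f} mm")

# Nearest-neighbour localization against a mock LUT in which every candidate
# position stores the characterized fields measured under the three gradients.
rng = np.random.default_rng(0)
grid = np.arange(0, 40, 1.0)  # grid coordinates (arbitrary units here)
positions = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), -1).reshape(-1, 3)
lut = rng.uniform(-2000.0, 2000.0, size=(positions.shape[0], 3))  # uT, synthetic

def locate(measured_uT):
    """Return the LUT position whose stored fields best match a measurement
    (the Earth's field is assumed to be subtracted on the device already)."""
    return positions[np.argmin(np.linalg.norm(lut - measured_uT, axis=1))]

print(locate(np.array([120.0, -310.0, 950.0])))
```

In the real system the 1 mm LUT spacing and residual gradient errors bound the achievable resolution, so the formula above gives a best-case figure.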
Each step, comprising all the measurements and the movement of the actuators, takes <15 s, so completely characterizing the 40 × 40 × 40 cm 3 FOV with 1 cm increments requires ten days. A software-based technique for characterizing the coils, which requires far less time, is discussed elsewhere 25 . Also, the entire characterization process needs to be performed only once for a given set of coils, since the magnetic field values for an arbitrary d.c. current can be obtained by linearly scaling the field values stored in the LUT. Gradient coil’s controller board It is crucial to have a constant d.c. current from the power supply into the coils to minimize ∂ G s in equation ( 5 ), which is achieved by designing a controller board. A top-level schematic of the board’s circuit is shown in Supplementary Fig. 13 . Here V REF and R 1 together set the value of the d.c. current, since I d.c. = V REF / R 1 . An n-channel metal–oxide–semiconductor field-effect transistor driver M 1 (FDL100N50F), rated for 500 V and 100 A, is used for handling the high d.c. current flowing into the coils. Also, R 1 is chosen with high temperature stability (MP930-0.020-5%) to ensure a thermally stable current value. Search algorithm For each measurement, the iMAG first measures the Earth’s magnetic field (±30 to ±60 μT) to cancel its effect from the gradient coil’s field. The corrected field values transmitted wirelessly by the iMAG are received by the receiver board and given as input to the 3D search algorithm implemented in MATLAB, which outputs the corresponding closest position coordinate (Supplementary Fig. 14 ). The details of the algorithm are described elsewhere 25 . X -gradient variation in FOV Variation in the X gradient along the Z axis is shown in Fig. 3c . Since the field strength gradually decreases with distance from the surface, G X achieves its highest value of 25.00 mT m –1 at Z = 4 cm and monotonically decreases to 5.24 mT m –1 at Z = 20 cm. Variation in the X gradient along the Y axis is shown in Fig. 3d . The circular Z coil results in a field magnitude along the X axis that is non-homogeneous across the Y coordinate. The global coil centre ( Y = 30 cm) has the highest Z gradient and the highest field magnitude, which gradually falls as Y is increased or decreased 25 . This effect also manifests in the field profile when both the X and Z coils are simultaneously powered, where the maximum G X = 14.14 mT m –1 occurs at Y = 32 cm and falls to 5.65 mT m –1 at Y = 0 (Fig. 3d ). Communication range study The communication range between the iMAG and the receiver board is evaluated in vitro using different solutions (HCl, NaCl, SGF, SIF and porcine gastric fluid). SGF is prepared as follows: 0.2% w/v NaCl, 0.7% v/v HCl, buffered to a pH of 1.28. The SIF is composed mainly of pancreatin enzymes (a mixture of lipase, amylase and protease), as opposed to ionic salts and acids (sodium oleate, sodium taurocholate, sodium phosphate and NaCl) 53 , 54 . For all the tests, a polydimethylsiloxane-encapsulated iMAG is submerged in a cylindrical beaker such that the solution is uniformly distributed around it. The receiver board (nRF52-DK) is kept outside in air and moved as far as possible before losing connection with the iMAG. All range values beyond 1.0 m are denoted as 1.1 m, since the board is not required to be moved farther than 1.0 m under any test scenario. For in vivo tests, the iMAG was placed in the gastric cavity of a pig (Yorkshire swine, 60 kg) with an endoscope (Olympus).
The iMAG was held with an endoscopic snare, moved around the stomach to coat it with gastric juice and held in the stomach while communicating with the external receiver. Communication time study The communication time and long-term stability of the iMAG were first evaluated under in vitro settings. The iMAG was submerged in an HCl solution (pH 1.5) for two weeks and communicated with every few hours. At the end of the two weeks, the iMAG pill was fully functional with no electrical or mechanical damage. This time study was done using HCl to mimic the harsh acidic environment of the stomach, as it represents the most extreme case in the entire GI tract. We next performed time studies using other solutions such as SGF, SIF and porcine gastric juice, but those were conducted only for a few days (24–48 h) since the iMAG had already survived the HCl environment for two weeks. In vitro testing During in vitro localization, a 20 × 20 × 15 cm 3 tank of saline (9 g NaCl per litre of water) is placed on top of the gradient coil stack (Supplementary Fig. 2 ). As the iMAG is localized relative to the global origin (0,0,0; Fig. 3a ), the error plots (Fig. 3h ) show that the error values occur in increments of 1 mm. This is because the LUT (created during characterization) stores field values corresponding to spatial coordinates that are 1 mm apart. For relative localization using another iMAG, the error is computed by subtracting the X , Y and Z components of the decoded distance vector from the ground-truth vector, which results in non-integer values (Fig. 3i ) with a peak of <5 mm. In vivo testing All the experiments were conducted in accordance with the procedures approved by the Massachusetts Institute of Technology Committee on Animal Care. We chose a swine model due to the anatomical similarities of their GI tract to humans as well as their wide usage in GI-tract device evaluation 31 . We observed no adverse effects during the experiments. We administered the iMAG to female Yorkshire swine, 35–65 kg (Tufts). To deliver the iMAG, we placed the swine on a liquid diet 24 h before the procedure and fasted the swine overnight. We sedated them with an intramuscular injection of Telazol (tiletamine/zolazepam) (5.00 mg kg –1 ), xylazine (2.00 mg kg –1 ) and atropine (0.05 mg kg –1 ), as well as supplemental isoflurane (1–3% in oxygen), if needed, via a face mask. An orogastric tube or overtube was placed with the guidance of a gastric endoscope and remained in the oesophagus to ease the passage of the device. In the first experiment, two iMAG devices at a known distance apart were passed through the overtube and placed into the insufflated stomach (Fig. 4b–e ). Although the swine were fasted, some of them still possessed food in their stomach during iMAG delivery. In the second experiment, an iMAG was inserted into the insufflated stomach through the overtube and left to pass through the GI tract (Fig. 4f–p ). Magnetic field measurements were made in the chute, followed by an X-ray scan to determine the residency time of the devices as well as to check for any evidence of GI perforation (pneumoperitoneum). This was repeated on all the subsequent test days (Monday/Wednesday/Friday) until the ingested iMAG was excreted. Additionally, during the retention of devices, the animals were clinically evaluated for normal feeding and stooling patterns. Neither evidence of pneumoperitoneum on X-ray nor any changes in feeding or stooling patterns were observed. In the third experiment, to validate our FI model (Fig.
5a–c ), an iMAG was tethered on the tip of a catheter and placed into the rectum. The iMAG’s location was repeatedly scanned (using both magnetic field measurements and X-rays) as the device was gradually pulled out in 1–2 cm increments. Similar steps were executed during the final experiment to examine the potential interference caused by magnetic materials surrounding the iMAG (Fig. 5d–i ), with magnetic beads placed in the rectum before iMAG’s insertion. During all the in vivo tests, the receiver board was kept close to the animal’s abdominal region and connected to a computer displaying the received magnetic field values from the iMAG devices. Animals also had a reference iMAG secured on the skin using a Tegaderm patch. After performing magnetic field measurements in the chute (Fig. 4a ), the animal was physically carried to the X-ray scanner every time a scan was required. During this motion, it was hard to keep the position of the iMAGs intact due to the relative movement of the animal’s organs. Additionally, to get two orthogonal X-ray scans to compute the 3D interdevice distance, the animal was manually rotated by 90°, which does not result in perfect orthogonality. Therefore, comparisons with X-ray scans show a difference of >5 mm for localization in the colon (Fig. 4m,n ), which is an artefact resulting from the X-ray acquisition methodology. Software J-Link RTT Viewer (V6.62a) was used for collecting the raw magnetic field values from the iMAG pills and nRF receiver. MATLAB (R2020a) was used for position decoding using the field values. Arduino Mega 2560 and Arduino IDE (1.8.19) were used for FOV characterization and LUT creation, respectively. MATLAB (R2020a) and GraphPad Prism (9.4.1) were used for data analysis and plotting. Illustrations were made in Adobe Illustrator (26.4.1) and Microsoft PowerPoint (16.69.1). Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request. Code availability Codes used in this study are available from the corresponding authors upon request. | Engineers at MIT and Caltech have demonstrated an ingestible sensor whose location can be monitored as it moves through the digestive tract, an advance that could help doctors more easily diagnose gastrointestinal motility disorders such as constipation, gastroesophageal reflux disease, and gastroparesis. The tiny sensor works by detecting a magnetic field produced by an electromagnetic coil located outside the body. The strength of the field varies with distance from the coil, so the sensor's position can be calculated based on its measurement of the magnetic field. In the new study, the researchers showed that they could use this technology to track the sensor as it moved through the digestive tract of large animals. Such a device could offer an alternative to more invasive procedures, such as endoscopy, that are currently used to diagnose motility disorders. "Many people around the world suffer from GI dysmotility or poor motility, and having the ability to monitor GI motility without having to go into a hospital is important to really understand what is happening to a patient," says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women's Hospital. 
Traverso is one of the senior authors of the new study, along with Azita Emami, a professor of electrical engineering and medical engineering at Caltech, and Mikhail Shapiro, a professor of chemical engineering at Caltech and an investigator of the Howard Hughes Medical Institute. Saransh Sharma, a graduate student at Caltech, and Khalil Ramadi, a former MIT graduate student and postdoc who is now an assistant professor of bioengineering at New York University, are the lead authors of the paper, which appears today in Nature Electronics. A magnetic sensor GI motility disorders, which affect about 35 million Americans, can occur in any part of the digestive tract, resulting in failure of food to move through the tract. They are usually diagnosed using nuclear imaging studies or X-rays, or by inserting catheters containing pressure transducers that sense contractions of the GI tract. The MIT and Caltech researchers wanted to come up with an alternative that would be less invasive and could be done at the patient's home. Their idea was to develop a capsule that could be swallowed and then send out a signal revealing where it was in the GI tract, allowing doctors to determine what part of the tract was causing a slowdown and better determine how to treat the patient's condition. To achieve that, the researchers took advantage of the fact that the field produced by an electromagnetic coil becomes weaker, in a predictable way, as the distance from the coil increases. The magnetic sensor they developed, which is small enough to fit in an ingestible capsule, measures the surrounding magnetic field and uses that information to calculate its distance from a coil located outside the body. "Because the magnetic field gradient uniquely encodes the spatial positions, these small devices can be designed in a way that they can sense the magnetic field at their respective locations," Sharma says. "After the device measures the field, we can back-calculate what the location of the device is." To accurately pinpoint a device's location inside the body, the system also includes a second sensor that remains outside the body and acts as a reference point. This sensor could be taped to the skin, and by comparing the position of this sensor to the position of the sensor inside the body, the researchers can accurately calculate where the ingestible sensor is in the GI tract. An example of the size of the sensor. Credit: MIT The ingestible sensor also includes a wireless transmitter that sends the magnetic field measurement to a nearby computer or smartphone. The current version of the system is designed to take a measurement any time it receives a wireless trigger from a smartphone, but it can also be programmed to take measurements at specific intervals. "Our system can support localization of multiple devices at the same time without compromising the accuracy. It also has a large field of view, which is crucial for human and large animal studies," Emami says. The current version of the sensor can detect a magnetic field from electromagnetic coils within a distance of 60 centimeters or less. The researchers envision that the coils could be placed in the patient's backpack or jacket, or even the back of a toilet, allowing the ingestible sensor to take measurements whenever it is in range of the coils. 
Location tracking The researchers tested their new system in a large animal model, placing the ingestible capsule in the stomach and then monitoring its location as it moved through the digestive tract over several days. In their first experiment, the researchers delivered two magnetic sensors attached to each other by a small rod, so they knew the exact distance between them. Then, they compared their magnetic field measurements to this known distance and found that the measurements were accurate to a resolution of about 2 millimeters—much higher than the resolution of previously developed magnetic-field-based sensors. Next, the researchers performed tests using a single ingestible sensor along with an external sensor attached to the skin. By measuring the distance from each sensor to the coils, the researchers showed that they could track the ingested sensor as it moved from the stomach to the colon and then was excreted. The researchers compared the accuracy of their strategy with measurements taken by X-ray and found that they were accurate within 5 to 10 millimeters. "Using an external reference sensor helps to account for the problem that every time an animal or a human is beside the coils, there is a likelihood that they will not be in exactly the same position as they were the previous time. In the absence of having X-rays as your ground truth, it's difficult to map out exactly where this pill is, unless you have a consistent reference that is always in the same location," Ramadi says. This kind of monitoring could make it much easier for doctors to determine what section of the GI tract is causing a slowdown in digestion, the researchers say. "The ability to characterize motility without the need for radiation, or more invasive placement of devices, I think will lower the barrier for people to be evaluated," Traverso says. The researchers now hope to work with collaborators to develop manufacturing processes for the system and further characterize its performance in animals, in hopes of eventually testing it in human clinical trials. | 10.1038/s41928-023-00916-0 |
Biology | Invasive insect that kills grapes could reach California wine region by 2027 | Chris Jones et al, Spotted lanternfly predicted to establish in California by 2033 without preventative management, Communications Biology (2022). DOI: 10.1038/s42003-022-03447-0 Journal information: Communications Biology | https://dx.doi.org/10.1038/s42003-022-03447-0 | https://phys.org/news/2022-06-invasive-insect-grapes-california-wine.html | Abstract Models that are both spatially and temporally dynamic are needed to forecast where and when non-native pests and pathogens are likely to spread, to provide advance information for natural resource managers. The potential US range of the invasive spotted lanternfly (SLF, Lycorma delicatula ) has been modeled, but until now, when it could reach the West Coast’s multi-billion-dollar fruit industry has been unknown. We used process-based modeling to forecast the spread of SLF assuming no treatments to control populations occur. We found that SLF has a low probability of first reaching the grape-producing counties of California by 2027 and a high probability by 2033. Our study demonstrates the importance of spatio-temporal modeling for predicting the spread of invasive species to serve as an early alert for growers and other decision makers to prepare for impending risks of SLF invasion. It also provides a baseline for comparing future control options. Introduction Niche models are useful for estimating the potential future distribution of invasive species, based on climatic conditions in their native range and similar conditions in an introduced range 1 , 2 , 3 . However, these models are not temporally dynamic and typically do not integrate information about species’ biology 4 , and therefore they cannot predict the timing of arrival or simulate how a species may disperse to new areas, like process-based models can 5 , 6 . Models that are both spatially and temporally dynamic are needed to forecast where and when the spread of non-native pests and pathogens is likely to occur, to provide advance information for natural resource managers who are trying to proactively minimize ecological and economic impacts. In the United States, an invasive pest of high management concern is the spotted lanternfly (SLF, Lycorma delicatula ), a planthopper native to Asia that can kill plants directly by feeding on phloem and indirectly by facilitating the growth of a light-blocking leaf mold 7 . SLF was first detected in Pennsylvania in 2014 and has since spread to eleven surrounding states. The species feeds in high densities on a wide range of commercially valuable plants, including fruit trees, hops ( Humulus sp.), and grapes ( Vitis sp.) 8 , 9 , and poses a threat to the vineyard-based economies of the western US. Using niche modeling, researchers have identified the grape-growing regions of California and Washington as highly suitable, climatically, for SLF invasion 1 . When SLF might be expected to reach the region, however, remains unknown. US grape production is valued at ~$6.5 billion (accounting for 36% of the annual production value of all non-citrus fruit grown in the US), with more than one million acres in grape production 10 . California alone produces 82% of the US grape crop 10 . Federal and state agencies tasked to protect US agricultural and forestry products from pests like SLF have many potential control options to consider, from eradication and containment to a slow-the-spread management regime. 
To prepare control plans, resource managers must be able to predict how SLF would spread if left uncontrolled and when it would likely reach vulnerable areas. Essentially, they must be able to answer the question: if all efforts to contain or eradicate SLF in the eastern US were stopped, when would the species reach the western US? We used process-based modeling to address this question, simulating the spatial and temporal dynamics of SLF spread in the US and forecasting the timing of its arrival across the country. Specifically, we used a model called PoPS, Pest or Pathogen Spread 11 , to predict the spread of SLF at yearly time steps from its current introduced range in the Mid-Atlantic US over the next thirty years, forecasting where and when the pest would establish assuming no control to limit its spread. This simulation provides a baseline that decision-makers can compare with other simulations that test different management strategies. We compared our predictions to a map of potential SLF distribution generated through niche modeling 1 . We also used county-level economic data for grapes and eight other commodities to identify, by year, the financial risk of SLF invasion in the US. Results Model outputs and crop hosts We predict that SLF has a low probability of first reaching the grape-producing counties of California by 2027 and a high probability in some California counties by 2033 (Fig. 1 and Supplementary Movies 1 [Graphics Interchange Format (GIF) image of mean county-level probability] and 2 [GIF of max county-level probability]). SLF will likely spread through the grape-producing region of the state by 2034, placing much of the nation’s more than one million acres of grape vineyards at risk (Fig. 2 ). In addition to grapes ( Vitis sp.), many other crops are considered at risk from SLF infestation, including almonds ( Prunus subgenus Amygdalus sp.), apples ( Malus sp.), walnuts ( Juglans sp.), cherries ( Prunus subgenus Cerasus sp.), hops ( Humulus sp.), peaches ( Prunus subgenus Amygdalus sp.), plums ( Prunus subgenus Prunus sp.), and apricots ( Prunus subgenus Prunus sp.; Figs. 2 and 3 ) that are included in the National Agricultural Statistics Service 5-year census 8 , 12 . For the top grape-producing counties in California, we plotted the probability of SLF arrival over time (Fig. 4 ). Fig. 1: Spread probability over time using the mean of all raster cells in a county. By 2027 there is a low probability of SLF infestation in California, and by 2033 the first county in California has a high probability of SLF occurrence. Fig. 2: Crops at risk from SLF and total value. a – i Probability of SLF establishment over time for major crops. j The economic value of each crop. All acreage and economic data are from the USDA National Agricultural Statistics Service 2017 census 10 . Fig. 3: Crop production for top at-risk commodities. USDA county-level production data in acres for crops from the National Agricultural Statistics Service Census 2017 by county: a grapes, b almonds, c apples, d walnuts, e cherries, f hops, g peaches, h plums, i apricots 10 . Fig. 4: Probability of SLF establishment in grape-growing counties. a Mean probability of SLF establishment over time (average of all pixel probabilities in the county), based on PoPS output, in the three grape-producing California counties with production >100,000 acres, plus Sonoma and Napa Counties, which produce high-value wine grapes.
Asymptotes do not reach 100% probability, because some pixels in each county are unsuitable for SLF and have a 0% probability of establishment; asymptotes are reached when all suitable pixels in a county are predicted to be infested by SLF. Dotted lines represent standard deviation across runs. b Grape acreage under production based on the USDA National Agricultural Statistics Service 2017 census 10 , highlighting the eight counties with the most grape production. Full size image Model comparison to previous MaxEnt suitability Running our model until 2050, we found that our results generally agreed with those produced by the MaxEnt model of Wakie et al. 1 ; Fig. 5 . Both models agreed that SLF would be unlikely in 47.3% of pixels nationwide; they also agreed that SLF would have some probability of occurring in 32.4% of pixels nationwide (see legend in Fig. 5 for details). In 15.6% of pixels nationwide, the MaxEnt model predicted SLF presence, but PoPS did not; of these pixels, 72.6% (11.3% of total pixels) were classified by Wakie et al. 1 as low risk, 22.0% (3.4% of total pixels) as medium risk, and 5.4% (0.8% of total pixels) as high risk. In 4.7% of pixels nationwide, PoPS predicted SLF presence, but the MaxEnt model did not; of these pixels, 41.6% (2.0% of total pixels) were classified by PoPS as low risk, 44.3% (2.1% of total pixels) as medium risk, and 14.1% (0.7% of total pixels) as high risk. Fig. 5: MaxEnt and PoPS model comparison. Comparison of SLF risk predicted by the MaxEnt model of Wakie et al. 1 versus PoPS output for the year 2050. The percentage of total land area in each risk category is provided in parentheses in the legend. Full size image Discussion Niche modeling (often with MaxEnt) is a very common technique used to examine the potential range of an introduced species 2 , 3 , 13 , 14 , but it has limited utility for management planning, because it cannot predict the likely timing of species establishment. Temporal estimates of pest or pathogen spread are currently rare (except see refs. 5 , 6 , 11 , 15 ), even though predicting the timing of pest or pathogen arrival is essential for management planning. Here we show that spatial–temporal modeling can produce similar spatial predictions to niche modeling, with the important added value of identifying the year at which a pest is likely to reach a particular location and possibly impact economically valuable commodities. Our analysis of SLF spread in the contiguous US highlights the large acreage of at-risk commodities that are likely to be infested if SLF were allowed to spread uncontrolled, providing a reasonable baseline to compare to different management scenarios and guidance for identifying locations for early surveillance. If SLF spread were unmitigated, we expect the pest to establish across much of the US by 2037. The main pathway to the West Coast is accidental human-mediated transport, given that SLF lays its eggs on shipping material, stone, railroad cars, and even vehicles 7 . Railroads present a high-risk pathway for long-distance SLF dispersal 7 , 16 , 17 , and so our analysis simulated accidental transport along these rail networks, a modification of our original PoPS model. 
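The pixel-level agreement statistics quoted above are straightforward to reproduce given the two rasters. The sketch below (Python/NumPy) applies the Wakie et al. risk cutoffs and cross-tabulates the two maps; the arrays here are synthetic stand-ins for the real MaxEnt and PoPS rasters.

```python
import numpy as np

def categorize(percent):
    """0 = unsuitable, 1 = low, 2 = medium, 3 = high risk (Wakie et al. cutoffs)."""
    return np.digitize(percent, [8.359, 26.89, 51.99])

# Synthetic stand-ins for the MaxEnt suitability and PoPS 2050 probability
# rasters, both flattened and expressed in percent.
rng = np.random.default_rng(1)
maxent = rng.uniform(0, 100, size=10_000)
pops = np.clip(maxent + rng.normal(0, 15, size=maxent.size), 0, 100)

a, b = categorize(maxent), categorize(pops)
print(f"agree absent   {np.mean((a == 0) & (b == 0)):.1%}")
print(f"agree present  {np.mean((a > 0) & (b > 0)):.1%}")
print(f"MaxEnt only    {np.mean((a > 0) & (b == 0)):.1%}")
print(f"PoPS only      {np.mean((a == 0) & (b > 0)):.1%}")
```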
By knowing the timing of arrival of a potentially damaging insect such as SLF, decision-makers can identify places to enact surveillance and information campaigns (particularly around rail hubs) and growers can begin to compare alternative management scenarios and prepare the necessary precautions to prevent local pest populations from damaging valuable crops. We are working with multiple state Departments of Agriculture and the US Department of Agriculture’s Animal and Plant Health Inspection Service (USDA APHIS) to run custom scenarios in PoPS for individual states; these tailored modeling efforts account for state-specific management budgets and strategies so that managers can compare the potential effectiveness of different treatment strategies that are realistic for them. Although the potential impacts of SLF on many hosts are largely unknown, grape vineyards can be completely lost when impacted by both heavy feeding by SLF and low winter temperatures. Estimates of when and where SLF will arrive can be used to prioritize removal of tree of heaven ( Ailanthus altissima ), SLF’s presumed primary host. For example, in the next 10 years, removing tree of heaven near vineyards in the eastern US (Fig. 3 ) might be prioritized, followed by removal near vineyards in California just before the pest is expected to reach the state. Such management action would likely adjust the probability estimates we forecasted (Fig. 2 ), by reducing the acreage of commodities in high-probability areas (Fig. 3 ). If the tree of heaven removal is successful at slowing spread, SLF arrival probabilities in Fig. 2 would be shifted to the right, i.e., later in time; if removal prevented SLF from arriving to specific counties, these probabilities in Fig. 2 would also be shifted down, i.e., lower probabilities at all time steps. Methods Model structure We used the PoPS (Pest or Pathogen Spread) Forecasting System 11 version 2.0.0 to simulate the spread of SLF and calibrated the model (Fig. 6 ) using Approximate Bayesian Computation (ABC) with sequential Markov chain and a multivariate normal perturbation kernel 18 , 19 . We simulated the reproduction and dispersal of SLF groups (at the grid cell level) rather than individuals, as exact measures of SLF populations are not the goal of surveys conducted by USDA and state departments of agriculture. Reproduction was simulated as a Poisson process with mean β that is modified by local conditions. For example, if we have 5 SLF groups in a cell, a β value of 2.2, and a temperature coefficient of 0.7, our modified β value becomes 1.54 and we draw five numbers from a Poisson distribution with a λ value of 1.54. β and dispersal parameters were calibrated to fit the observed patterns of spread. For this application of PoPS, we replaced the long-distance kernel ( α 2) with a network dispersal kernel based on railroads, along which SLF and tree of heaven are commonly observed 7 . For each SLF group dispersing, if a railroad is in the grid cell with SLF, we used a Bernoulli distribution with mean of γ (probability of natural dispersal) to determine if an SLF group dispersed via the natural Cauchy kernel with scale ( α ) or along the rail network. This network dispersal kernel accounts for dispersal along railways if SLF is present in a cell containing a rail line. 
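As a rough illustration of the reproduction and dispersal mechanics just described, the sketch below simulates one time step for the SLF groups in a single grid cell. It follows the worked example in the text (β = 2.2 scaled by a temperature coefficient of 0.7 gives λ = 1.54). The uniform draw between the minimum and maximum rail-travel distances is an assumption made for illustration, as are all parameter values; the calibrated network parameters are introduced just below.

```python
import numpy as np

rng = np.random.default_rng(42)

def disperse_step(n_groups, beta, temp_coeff, gamma, alpha, on_rail,
                  d_min=1.0, d_max=200.0):
    """One reproduction/dispersal step for the SLF groups in one grid cell.

    Each group spawns Poisson(beta * temp_coeff) offspring (e.g. 2.2 * 0.7 =
    1.54, as in the worked example above). If the cell contains a railroad,
    each offspring disperses naturally with probability gamma (Cauchy kernel,
    random 0-359 degree direction) and otherwise travels along the network;
    the uniform draw between d_min and d_max is an illustrative assumption.
    """
    offspring = int(rng.poisson(beta * temp_coeff, size=n_groups).sum())
    events = []
    for _ in range(offspring):
        if (not on_rail) or rng.random() < gamma:   # natural dispersal
            events.append(("natural",
                           abs(rng.standard_cauchy()) * alpha,  # distance
                           int(rng.integers(0, 360))))          # direction
        else:                                       # rail-network dispersal
            events.append(("network",
                           rng.uniform(d_min, d_max),
                           int(rng.choice([-1, 1]))))           # along-rail sense
    return events

for kind, dist, direction in disperse_step(5, 2.2, 0.7, 0.8, 5.0, on_rail=True):
    print(kind, round(dist, 1), direction)
```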
The network dispersal kernel added three new parameters to the PoPS model: a network file that contained the nodes and edges, minimum distance that each railcar travels, and the maximum distance that each railcar travels. Unlike typical network models, which simulate transport simply between nodes, our approach allows for SLF to disembark a railcar at any point along an edge, more closely mimicking their actual behavior. This network therefore captures the main pathway of SLF long-distance dispersal, i.e., along railways. Fig. 6: Model structure for spotted lanternfly (SLF, Lycorma delicatula ). Unused modules in the PoPS model are gray in the equation. a The number of pests that disperse from a single host under optimal environmental conditions ( β ) is modified by the number of currently infested hosts ( I ) and environmental conditions in a location ( i ) at a particular time ( t ); environmental conditions include seasonality ( X ) and temperature ( T ) (see supplementary Fig. 3 for details on temperature). Dispersal is a function of gamma (γ), which is the probability of short-distance dispersal (alpha-1, α 1 ) or long-distance via the rail network ( N ( d min , d max )). For the natural-distance Cauchy kernel, the direction is selected using 0-359 with 0 representing North. For the network kernel, the direction along the rail is selected randomly, and then travel continues in that direction until the drawn distance is reached. Once SLF has landed in a new location, its establishment depends on environmental conditions ( X , T ) and the availability of suitable hosts (number of susceptible hosts [ S ] divided by total number of potential hosts [ N ]). b We used a custom host map for tree of heaven ( Ailanthus altissima) to determine the locations of susceptible hosts. The number of newly infested hosts (ψ) is predicted for each cell across the contiguous US. Full size image Spotted lanternfly model calibration We used 2015–2019 data (over 300,000 total observations including both positive and negative surveys) provided by the USDA APHIS and the state Departments of Agriculture of Pennsylvania, New Jersey, Delaware, Maryland, Virginia, and West Virginia to calibrate model parameters ( β , α 1, γ , d min , d max ). The calibration process starts by drawing a set of parameters from a uniform distribution. Simulated results for each model run are then compared to observed data within the year they were collected, and accuracy, precision, recall, and specificity are calculated for the simulation period. If each of these statistics is above 65% the parameter set is kept. This process repeats until 10,000 parameter sets are kept; then, the next generation of the ABC process begins: the mean of each accuracy statistic becomes the new accuracy threshold, and parameters are drawn from a multivariate normal distribution based on the means and covariance matrix of the first 10,000 kept parameters. This process repeats for a total of seven generations. Compared to the 2020 and 2021 observation data (over 100,000 total observations including both positive and negative surveys), the model performed well, with an accuracy of 84.4%, precision of 79.7%, recall of 91.55%, and specificity of 77.6%. In contrast, a model run using PoPS’ previous long-distance kernel (α2) instead of the network dispersal kernel had an accuracy of 76.5%, precision of 68.1%, recall of 92.68%, and specificity of 57.2%. We applied the calibrated parameters and their uncertainties (Fig. 
7 ) to forecast the future spread of SLF, using the status of the infestation as of January 1, 2020 as a starting point and data for temperature and the distribution of SLF’s presumed primary host (tree of heaven, Ailanthus altissima ) for the contiguous US at a spatial resolution of 5 km. Fig. 7: Parameter distributions. a Reproductive rate ( β ), b natural dispersal distance ( α 1), c percent natural dispersal ( γ ), d minimum distance ( d min) , e maximum distance ( d max ). Full size image Weather data Overwinter survival of SLF egg masses, and therefore spread, is sensitive to temperature (see ref. 2 ). To run a spread model in PoPS, all raw temperature values are first converted to indices ranging 0–1 to describe their impact on a species’ ability to survive and reproduce. We converted daily Daymet 20 temperature into a monthly coefficient ranging 0–1 (Supplementary Fig. 1 ) and then rescaled from 1 to 5 km by averaging 1-km pixel values. We used weather data 1980–2019 and randomly drew from those historical data to simulate future weather conditions in our simulations, to account for uncertainty in future weather conditions. Tree of heaven distribution mapping SLF is known to feed on >70 species of mainly woody plants 7 , but tree of heaven is commonly viewed as necessary, or at least highly important, for SLF spread. Young nymphs are host generalists, but older nymphs and adults strongly prefer tree of heaven (in Korea 21 ; in Pennsylvania, US 22 ), and experiments in captivity 23 and in situ 9 have shown that adult survivorship is higher on the tree of heaven and grapevine than other host plants, likely due to the presence and proportion of sugar compounds important for SLF survival 23 . Secondary compounds found in tree of heaven also make adult SLF more unpalatable to avian predators 24 , and researchers have hypothesized that these protective compounds may be passed on to eggs 21 . For these reasons, tree of heaven is widely considered the primary host for SLF and linked to SLF spread 1 , 25 . We, therefore, used tree of heaven as the host in our spread forecast. We estimated the geographic range of tree of heaven using the Maximum Entropy (MaxEnt) model 26 , 27 . We chose to use niche modeling because tree of heaven has been in the US for over 200 years and is well past the early stage of invasion at which niche models perform poorly; instead, tree of heaven is well into the intermediate to equilibrium stage of invasion, when niche models perform well 28 . We obtained 19,282 presences for tree of heaven in the US from BIEN 29 , 30 and EDDmaps 31 and selected the most important variables from an initial MaxEnt model of all 19 WorldClim bioclimatic variables 32 . Our final climate variables were mean annual temperature, precipitation of the coldest quarter, and precipitation of the driest quarter. Given that tree of heaven is non-native and invasive in the US, prefers open and disturbed habitat, and is commonly found along roadsides and in urban landscapes 33 , we also included distance to major roads and railroads as an additional variable in our model, to account for the presence of disturbed habitat as well as approximate urbanization and anthropogenic degradation. For each 1-km cell in the extent, we calculated distance to the nearest road and nearest railroad using the US Census Bureau’s TIGER data set of primary roads and railroads 34 . 
We used our final MaxEnt model to generate the probability of the presence of tree of heaven for each 1-km cell, then reset all cells with a probability ≤0.2 to a value of 0 to minimize overprediction of the tree of heaven locations (because cells ≤0.2 contained less than 1% of the presences used to build the model). We rescaled the remaining probability values 0–1. We used 10% of the tree of heaven presence data to validate the model, which performed well: 95% of the validation data set locations had a probability of presence greater than 65%. We then rescaled the 1-km MaxEnt output to 5 km using the mean value of our 1-km cells, in order to reduce computational time. Forecasting spotted lanternfly We used the Daymet temperature data and distribution of tree of heaven to simulate SLF spread with PoPS, assuming no further efforts to contain or eradicate either tree of heaven or SLF. We ran the spread simulation 10,000 times from 2020 to 2050 for the contiguous US. After running all 10,000 iterations, we created a probability of occurrence for each cell for each year by dividing the number of simulations in which a cell was simulated as being infested in that year by 10,000 (the total number of simulations). This gave us a probability of occurrence per year. We downscaled our probability of occurrence per year from 5 km to 1 km and set the probability to 0 in 1-km pixels with no tree of heaven occurrence. Data for mapping and comparison We compared our probability of occurrence map in 2050 to the SLF suitability map created by Wakie et al. 1 using niche modeling to see how well the two modeling approaches would agree if SLF were allowed to spread unmanaged (Fig. 5 ). Wakie et al. 1 categorized pixels below 8.359% as unsuitable, between 8.359% and 26.89% as low risk, between 26.89% and 51.99% as medium risk, and above 51.99% as high risk. To facilitate comparison, we used this same schema to categorize pixels as low, medium, or high probability of spread. We converted the yearly raster probability maps to county-level probabilities in order to examine the yearly risk to crops in counties. We performed this conversion using two methods: (1) the highest probability of occurrence in the county (Supplementary Movie 2 ) and (2) the mean probability of occurrence in the county (Fig. 1 and Supplementary Movie 1 ). The first method provides a simple, non-statistical estimate of the probability of SLF presence by assigning the county the value of the highest cell-level probability; the second accounts for all of the probabilities of the cells in the county and typically results in a higher county-level probability. We used USDA county-level production data 10 for grapes, almonds, apples, walnuts, cherries, hops, peaches, plums, and apricots to determine the amount of production at risk each year (Fig. 2 ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The SLF occurrence data that support the findings of this study are owned by USDA APHIS and made available to us through a cooperative agreement and are protected by confidential agreements with property owners, so cannot be made publicly available. These data can be obtained from USDA APHIS if the researcher obtains a cooperative agreement with USDA APHIS that allows them access to these data. The other data we used are publicly available and can be downloaded from iNaturalist, EDDMaps, DayMet, BIEN, and the US Census Bureau. 
Code availability We used the R version of PoPS ( ) and specifically version 2.0 ( ) which includes the network kernel that allowed for simulating the movement of spotted lanternfly along rail lines. This repository uses renv which allows the exact versions of each package used for this analysis to be installed. | The spotted lanternfly, an invasive insect that can kill grapevines and damage other crops, has a chance of first reaching the wine-producing counties of California in five years, according to a new analysis from North Carolina State University researchers. In the study published in Communications Biology, researchers used a computer simulation tool to predict the timing of the spread of the spotted lanternfly, Lycorma delicatula, across the United States if efforts to control its spread are stopped. They predicted there is a high probability of the insect spreading to North Carolina by 2027, and a chance of the insect first reaching California's grape-producing counties that same year. "This is a big concern for grape growers; it could lead to billions of dollars of losses in the agricultural sector," said the study's lead author Chris Jones, research scholar with the NC State Center for Geospatial Analytics. "With this study, we have a baseline that we can use to evaluate the effect of different management strategies." The spotted lanternfly is native to Asia. It was first identified in the United States in Pennsylvania in 2014. Since then, it has spread to at least 11 other states. The invasive insect can damage or destroy commercially valuable crops such as grapes, apples, almonds, walnuts, cherries, hops, and peaches, as well as certain trees. It kills plants by directly feeding on them, and can also damage them by leaving behind a residue known as "honeydew" that helps mold grow. California, which produces 82% of the nation's grapes, has been identified along with Washington state as a "highly suitable" climate for the spotted lanternfly. "It's hard to say in advance exactly what the spotted lanternfly's impact will be on the grape-producing regions of the West Coast, since we only have data from cold-producing regions," Jones said. "In Pennsylvania, we've seen vineyard losses from the double whammy of cold and from spotted lanternflies feeding on the vines. But we do know producers can also experience losses because of the mold growth alone." To predict the insect's spread, researchers used a computer simulation tool known as "PoPS," or the Pest or Pathogen Spread forecasting system, which they have developed to track any type of pest or disease. Their system combines information on climate conditions suitable for spread of the pest with data on where cases have been recorded, the reproductive rate of the pest and how it moves in the environment. New data helps their forecasting system get better at predicting future spread. To track spotted lanternfly, researchers used observations gathered by state and federal agriculture and pest experts between 2015 and 2019. They considered railroad networks to be a "high risk" pathway for the spotted lanternfly to spread to the West Coast, as it is a hitchhiking pest that can lay its eggs on many different surfaces, including vehicles. People can accidentally move the insect when they transport shipping material, stone, railroad cars, or even vehicles where the insect has laid its eggs. 
"The main reason we included proximity to the railroad network in our model is because it is highly correlated with long-distance dispersal," Jones said. In addition to accounting for proximity to a rail line in the insect's spread, researchers also assumed that the insect requires the presence of the invasive host tree—the tree of heaven or Ailanthus altissima—for most of the insect's reproduction. The researchers said the tree often grows along highways and rail lines. "We assumed that the spotted lanternfly needs tree of heaven to complete its life cycle," Jones said. "The presence of tree of heaven, along with rail networks, seem to be two factors that could drive spread to California. The temperature there is relatively suitable across the state." Researchers expect the spotted lanternfly will be established throughout much of the United States by 2037 if all efforts to control it are stopped. In California, the researchers said the insect has a low probability of first reaching the grape-producing counties by 2027, and a high probability by 2033. They expect it will likely spread through the grape-producing region by 2034. "When we say there is a 'high' probability of reaching those counties, we mean there are multiple counties that have a probability of more than 50%," Jones said. They predicted there would be a high probability of the insect spreading in North Carolina by 2027. The spotted lanternfly has already been identified in Virginia, and North Carolina forestry experts are tracking it closely. Researchers said they are working to raise awareness among growers here. "We're conducting risk assessments and developing accompanying maps right now to raise awareness among the North Carolina growers and other managers to help them prioritize management strategies," said study co-author Ross Meentemeyer, Goodnight Distinguished Professor of Geospatial Analytics at NC State. Researchers said their findings could help state and federal officials working to protect U.S. crops like grapes, an industry valued at approximately $6.5 billion nationwide. Their findings provide a baseline for comparison for their management strategies. In addition, officials can use their computer modeling system to test the outcome of different management strategies. "We hope this helps pest managers prepare," Jones said. "If they can start early surveillance, or start treating as soon as the spotted lanternfly arrives, it could slow the spread to other areas. They could also pre-emptively remove tree of heaven around vineyards." | 10.1038/s42003-022-03447-0 |
Medicine | Machine intelligence accelerates research into mapping brains | Carlos Enrique Gutierrez et al. Optimization and validation of diffusion MRI-based fiber tracking with neural tracer data as a reference, Scientific Reports (2020). DOI: 10.1038/s41598-020-78284-4 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-020-78284-4 | https://medicalxpress.com/news/2020-12-machine-intelligence-brains.html | Abstract Diffusion-weighted magnetic resonance imaging (dMRI) allows non-invasive investigation of whole-brain connectivity, which can reveal the brain’s global network architecture and also abnormalities involved in neurological and mental disorders. However, the reliability of connection inferences from dMRI-based fiber tracking is still debated, due to low sensitivity, dominance of false positives, and inaccurate and incomplete reconstruction of long-range connections. Furthermore, parameters of tracking algorithms are typically tuned in a heuristic way, which leaves room for manipulation of an intended result. Here we propose a general data-driven framework to optimize and validate parameters of dMRI-based fiber tracking algorithms using neural tracer data as a reference. Japan’s Brain/MINDS Project provides invaluable datasets containing both dMRI and neural tracer data from the same primates. A fundamental difference when comparing dMRI-based tractography and neural tracer data is that the former cannot specify the direction of connectivity; therefore, evaluating the fitting of dMRI-based tractography becomes challenging. The framework implements multi-objective optimization based on the non-dominated sorting genetic algorithm II. Its performance is examined in two experiments using data from ten subjects for optimization and six for testing generalization. The first uses a seed-based tracking algorithm, iFOD2, and objectives for sensitivity and specificity of region-level connectivity. The second uses a global tracking algorithm and a more refined set of objectives: distance-weighted coverage, true/false positive ratio, projection coincidence, and commissural passage. In both experiments, with optimized parameters compared to default parameters, fiber tracking performance was significantly improved in coverage and fiber length. Improvements were more prominent using global tracking with refined objectives, achieving an average fiber length from 10 to 17 mm, voxel-wise coverage of axonal tracts from 0.9 to 15%, and the correlation of target areas from 40 to 68%, while minimizing false positives and impossible cross-hemisphere connections. Optimized parameters showed good generalization capability for test brain samples in both experiments, demonstrating the flexible applicability of our framework to different tracking algorithms and objectives. These results indicate the importance of data-driven adjustment of fiber tracking algorithms and support the validity of dMRI-based tractography, if appropriate adjustments are employed. Introduction Diffusion-weighted magnetic resonance imaging (dMRI) generates images based on anisotropic diffusion of water molecules. Diffusion in the brain is constrained in a direction-dependent manner by obstacles such as nerve fibers and membranes. This leads to anisotropic diffusion patterns in dMRI images that can be used to estimate structural brain connectivity in a non-invasive way 1 , 2 , 3 , 4 , 5 . 
dMRI-based tractography can trace whole-brain connectivity, helping to reveal network organization 6,7,8 and its relationship with brain function 9,10,11, and supporting studies of mental and neurological disorders 12,13,14,15 and computational modeling 16. However, there are fundamental limitations, namely, the lack of directionality of connections and the difficulty of estimating crossing fiber orientations in voxels of low spatial resolution 17,18. These and other practical issues cause failures in tracking fibers (low sensitivity or low true positive rate) 19,20,21, especially long-distance connections 22,23,24, and tracking of wrong fibers (low specificity or high false positive rate) 20,25,26. All of these potentially contribute to erroneous reconstruction of connectomes. Various efforts have been made to improve the accuracy of reconstructions. Global tractography 27,28,29 provides whole-brain connectivity that consistently explains dMRI data by optimizing a global objective function. Compared to conventional seed-based fiber tracking, it achieved better qualitative results on phantom data 27. However, both seed-based and global fiber tracking algorithms have a number of parameters that are difficult to determine because of unknown biophysical variables. Japan’s Brain/MINDS project (Brain Mapping by Integrated Neurotechnologies for Disease Studies) 30 intends to build a multi-scale marmoset brain map and mental disease models. The project has assembled a high-resolution marmoset brain atlas 31, and is conducting systematic anterograde tracer injections to analyse brain connectivity, while obtaining functional, structural, and diffusion MRI for most individuals. All data are mapped to a common brain space. This gives us a unique opportunity to verify the accuracy of dMRI-based fiber tracking using neuronal tracer data, reconstructed with the marmonet pipeline 32, as a reference. Here we propose a general framework for optimization and validation of dMRI-based fiber tracking algorithms in reference to neuronal tracer data from multiple injection sites. Because fiber tracking should satisfy multiple performance criteria, we use multi-objective optimization (MOO) in the first stage and then use multiple criteria decision analysis (MCDA) to select a set of standard parameters. We test the effectiveness of our framework in two experiments. In the first experiment, we use a probabilistic streamline-based algorithm, iFOD2 33, and consider the region-level true positive rate (TPR) and false positive rate (FPR) as criteria. In the second experiment, we take a global tracking algorithm 27 and incorporate more elaborate criteria: (1) distance-weighted coverage, (2) the true/false positive ratio, (3) projection coincidence, and (4) commissural passage. We optimize the parameters using 10 brain samples and then test their capacity for generalization using 6 brain samples that were not used for optimization. Our implementation code for processing multiple brain samples in parallel is compatible with HPC (high-performance computing) clusters as well as desktop PCs, and is publicly available. Results Brain/MINDS marmoset connectome data We use neural tracer data from 20 marmosets collected in the Brain/MINDS project for this study (see Fluorescent neural tracer data in the “Methods” section).
An anterograde tracer was injected in the left prefrontal cortex, at different points for each animal, and neuron projection pathways as well as their target regions were quantified based on tracer voxel density in the fine 500-region or coarse 104-region parcellation of the Brain/MINDS atlas 31. We consider an injection region connected to a target region when at least one injection tracer image has signal in both regions. This is the first version of a neural tracer-based connectome computed by the marmonet pipeline 32 in the project. For optimization and validation, we took data from the 16 animals that had both tracer and dMRI data. The experiments evaluate dMRI-based fiber tracking against multiple objectives, by comparison with tracer data at different levels of resolution: brain-region level and voxel level. Objectives can also be unrelated to the tracer; an anatomical constraint is defined as an objective in the 2nd experiment. Seed-based tracking with region-level criteria In the first experiment, we take the probabilistic streamline-based algorithm iFOD2 33 (second-order integration over Fiber Orientation Distributions), which is the default tractography algorithm of MRtrix3 34. Three important parameters are optimized: (a) angle: the maximum angle between successive steps of the algorithm; (b) cutoff: the FOD amplitude for terminating fibers; (c) minlength: the minimum length, in mm, of any fiber. The number of seeds (1000 \(\times\) the number of output fibers) and all other parameters are kept at their default values. Streamline seeds are placed randomly all over the dMRI. The number of output fibers is fixed at 300,000. Criteria for evaluation An important issue in comparing dMRI-based fiber tracking and anterograde neural tracer data is that the former does not reflect the projection direction. Comparisons therefore assume that regions are connected independently of tracer directionality. dMRI-based fibers connected to a tracer injection site can include both incoming and outgoing axons; thus, if we take anterograde tracing as a reference, it is natural to have additional “false positive” fibers. Four objective functions measuring brain-region connectome similarities consider fitting to both individual tracer data and group tracer data in terms of TP and FP (Fig. 1a). dMRI-based matrices are built for each fiber tracking result in a standard brain space, by assigning each streamline to all regions it intersects. Before comparison, dMRI- and tracer-based matrices are log-transformed and normalized. Matrix binarization, preserving from 10 to 100% of connections, is included as a step preceding the TPR and FPR calculation; a minimal sketch is given after the figure caption below. Individual objectives (i) \(TPR_I\) and (ii) \(FPR_I\). Obtained by comparing individual injection site-region pairs connected by streamlines for each brain. Fibers intersecting the injection region and the tracer of the same animal were arranged as matrices of 1 injection site \(\times\) 500 target parcels for matching. Group objectives (iii) \(TPR_G\) and (iv) \(FPR_G\). Obtained by mapping the fiber tracking output of each brain to the group structure of 20 injection sites \(\times\) 500 target parcels, and comparing against the Brain/MINDS marmoset connectome data. Figure 1 Criteria for evaluation. (a,b) show evaluation criteria for the 1st (iFOD2) and 2nd (global tracking) experiments. dMRI-based fiber tracking results are mapped to the standard brain space and intersected spatially with the injection site, allowing extraction of a subset of fibers. The full tractogram is used to compute the group \(TPR_G\) and \(FPR_G\) (iFOD2), the projection coincidence with the target hemisphere \(f_3\) and the commissural passage \(f_4\) (global tracking). The subset of fibers is used for the individual \(TPR_I\) and \(FPR_I\) (iFOD2), and the distance-weighted coverage \(f_1\) and true/false positive ratio \(f_2\) objectives (global tracking). Global tracking includes more elaborate criteria, with positive voxels weighted by two factors extracted from neural tracer data: the distance to the injection site center \(d_i\) and the voxel intensity \(w_i\). Figure created using The MRtrix viewer 3.0.1 ( ) and Inkscape 1.0beta2 ( ). Image datasets are part of the Brain/MINDS project (see Data availability section). Full size image
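As referenced above, a minimal NumPy sketch of the region-level TPR/FPR computation might look as follows; it assumes matrices that are already log-transformed and normalized, and the variable names and exact binarization rule are illustrative rather than taken from the published code.

```python
import numpy as np

def region_tpr_fpr(dmri_mat, tracer_mat, keep_frac=0.5):
    """Region-level TPR/FPR after binarizing the dMRI matrix.
    keep_frac: fraction of the strongest dMRI connections kept
    (the text sweeps this from 10% to 100%)."""
    ref = tracer_mat > 0                        # tracer-connected pairs
    weights = np.sort(dmri_mat[dmri_mat > 0])   # nonzero dMRI weights
    cut = weights[int((1.0 - keep_frac) * weights.size)] if weights.size else np.inf
    est = dmri_mat >= cut                       # binarized dMRI connections
    tpr = np.sum(est & ref) / max(np.sum(ref), 1)
    fpr = np.sum(est & ~ref) / max(np.sum(~ref), 1)
    return tpr, fpr
```

The individual objectives apply this comparison to 1 × 500 matrices per animal, and the group objectives to the 20 × 500 group structure.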
Multi-objective optimization In order to account for trade-offs between multiple objectives, instead of optimizing a scalar criterion such as a weighted sum of objectives, we took the multi-objective optimization (MOO) approach of finding the Pareto-optimal set, or Pareto front, where no objective value can be improved without degrading some other objective value. For our experiment, the non-dominated sorting genetic algorithm II (NSGA-II) 35 was arranged for parallel optimization of 10 brains (the training set). One optimization process runs per brain and, cooperatively, sends its winning parameters to the other processes in each generation (see Optimization and Code implementation in the “Methods” section). Optimization identified multi-dimensional Pareto fronts, one per brain, which evolved similarly and converged to a common region. They are visualized in Fig. 2 as pairwise comparisons of objectives. The competition of \(TPR_G\) versus \(FPR_G\) and \(TPR_I\) versus \(FPR_I\) pushed results toward the upper-left (ideal) region, most clearly seen in \(TPR_G\) versus \(FPR_G\), where the latest evolutionary results peek out beyond the ROC curve formed early in the optimization (dotted circle). The \(TPR_G\) versus \(FPR_G\) performance suggests that individual brain variability is attenuated by the connectome-based group objectives. Spatial coverage improved, as seen in Fig. 3a and Supplementary Fig. S1a, where fiber tracking by iFOD2 (in red) covers larger areas of the neural traces (in green) with the optimized parameters. Fiber length increased as well, from a default value of 8.13 mm to an optimized value of around 12.2 mm, on average. Figure 2 Objective function optimization for iFOD2. Pair-wise visualization of the optimization of four objective functions: \(TPR_G\) and \(FPR_G\) from the comparison between connectomes of \(20 \times 500\), and \(TPR_I\) and \(FPR_I\) from the comparison between individual connectomes of \(1 \times 500\). Our framework drives objectives toward the Pareto front in the upper-left direction for the competing TP versus FP objectives. \(FPR_G\) versus \(TPR_G\) exposes a peak of optimal solutions (dotted circle). \(FPR_I\) versus \(FPR_G\) demonstrates the capability of our framework to control FP growth, maintaining values close to 0, in the bottom-left region. Best solutions, detected by MCDA, are shown as red x markers. Full size image Figure 3 Examples of tracked fibers by optimized and default parameters. Unoccluded visualization of spatial relationships between fluorescent tracer signals (green) and tractography (red) for 3 injection sites: (1, 2) from the training set; (3) from unseen marmoset subjects. Their overlap (yellow) shows common voxels, while red fibers correspond to “false positives”. Improved results for both the (a) iFOD2 and (b) global tracking algorithms show enlarged overlap and longer fibers connecting sub-cortical and projection areas. Figure created using FluoRender 2.24 ( ) and Inkscape 1.0beta2 ( ). Image datasets are part of the Brain/MINDS project (see Data availability section). Full size image
Multiple criteria decision analysis for standard parameters To assess trade-offs between objectives and to determine which combination performs best for each brain (Fig. 2, red x markers) and for the training set, we used Multiple Criteria Decision Analysis (MCDA). The objectives, denoted as f’s, are considered the multiple criteria. Given an optimized brain, each f interval [min(f), max(f)] is divided into 10 equal sub-intervals and the corresponding parameter settings are rated from 1 (worst) to 10 (best). Ratings are averaged across f’s with equal weighting for each f and brain, and the parameter set with the maximum score is selected as the individual winner(s) for the brain. Performance was evaluated by averaging 5 fiber tracking runs with default parameters on the training set and comparing against the average of the individual winners: \(TPR_G\) improved from \(0.3\pm 0.11\) to \(0.5\pm 0.07\) and \(TPR_I\) from \(0.2\pm 0.09\) to \(0.34\pm 0.07\). For the FP objectives, the optimization kept values down, with no substantial changes: \(FPR_G\) from \(0.023\pm 0.037\) to \(0.04\pm 0.03\) and \(FPR_I\) from \(0.005\pm 0.006\) to \(0.01\pm 0.006\). The restrictive effect of the FP-related objectives is seen in \(FPR_I\) versus \(FPR_G\) (Fig. 2), where the best solutions are located in the desired bottom-left area. Standard parameters were calculated as the mean and standard deviation of the best solution parameters over the 10 brains: angle: \(32.2\pm 6.3\), cutoff: \(0.05\pm 0.012\), and minlength: \(4.8\pm 2.5\). Validation by test set data The standard settings are validated by performing 5 fiber tracking runs on the training and test sets, averaging objective values for each set, and comparing with the corresponding default performance (Fig. 4a). \(TPR_G\) improved notably from \(0.32\pm 0.13\) to \(0.472\pm 0.14\) (test set) and from \(0.3\pm 0.11\) to \(0.46\pm 0.12\) (training set). Both performances are similar to those of the individual winners above, which suggests the robustness of the optimized parameters in enhancing brain-wide, region-level connections. \(TPR_I\) also advanced to better values; however, performance differs between the test and training sets, possibly due to individual variability of the brains. \(FPR_I\) and \(FPR_G\) growth was controlled efficiently, with values close to 0. \(FPR_G\) moved slightly from \(0.03\pm 0.05\) to \(0.07\pm 0.082\) (test set), and from \(0.023\pm 0.037\) to \(0.065\pm 0.05\) (training set). Low values of FPR, together with a fixed fiber density, demonstrate the efficiency of the framework in constraining the dominance of FP. Best solutions were mapped onto the \(TPR_G\) versus \(FPR_G\) ROC curve (Fig. 4b, dark-blue x markers). Under the MCDA decision criteria, the objectives were weighted equally in the solution selection process. An example of differently weighted criteria, in which the TPR weights are set slightly above the other objectives, shifts winners to better values of \(TPR_G\) (blue circles); weighting the FPR objectives more heavily instead shifts winners to better values of \(FPR_G\) (blue squares). Spearman’s rank correlation coefficients from the \(20\times 500\) connectome comparisons are color coded. Best solutions, on average, reached a correlation of \(0.67 \pm 0.05\).
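As an illustration of applying the standard settings, the sketch below draws one parameter set from the reported mean ± s.d. and invokes MRtrix3’s tckgen; the file names are hypothetical, and the paper’s random whole-image seeding is approximated here with -seed_image on a brain mask.

```python
import subprocess
import numpy as np

rng = np.random.default_rng(0)
# Standard iFOD2 parameters reported above (mean, s.d.);
# minlength is clipped to the exploration lower bound of 1.0 mm.
angle = rng.normal(32.2, 6.3)
cutoff = rng.normal(0.05, 0.012)
minlength = max(rng.normal(4.8, 2.5), 1.0)

subprocess.run([
    "tckgen", "-algorithm", "iFOD2",
    "-angle", f"{angle:.1f}",
    "-cutoff", f"{cutoff:.3f}",
    "-minlength", f"{minlength:.1f}",
    "-select", "300000",              # number of output fibers used here
    "-seed_image", "brain_mask.mif",  # approximates random whole-brain seeding
    "wm_fod.mif", "tracks.tck",
], check=True)
```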
Figure 4 Performance on test data and comparison with training results for iFOD2. (a) Comparison of the objectives’ performance between training and test sets (average values from 5 runs for each brain) shows improvement of \(TPR_I\) and \(TPR_G\), and growth control of \(FPR_I\) and \(FPR_G\), for the optimized generic settings. \(TPR_G\) reveals similar results for test and training sets, suggesting good generalization capabilities of the optimized parameters for full-connectome estimation. (b) ROC space and Spearman’s rank correlation coefficient (color coded) for dMRI- versus neural tracer-based connectomes (\(TPR_G\) versus \(FPR_G\)). Best individual solutions (from MCDA) are indicated by dark-blue x markers. Examples of differently weighted criteria during the MCDA selection process are presented as blue circles and squares. The former implements higher weights on the TPR objectives, shifting solutions to better values of \(TPR_G\), while the latter shifts to better values of \(FPR_G\) by assigning higher weights to the FPR objectives. Full size image Global fiber tracking with fiber-passage criteria In the second experiment, we take the global fiber tracking algorithm 27, which tracks long-range connections better than seed-based methods (DMFC-fiberCup at MICCAI’2009). We explore the major parameters: width \(\sigma\), length l, weight w, chemPot c and connlike L 27 (see Global tractography and parameter selection in the “Methods” section). Criteria for evaluation Fitting can be quantified for axon trajectories at the voxel level or for projection targets at the brain-region level. An important issue in dMRI-based fiber tracking is the difficulty of tracking long connections, such as cross-hemisphere or sub-cortical connections. Accordingly, we consider the following four objective functions (Fig. 1b): (i) distance-weighted coverage, (ii) the true/false positive ratio, (iii) projection coincidence, and (iv) commissural passage, as explained below; a code sketch follows the definitions. (i) Distance-weighted coverage \(f_1=TPR^w_v=\frac{\sum _i^{N_{TP}}P_i}{\sum _i^{N_P}{P_i}}\). Here, \(P_i=\frac{d_i}{max(d)} \times \frac{w_i}{max(w)}\) is a positive voxel in the 3D tracer image reconstruction, weighted by its fluorescence intensity \(w_i\) and its distance \(d_i\) to the center of the injection region. This objective is maximized and uses \(d_i\) and \(w_i\) to promote long-range connections to voxels strongly connected to the injection region. \(N_{TP}\) is the total number of true positive voxels found in the comparison, and \(N_P\) the total number of positive voxels in the tracer data. (ii) True/false positive ratio \(f_2=\frac{TPR^w_v}{FPR_v+\epsilon }\). Here, \(FPR_v\) is the false positive rate at the voxel level, and \(\epsilon\) is a tolerance term calculated empirically as \(\epsilon =0.006\times \frac{\mu _P}{\mu _N}\), with \(\mu _N\) the average number of true negative (TN) voxels within individual whole-brain masks for the training data set and \(\mu _P\), similarly, the mean number of true positive (TP) voxels; \(\mu _N\) is a large number. \(\epsilon\) sets the minimum value of \(FPR_v\) admitted in the denominator, reflecting, for example, that tractography results would still be adequate even if voxels numbering up to 0.6% of the TP were misclassified as FP. Our optimization used \(\epsilon =0.0013\). Maximization of this objective drives \(TPR^w_v\) growth and maintains \(FPR_v\) below a reasonable level, helping to constrain the dominance of FP 26. We observed cases in which small increments of \(FPR_v\) resulted in maximization of (ii); we therefore included objective (i) explicitly to steer (ii) in the right direction. (iii) Projection coincidence \(f_3=r_{contra}\), the Spearman’s rank correlation coefficient between the neural tracer- and dMRI tractography-based connectome matrices for the contralateral hemisphere of the brain. This objective promotes accuracy of long cross-hemisphere projections. Global tractography was run twice with the same parameters, and results were averaged and mapped to the tracer-based connectome matrix of 20 injection regions \(\times\) 104 target parcels. Both matrices were log-transformed and normalized. (iv) Commissural passage \(f_4=\frac{P_{out}}{V_{out}}\). While direction-insensitive dMRI fiber tracking should yield many “false positives” in reference to anterograde neural tracers, some estimated paths are impossible, such as those crossing hemispheres outside of commissural areas. This criterion uses a binary mask at the midline, covering voxels outside anatomical commissures such as the corpus callosum and the cerebellum. \(P_{out}\) is the number of voxels of fibers crossing the midline outside commissures, and \(V_{out}\) is the total number of positive voxels of the mask. This objective is targeted for minimization, and supports the non-dominance of FP. \(f_4\) is additionally evaluated as \(f_4^*=\frac{P_{in}}{P_{in}+P_{out}}\), where \(P_{in}\) counts the voxels of fibers passing through the anatomical commissures. \(f_4^*\) gives the proportion of anatomically valid reconstructions at the commissures and the optimization accuracy for the interconnection of the two sides of the brain.
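To make these definitions concrete, here is a minimal NumPy sketch of \(f_1\), \(f_2\) and \(f_4\) on voxel arrays (\(f_3\) is a rank correlation over connectome matrices); all array names are hypothetical, and the published code should be consulted for the exact implementation.

```python
import numpy as np

def voxel_objectives(tracer_w, inj_center, fiber_mask, brain_mask,
                     comm_mask_out=None, eps=0.0013):
    """Sketch of f1, f2 and f4. tracer_w: tracer intensity image (0 outside
    positive voxels); inj_center: (x, y, z) centre of the injection region;
    fiber_mask: boolean fiber-density map (> 0) in the same space;
    brain_mask: boolean whole-brain mask; comm_mask_out: boolean midline
    mask covering voxels outside the anatomical commissures."""
    pos = tracer_w > 0
    idx = np.argwhere(pos)                       # coordinates of positive voxels
    d = np.linalg.norm(idx - np.asarray(inj_center), axis=1)
    P = (d / d.max()) * (tracer_w[pos] / tracer_w.max())
    tp = fiber_mask[tuple(idx.T)]                # positives covered by fibers
    f1 = P[tp].sum() / P.sum()                   # distance-weighted coverage
    neg = brain_mask & ~pos
    fpr_v = np.sum(fiber_mask & neg) / np.sum(neg)
    f2 = f1 / (fpr_v + eps)                      # true/false positive ratio
    f4 = None
    if comm_mask_out is not None:                # commissural passage
        f4 = np.sum(fiber_mask & comm_mask_out) / np.sum(comm_mask_out)
    return f1, f2, f4
```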
Multi-objective optimization We took the same MOO approach using NSGA-II 35 as in the previous experiment (see Optimization in the “Methods” section). The process optimizes several brains in parallel; however, because of the computational demands of the global tracking algorithm, we added parallelization at the fitness function calculation and prepared the code for HPC clusters. In this way, we perform several global tracking runs simultaneously (see Code implementation in the “Methods” section). To verify the consistency and convergence of optimized parameters across subjects, we visualize the evolution of the five parameters and four objectives for all ten training samples (Supplementary Fig. S2b). Optimization started with parameters at their default values (dotted line) and widely explored values within the defined search ranges. Over generations, the parameters for all brains converged to similar loci while improving the objectives. width, weight and chemPot converged to almost the same values (see late iterations), whereas, due to brain heterogeneity, length and connlike followed different paths to achieve the best results. This convergence serves as an indicator of parameter robustness for generalization. We chose standard parameters (the generic setting) by considering trade-offs between objectives (see Choice of standard parameters by MCDA below) and using the mean and standard deviation of the best-scoring parameters (shown by red dots and bars in Supplementary Fig. S2b and Table 1). To evaluate the optimization of the multiple objectives, we visualize the pair-wise evolution of the objectives (Fig. 5).
Multiple Pareto frontiers were developed (one per brain), most clearly seen in \(f_1\) versus \(f_2\) with dotted lines passing through the Pareto extremes (maximum value of f). This may be caused by subject individuality; however, the systematic sharing of “champions” enabled the algorithm to reach optimal parameters in a similar locus among brains. The competing goals \(f_1\), \(f_2\), and \(f_3\) were “pushed” by the optimization from the lower-left (default parameters) to the upper-right region (optimized parameters), as seen in \(f_1\) versus \(f_2\), \(f_1\) versus \(f_3\) and \(f_2\) versus \(f_3\). \(f_4^*\) maintained the proportion of valid fibers connecting hemispheres, a critical condition when the number of fibers increased and the tractography became denser. \(f_1\) versus \(f_4^*\), \(f_2\) versus \(f_4^*\), and \(f_3\) versus \(f_4^*\) indicate that \(99\%\) of the crossing fibers passed through valid commissural voxels. Results of fiber tracking with and without parameter optimization are visualized by overlapping dMRI-based fiber-density maps (red) with neural tracer data (green) (Fig. 3b and Supplementary Fig. S1b). Default settings generate sparse coverage, characterized by a few short fibers connected to the injection region. In contrast, tractography with optimized parameters presents expanded overlap with tracer signals, demonstrating higher sensitivity. Longer fibers not only connected to neighboring high-concentration neural tracer regions, but also extended to cross-hemisphere areas and distant areas within the same hemisphere. The true/false positive ratio \(f_2\) and the commissural passage \(f_4\) allow control of the volatile growth of FP, while sensitivity and long-range connections are supported by the distance-weighted coverage \(f_1\) and the projection coincidence \(f_3\). We monitored the number and mean length of fibers estimated by tractography over the course of optimization (Supplementary Fig. S2a). Both metrics increased from their default values of approximately 50,000 fibers and 10 mm to optimized values of about 200,000 fibers and 17 mm (see the fiber length performance for a brain example in Supplementary Fig. S3). Higher fiber density helped to increase sensitivity in comparisons with tracer data, while longer fibers promote distant connections between source-target pairs. However, fiber density must be constrained to avoid unrealistic results, controlled in our framework by \(f_2\) and \(f_4\). Figure 5 Objective function optimization for global tracking. Pair-wise visualization of the optimization of the four proposed objective functions: \(f_1\): distance-weighted coverage, \(f_2\): true/false positive rate, \(f_3\): projection coincidence, and \(f_4^*\): commissural passage. Our framework drives objectives toward the Pareto front in the upper-right direction. MCDA-based best objective trade-offs across brains are shown as red x markers. The standard setting is computed as their mean and standard deviation. Full size image Choice of standard parameters by MCDA We used MCDA to select the best trade-off solutions and the standard set of parameters, as in the previous experiment. Rated parameters were arranged in a matrix of \(40 \times m\), where the 40 rows correspond to the 4 objectives \(\times\) 10 brains and m is the number of parameter settings generated over the optimization. After averaging ratings across f’s, the maximum-scoring parameters were selected as the winner(s) for the brain (Supplementary Fig. S4); a minimal sketch of this rating scheme is given below.
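The following sketch assumes all objectives have been sign-flipped so that larger is always better (e.g., \(-f_4\) in place of \(f_4\)); weighted variants simply replace the equal-weight mean with a weighted average, as in the differently weighted examples of Fig. 4b.

```python
import numpy as np

def mcda_winner(F):
    """MCDA rating to pick a winner per brain.
    F: array of shape (n_objectives, m) holding each objective's value
    for every one of the m parameter settings tried during optimization."""
    lo = F.min(axis=1, keepdims=True)
    hi = F.max(axis=1, keepdims=True)
    # Rate each setting from 1 (worst) to 10 (best) per objective, by which
    # of 10 equal sub-intervals of [min(f), max(f)] its value falls into.
    ratings = 1 + np.floor(9.999 * (F - lo) / np.maximum(hi - lo, 1e-12))
    scores = ratings.mean(axis=0)      # equal weighting across objectives
    return int(np.argmax(scores))      # index of the winning setting
```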
Finally, the standard set of parameters is obtained as the mean and standard deviation of the winning parameters over the 10 brains. The result is shown in Table 1 along with the default parameters. Table 1 Standard parameters for global tracking obtained by multi-objective optimization and MCDA over multiple marmoset brains. Full size table Validation To validate the effectiveness of the optimized parameters above, we compared training and test datasets in terms of the proposed objectives, for default and optimized parameters. First, considering only the training set, we performed 5 global tractography runs each for the default and optimized settings. In the latter case, each parameter value is drawn from a normal distribution with the mean and standard deviation given in Table 1. Tractography results are averaged for each brain and shown along with the performance of the MCDA-selected winners for comparison (Supplementary Fig. S5). For individual winners and common standard parameters, on average, \(f_1\) obtained values of \(0.067\pm 0.036\) and \(0.024\pm 0.012\), \(f_2\) values of \(11.24\pm 1.98\) and \(7.38\pm 1.88\), \(f_3\) \(0.68\pm 0.016\) and \(0.62\pm 0.06\), and \(f_4^*\) 0.99 and 0.99, respectively. The standard parameters generalize well for improving cross-hemisphere projections (\(f_3\)) and commissural passage (\(f_4^*\)). For \(f_1\) and \(f_2\), although the standard parameters achieved lower scores than the winners, they outperformed the default settings. Compared to the results with default parameters, on average, \(f_1\), \(f_2\) and \(f_3\) advanced from their low values (\(0.003\pm 0.002\), \(2.3\pm 1.4\) and \(0.4\pm 0.05\), respectively) to considerably better, optimized values (as shown above), reaching a superior distance-weighted coverage \(f_1\) while constraining false positives through \(f_2\) and \(f_4\). \(f_4^*\) showed similar results for the three sets of parameters. For the default case, coverage is low and few fibers were generated, which leads to a high value of \(f_4^*\); when \(f_1\) increased by optimization, many more fibers were estimated, and a persistently high value of \(f_4^*\) indicates that accuracy at the commissural passage was maintained. The generalization capability of the optimized parameters was also evaluated on 6 unseen marmoset brains (test set, Fig. 6a). We ran tractography 5 times using default parameters and standard optimized parameters. Results show improvement for \(f_1\), \(f_2\) and \(f_3\) for all brains. \(f_1\) improved on average from \(0.0001\pm 0.0002\) to \(0.006\pm 0.006\), \(f_2\) from \(0.08\pm 0.18\) to \(3.2\pm 2.7\) and \(f_3\) from \(0.28\pm 0.1\) to \(0.573\pm 0.06\). As expected, \(f_4^*\) showed similar results of about 0.99. Figure 6b summarizes the averaged performance for training and test data sets, showing similar results, with \(f_3\) showing the best generalization performance. Optimized parameters improved results in terms of the desired objectives in both cases, validating the proposed standard parameter settings. The improvements are clearly recognized in Supplementary Fig. S6 for a brain sample, which visualizes in high resolution the 3D reconstruction of the ground-truth neuronal tracer signal (green) and, as density maps, the global tracking fibers (red) in contact with the injection region. Optimization improves fiber-density map matching with the neuronal tracer. Standard parameters perform similarly, with lower-density results.
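A small sketch of this sampling step follows; the (mean, s.d.) pairs and lower bounds below are placeholders, not the published Table 1 values, which should be substituted when reproducing this.

```python
import numpy as np

rng = np.random.default_rng()

# Placeholder (mean, s.d.) pairs standing in for the Table 1 values.
standard = {"width": (0.07, 0.01), "length": (0.45, 0.08),
            "weight": (0.05, 0.02), "chemPot": (0.15, 0.05),
            "connlike": (1.5, 0.8)}
# Heuristic exploration lower bounds quoted in the Methods section.
lower = {"width": 0.01, "length": 0.24, "weight": 0.01,
         "chemPot": 0.05, "connlike": 0.5}

# Draw one global-tracking setting, clipped to the lower bounds.
setting = {name: max(rng.normal(mu, sd), lower[name])
           for name, (mu, sd) in standard.items()}
```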
\(f_1\) takes small values because coincidences with fiber-covered voxels are averaged over thousands of neural tracer voxels, and because fibers are mapped to a high-resolution space (the standard brain). We also evaluated the strength-weighted coverage \(f_1^*=\frac{\sum _i^{N_{TP}} w_i}{\sum _i^{N_P} w_i}\) of axonal tracts at the voxel level for the training set (see Supplementary Fig. S7) over the generated parameter settings. The coverage improved on average from 0.9% (default) to 15% (MCDA-selected winners). Finally, we evaluated region-level connectome matrices estimated by dMRI-based tractography in reference to the Brain/MINDS marmoset connectome data over the course of the optimization. Tractography-based matrices were mapped to the \(20\times 500\) structure, as with the group objectives of the 1st experiment. We calculated Spearman’s correlation, TPR and FPR (Fig. 6c). The best optimization results (blue x’s) substantially overlap settings close to the ideal ROC point (0.0, 1.0) (green circles), and reached on average: \(FPR=0.33\), \(TPR=0.78\), distance to the ideal point \(d=0.163\), and correlation coefficient \(r=0.724\). Qualitatively, improvements are recognized by matrix visualization, using coarse-grained parcellation for a brain sample (Supplementary Fig. S8). Compared to the sparse connections using default parameters (bottom matrix), tractography using optimized parameters (center matrix) revealed denser and longer connections, enhancing connectivity to projection areas in the right hemisphere (left half of the matrices) from their origins in the left hemisphere. Optimized dMRI-based tractography can complement the sparse structural network obtained from tracer injections (top matrix). Figure 6 Performance on test data and region-level connectomes. (a) Objective function comparison (average values for 5 runs) for 6 additional marmosets shows improvement of \(f_1\), \(f_2\) and \(f_3\), and consistency of \(f_4^*\). (b) Performance comparison between training and test data sets for the default and optimized settings. \(f_3\) is the most improved objective; however, improvements in \(f_1\) and \(f_2\) contributed to better results, as did the consistency of \(f_4^*\) for denser tractograms. (c) Spearman’s rank correlation coefficient (color coded) mapped onto TPR-FPR space, from comparisons of 20 \(\times\) 500 neural tracer- and tractography-based matrices, for the entire optimization process. Optimized tractography results (dark blue x’s) closer to the ideal ROC coordinate (green circles) show high correlation. Full size image Comparison of tracking algorithms and objectives For both the iFOD2 and global tracking algorithms, optimization increased spatial coverage and fiber length (Fig. 3, Supplementary Fig. S1), with better performance in the global tracking case. Fiber length improved on average by 4 mm (8–12 mm) for iFOD2 and 7 mm (10–17 mm) for global tracking. Comparison between neural tracer- and dMRI-based connectomes (Fig. 4b versus Fig. 6c) shows lower values of \(TPR_G=0.5\) and \(FPR_G=0.04\) for iFOD2, against global tracking performance of about \(TPR_G=0.78\) and \(FPR_G=0.33\). Figure 7 Whole-brain dMRI-based optimized and default connectomes. Square dMRI-based matrix comparison for one brain-subject example, using optimized (left) and default (right) parameters for (a) iFOD2 and (b) global tracking algorithms. Full size image
The main causes of these differences are that global tracking implemented “tolerance” for FP, while iFOD2 optimized objectives that explicitly control FP. Tolerance is relevant because dMRI-based tractography finds both incoming and outgoing fibers to and from an ROI, whereas an anterograde tracer-based connectome contains only outgoing fibers; some “false” positives are therefore reasonable. In the global tracking experiment, a tolerance term \(\epsilon\) is implemented explicitly in \(f_2\); moreover, because FP was constrained only at the voxel level, in comparisons with neural-tracer 3D reconstructions (\(f_2\)) and at the commissural passage (\(f_4\)), additional FP tolerance remained for fibers outside the boundaries implicitly defined by the objectives, namely fibers unconnected to the injection region, fibers outside the coverage of the tracer references, and fibers not crossing the commissures. The iFOD2 case, on the other hand, minimized group and individual FP on region-level connectomes built from tractograms with a fixed fiber density. Nevertheless, the Spearman’s rank correlation coefficients (average of the best solutions) for the two cases are similar: \(r=0.67\) (iFOD2) and \(r=0.724\) (global tracking). Consequently, connectivity is enhanced not only from/to the injected regions, but brain-wide (Figs. 3, 7), showing richer connection estimates for the optimized cases. This demonstrates that better connectomes can be achieved by applying our framework independently of changes in fiber density (see the fixed-density case, Fig. 7a), and suggests that, despite using partial references from tracer injections in the left prefrontal cortex, whole-brain connectivity can be improved. The results of the two experiments demonstrate the general applicability of our framework to different fiber-tracking algorithms and evaluation criteria, and confirm the importance of objective design for improving fiber tracking (see “Discussion” section). Discussion We optimized and validated parameters of fiber tracking algorithms 27, 33 by exploiting fluorescent tracer and dMRI data from the same marmoset brains in the Brain/MINDS project 30. To address the competing goals of sensitivity and specificity for multiple brains, we took a parallel, multi-objective optimization framework. Optimization was based on an NSGA-II evolutionary approach and implemented champion parameter sharing across brains to promote parameter generalization while maximizing objectives (Fig. 8). For the iFOD2 algorithm, four objective functions (Fig. 1a) were used for region-level assessments: two group objectives (\(TPR_G\) and \(FPR_G\) rates, for comparisons with Brain/MINDS tracer connectome data from 20 marmosets), and two individual objectives (\(TPR_I\) and \(FPR_I\) rates, for comparisons with individual tracer injection data). Optimization constrained FP efficiently to values below 4% on average (Fig. 2), drove the correlation with Brain/MINDS connectome areas to 67% (Fig. 4b), and resulted in substantial increases of TP (Fig. 4a). With respect to default performance, \(TPR_G\) improved from 30% to 50% (Fig. 2), and the average fiber length from 8 mm to 12 mm (Fig. 3a, Supplementary Fig. S1a). Improvements were independent of growth or loss of fiber density; instead, they relied entirely on better parameters. For the global tracking algorithm, we developed four objective functions (Fig. 1b):
two voxel-level objectives (\(f_1\): distance-weighted coverage, \(f_2\): true/false positive ratio), a region-level objective (\(f_3\): projection coincidence), and an anatomical constraint (\(f_4\): commissural passage). During optimization, while constraining impossible fibers at the commissural passage and controlling the growth of false positives, our framework improved dMRI-based fiber tracking performance with respect to default values: average fiber length from 10 to 17 mm (Fig. 3b, Supplementary Fig. S1b, S2 and S3), voxel-wise coverage of axonal tracts from 0.9% to 15% (Supplementary Fig. S7), and correlation of target areas from 40% to 68% (Supplementary Fig. S5). Originally, we started this effort by optimizing a single objective function, such as \(C^2 = FPR_v^2+(1-\frac{\sum _i^{N_{TP}}d_i}{\sum _i^{N_P}d_i})^2\), where the second term is the normalized sum of distances from TP voxels to the center of mass of the injection region, similar to \(f_1\) but using only \(d_i\) as a weighting factor. However, results for the combined single-objective function, optimized by the covariance matrix-adaptation evolution strategy (CMA-ES) 36, 37, were unsatisfactory, with a huge density of fibers, dominance of false positives, and many fibers crossing hemispheres outside the commissures. An important feature of our work is that comparisons of dMRI and reference data are performed in parallel for multiple brains, which can account for individual variability. From the multiple Pareto-optimal solutions for the multiple brains, we used an MCDA method to select a standard set of parameters (see Multiple criteria decision analysis for standard parameters, and Table 1). Using test brain samples that excluded those used for optimization, we verified that the standard parameters substantially improve fiber tracking performance compared to the default parameters (Figs. 4, 6, 7 and Supplementary S8). Standard parameters generalized better on objectives evaluating connectome similarities (\(TPR_G\), Fig. 4) and correlations (\(f_3\), Fig. 6a,b), whereas the effect of individual subject variability is noticeable in objectives measuring local features. Improvements were similar in both experiments, but more prominent for the global tracking algorithm with its more elaborate objectives and tolerance for FP. The 1st experiment, however, verified improvements on a widely used tracking algorithm (MRtrix3), with simpler objective functions and lower computational requirements (see Code implementation in the “Methods” section). Results on unseen subjects demonstrate the generalizability of the standard parameters across marmosets. Although both experiments used reference data from 20 tracer injections in the prefrontal cortex, improved tracking was not limited to that area but extended to the whole brain, as illustrated by the extended fibers in Fig. 3 and the brain-wide region-to-region connectomes in Fig. 7. The Brain/MINDS marmoset connectivity map is an on-going effort. New reference data are expected in the short term; our framework will then re-run optimizations on the complete data sets, using the standard parameters reported here as initial conditions. In addition, an important follow-up will be to verify whether the same solution applies to diseased animals, making new comparisons between dMRI and tracer data for those marmoset subjects.
Our optimization and validation framework can be flexibly applied to different tracking algorithms and objective functions, as demonstrated in the two reported experiments, as well as to different species. Complete tracer data sets exist for mice 38 and macaques 39; although obtaining similar tracer data from human subjects would be difficult, our framework allows integration of multiple biological constraints 40. Applying the method to other species will be important, not only for improving current results, but for verifying the consistency and scaling of optimal parameters across species. The implementation code is available to the scientific community for improving the accuracy and reliability of dMRI-based fiber tracking. The framework allows assimilation of additional data as references. Recently, Zhang et al. 41 proposed optimization of dMRI-based fiber tracking using the region-level coincidence with neural tracer data in the CoCoMac database 39 and matching of fiber orientations with myelin staining data from a single macaque brain 42. They took as the criterion the average of Youden’s index (sensitivity + specificity − 1) 43 for connected regions and the coincidence index of fiber orientation, and performed a grid search in a two-dimensional parameter space of a fiber tracking algorithm 44. Other possible references include molecular cues to targets 45 and connectivity reported by electrophysiological experiments 46. Multiple references are desirable, and the framework manages them in a data-driven manner. We think that more comparisons are better, despite the low dMRI resolution and lack of directionality; comparisons are beneficial in a wider sense, from potentially resolving crossing-fiber issues to clarifying the limitations of fiber tracking. How to define the best objective functions from the available data, especially when the data sources are not strictly the ground truth but an approximation, poses new challenges. Our optimization used equally weighted objectives to mitigate well-known issues of dMRI fiber tracking, but differently weighted objectives may work better. Objective functions can have one of two roles: “promoter” functions that maximize mapping between reference and estimated data, and “constrainer” functions that minimize assumed incorrect data mapping. A suitable definition of objectives will play an important role in avoiding over-promoted or under-penalized results. Incorporation of hybrid objectives, such as the true/false positive ratio, may suffice to mitigate unbalanced optimization. Other important factors in choosing objective functions include whether to use global features, such as brain-wide connectome similarities, or local features, such as voxel-level axon trajectory mapping. Objective functions designed on top of noisy and partial observations of the ground truth should allow tolerance for “false positives”, as in the case of incoming fibers to the injection region for anterograde neural tracer data. We designed the multi-objective framework to improve the important objectives evenly while providing tolerance, on the rationale that cohesion of the optimized objectives in trade-off solutions suppresses genuinely undesirable fiber tracking estimates. In that context, an additional challenge is how to choose the best solution from a multi-dimensional Pareto front. We took a multiple criteria decision analysis (MCDA) approach that implements criteria weighting and scoring.
MCDA is useful when some objectives need to gain more than others due to their relative importance, unbalanced conditions, or a deficient objective set-up. Conclusion We proposed a flexible framework that improves dMRI-based fiber tracking by multi-objective optimization using neural tracer data as a reference. The framework runs with data from multiple brains cooperatively and in parallel. It was tested on different tractography algorithms, parameters, and objectives, and showed improvements in terms of the defined objectives and other criteria for training and test data sets. Multiple objective functions were designed to address critical issues in dMRI tractography. For the iFOD2 algorithm, the parallel optimization process successfully constrained false positives while increasing sensitivity. For the global tracking algorithm, it promoted sensitivity, strong long-range connections, and high correlation with contralateral projection areas, while controlling unrealistic fibers at the commissural passage and false positives in comparison with the neural tracer. These results indicate the importance of optimization and validation of dMRI-based fiber tracking algorithms, and also raise concerns about connectome studies that lack validation of fiber tracking algorithms. There is a real opportunity to exploit the multi-modal data being generated by multiple global brain projects to establish reliable methods for inferring brain structures, functions, and their relationships. Our work provides a framework to implement this. Methods Statement on the use of experimental animals Marmosets were not directly used in the present work. Imaging data were obtained in a separate collaborative study, and will be made available upon publication of the corresponding study. Although there was no direct use of experimental animals, we want to emphasize that the fluorescent neural tracer experiments and diffusion-weighted magnetic resonance imaging in Brain/MINDS were conducted with the approval of the Animal Experiment Committee of RIKEN, in compliance with all required regulatory and ethical guidelines. Optimization The non-dominated sorting genetic algorithm II (NSGA-II) 35 is arranged for parallel optimization of the training set (Fig. 8). 1st experiment initial settings Parameters \(\theta\)=[angle, cutoff, minlength] are initialized to their default values \(\mu _{\theta }=[45,0.1,2.0]\), while the exploration ranges are set heuristically with lower [10, 0.01, 1.0] and upper [90, 1.0, 18.0] bounds, respectively. A population M of size 8 is drawn from random distributions with mean \(\mu _{\theta }\) and standard deviation \(\sigma _{\theta }=0.01\), except for \(\sigma _{\text{ cutoff }}=0.001\). Each element \(M_i\) of M, called an individual, is an array of length 3, corresponding to the parameters \(\theta\) to optimize. 2nd experiment initial settings Parameters \(\theta\)=[width \(\sigma\), length l, weight w, chemPot c, connlike L] 27 (see Global tractography and parameter selection in the “Methods” section) are initialized to their default values \(\mu _{\theta }=[0.1,0.3,0.133,0.2,0.5]\) 27, and the exploration is defined within heuristically determined lower [0.01, 0.24, 0.01, 0.05, 0.5] and upper [0.15, 0.65, 0.22, 0.6, 6.0] bounds. A population M of size 8 is drawn from random distributions with mean \(\mu _{\theta }\) and standard deviation \(\sigma _{\theta }=0.01\), except for \(\sigma _{weight}=0.001\). Each individual \(M_i\) is an array of length 5.
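A minimal sketch of this initialization for the 2nd experiment, assuming the DEAP library (the published code may differ in detail; the narrow normal spread around the defaults is our reading of the description above):

```python
import numpy as np
from deap import base, creator, tools

# Experiment 2: maximize f1, f2, f3 and minimize f4.
creator.create("Fitness", base.Fitness, weights=(1.0, 1.0, 1.0, -1.0))
creator.create("Individual", list, fitness=creator.Fitness)

defaults = np.array([0.1, 0.3, 0.133, 0.2, 0.5])   # width, length, weight, chemPot, connlike
sigma = np.array([0.01, 0.01, 0.001, 0.01, 0.01])  # narrow spread around defaults
low = [0.01, 0.24, 0.01, 0.05, 0.5]                # heuristic exploration bounds
up = [0.15, 0.65, 0.22, 0.6, 6.0]

rng = np.random.default_rng(0)

def make_individual():
    # Random initialization around the default parameters, clipped to bounds.
    theta = np.clip(rng.normal(defaults, sigma), low, up)
    return creator.Individual(theta.tolist())

population = [make_individual() for _ in range(8)]
```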
Generational process Fitness values \(f(M_i)\) of the initial population M are calculated and the generational NSGA-II 35 process begins. Depending on the fitness values, a dominance-based tournament selection between 2 individuals \(M_i\) is performed; if the pair does not inter-dominate, selection is decided by the crowding distance 35. With repetition, the tournament selects 8 offspring. We invalidate the fitness of the offspring and perform crossover and mutation directly. Crossover picks individuals at even positions of the offspring array and pairs them with individuals at odd positions, using simulated binary crossover 47, which is applied to each pair with probability \(cxp=0.2\) of matching two individuals. Mutation is applied to all individuals among the offspring using a polynomial approach 47. Offspring fitness values are then calculated. From the combined set of parents and offspring, the next generation of 8 elements is selected based on fitness values and spread 35. In addition, the best individual is selected from the combined set as the local “champion”. Champions are shared among brains to promote convergence of parameters to a similar locus. A process barrier is used as a synchronization step to allow the \(n=10\) training brains to receive the \((n-1)=9\) external champions. Once all champions are shared, the process barrier is released and the process continues. From the next-generation set, the 3 dominant individuals are selected by tournament 35 and added to the champion set. Crossover with \(cxp=1\) is applied to the extended champion set by matching even- with odd-positioned individuals, as in the preceding matching step. We compute fitness values for the original and matched champions, and a final selection of the best 8 individuals from the total set “next generation + original champions + matched champions” is used to update the next-generation set M. From M, in like manner, offspring are selected and the process continues for another generation. 1st experiment adjustments From the 8th generation, bounds were constrained to [25, 0.05, 1.0] (lower) and [55, 0.5, 10.0] (upper) to accelerate the optimization process. 2nd experiment adjustments The process initially explored parameter values widely and, after several iterations, gradually exposed a bifurcation of the search: most parameters roughly followed an exploration path on each side of the default value. To decide which path leads to better objectives, we compared objective values (Supplementary Fig. S2b). This comparison helps to constrain exploration by shrinking the search intervals toward better values, reducing computation time and speeding up optimization. With the new exploration bounds, lower [0.01, 0.32, 0.01, 0.01, 0.1] and upper [0.10, 0.65, 0.13, 0.22, 3.0], parameters stabilized after approximately the 20th iteration. Both optimizations ran for 10 brains in parallel (training set) and stopped when slight changes in parameters produced almost no change in objective values, reaching \(E=32\) generations. Because the optimization calculates fitness values twice for each generation (3 times for the initial one), the total number of iterations for each brain was \(E^*=32\times 2+1\).
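Continuing the DEAP-based sketch above, one generation with champion sharing across per-brain MPI processes could look roughly as follows. tools.selTournamentDCD, tools.cxSimulatedBinaryBounded and tools.mutPolynomialBounded correspond to the operators named in the text; evaluate() stands for the fiber-tracking fitness computation and is hypothetical, the eta values are assumptions, and the champion-crossover step described above is condensed here into a final environmental selection.

```python
import random
from copy import deepcopy
from mpi4py import MPI
from deap import tools

comm = MPI.COMM_WORLD  # one MPI rank per training brain

def one_generation(pop, evaluate, low, up, cxp=0.2):
    """One NSGA-II generation with champion sharing (simplified sketch).
    Assumes every individual in pop already has a valid fitness."""
    pop = tools.selNSGA2(pop, len(pop))  # also assigns crowding distances
    offspring = [deepcopy(ind) for ind in tools.selTournamentDCD(pop, len(pop))]
    for a, b in zip(offspring[::2], offspring[1::2]):  # pair even/odd positions
        if random.random() < cxp:
            tools.cxSimulatedBinaryBounded(a, b, eta=20.0, low=low, up=up)
    for ind in offspring:
        tools.mutPolynomialBounded(ind, eta=20.0, low=low, up=up,
                                   indpb=1.0 / len(ind))
        ind.fitness.values = evaluate(ind)  # run fiber tracking + objectives
    nxt = tools.selNSGA2(pop + offspring, len(pop))
    champion = tools.selNSGA2(nxt, 1)[0]
    # Synchronized exchange: each brain's process receives the other
    # brains' champions before the final selection.
    others = [c for r, c in enumerate(comm.allgather(champion)) if r != comm.rank]
    return tools.selNSGA2(nxt + others, len(pop))
```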
Figure 8 Multi-objective optimization (MOO) process. (a) From the initial population of parameters (parents, light blue dots), fitness values are obtained. Tournament selection (selection 1) creates offspring (purple dots). Crossover and mutation are performed on the offspring and fitness values are calculated. From the combined set of parents and offspring, selection of the best elements (selection 2) creates the next generation (green dots). The best elements are shared between brain optimization processes (red dots). The most dominant elements (selection 3) are taken from the next generation and mixed with champions via a crossover operation. After obtaining objective values for the matched elements and original champions, the next generation is upgraded by selecting the best elements from the joint set “next generation + original champions + matched champions” and passed as parents to the next iteration. (b) One MOO process runs for each brain of the training data set, (c) sharing the i-th generation champion with all MOO processes (black arrows) and receiving external champions as well (red arrows). Figure created using The MRtrix viewer 3.0.1 ( ) and Inkscape 1.0beta2 ( ). Image datasets are part of the Brain/MINDS project (see Data availability section). Full size image Code implementation The method reported here was implemented on an HPC cluster for the global tracking algorithm. It processes several brains in parallel while sharing champion settings over the generations. Separate jobs are generated (1 job per brain) and synchronized for sharing; each job keeps running its evolutionary process and was tested on a single core with low memory. An additional parallelization of the fitness function was added due to the computational demands of global tracking, allowing several global tracking runs at the same time. A single global tracking run takes from 1 to 3 h in the initial generations; fiber density and length increase gradually as the parameters improve, and every run then becomes computationally expensive. For each fitness function calculation, the synchronized jobs dispatch 8 “heavy” jobs (1 job per individual parameter setting). A “heavy” job uses more than 1 core and requires more memory for reading data sources (masks, neural tracer reconstructions, dMRI, atlas, injection regions), performing global tracking n times, calculating objective functions, and recording results (job information, parameters, tractograms, density maps, champions, connection matrices, objective values) in a folder-organized structure. In this way, our framework’s parallelization is implemented at the level of individual brains and of global tracking runs. The whole optimization process took around 4 to 5 weeks. Additionally, an alternative portable implementation is available for desktop PCs, targeting commonly used tractography approaches that do not require substantial computing resources. This version exploits mpi4py 48 to run parallel evolutionary processes, one per brain, while the fitness function runs sequentially within each process. Champion sharing and process synchronization are implemented as well. The iFOD2 algorithm optimization used this version of the code, obtaining results in less than 50% of the HPC implementation’s running time. Besides the fiber tracking algorithm, the complexity of the objective functions and the number of fibers to generate may affect performance. Fluorescent neural tracer data Segmented neural tracer 3D images (Figs. 3, 8b, Supplementary Fig. S1 and S6a) were generated by marmonet 32, the Brain/MINDS AI-driven pipeline for automated segmentation of tracer signals.
It incorporates state-of-the-art machine learning techniques based on convolutional neural networks 49 and robust image registration. Raw images show the fluorescent signal of an anterograde tracer, a protein-based viral tracer that labels axons from cells in the injection region to their points of termination. Images are taken using two-photon microscopes, TissueCyte 1000 or TissueCyte 1100. Initially, they show varied patterns, shapes, contrasts, and intensities. After marmonet pre-processing, image stitching, and segmentation, high-contrast reconstructions of the injection region and its center, the corresponding cell bodies, and the axonal tracer signal are obtained. Segmentation results include voxel-intensity weighting from the raw tracer signal. All processed images are mapped from their native \(1.39 \times 1.34 \times 50\, \upmu \mathrm{m}^3\) resolution to the Brain/MINDS reference image space of \(100 \times 100 \times 200 \, \upmu \mathrm{m}^3\) resolution. Tracer injection regions and their centers, as 3D reconstructions, were used in our optimization as well. Despite the differences between neural tracer data and dMRI tractography, important voxel-level features from the 3D tracer segmentation images were exploited by the framework to improve fiber tracking results. We considered voxel intensity and distance to the injection region as important features for promoting strong, long-range connections in dMRI-based tractography, and thus treated both features as common characteristics. Diffusion MRI dMRI data were generated by ex-vivo marmoset experiments. Marmosets were perfusion-fixed (Table 2) and the brains were extracted. Brains were immersed in PFA for 2–3 days, which was then replaced with PBS. MRI imaging was performed on brains immersed in Fluorinert. A 9.4-Tesla small-animal MR scanner was used, controlled with Bruker ParaVision 6.0.1. The solenoid coil had an inner diameter of 28 mm. Diffusion imaging used a spin-echo diffusion-weighted, echo-planar imaging sequence with repetition time \(TR= 4000\,\mathrm{ms}\), echo time \(TE = 21.8\,\mathrm{ms}\), and b-value \(= 5000\,\mathrm{s/mm}^2\). The acquisition matrix was \(190 \times 190 \times 105\) over a \(38 \times 38 \times 21\,\mathrm{mm}^3\) field-of-view (FOV), resulting in a native isotropic image resolution of \(200\, \upmu \mathrm{m}\). The diffusion sampling protocol included 128 unique diffusion directions and 2 non-diffusion-weighted (b0) measurements (the first b0 image was removed because it usually contains noise). Total acquisition time was 2 h 40 min per sample. Table 2 Characteristics of marmoset brains used in this study. The same brains were handled for tracer injections and dMRI imaging. Full size table Pre-processing dMRI data, bvec and bval files, and individual whole-brain masks were acquired from the Brain/MINDS dMRI pipeline. dMRI was de-noised using MRtrix3 34 in 3 steps: first, we applied dwidenoise, which exploits data redundancy in the PCA domain using random matrix theory 50, 51; second, mrdegibbs removed Gibbs ringing artifacts by local subvoxel shifts 52; finally, a mask filter was applied to the whole-brain mask, eroding 2 voxels to remove noise at the boundaries and to constrain abnormal fiber growth during fiber tracking. Injection region masks were dilated by 2 voxels to improve detection of fibers contacting them, guarding against potential bias in the registration and injection region detection.
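These steps map directly onto MRtrix3 commands; a minimal sketch with hypothetical file names:

```python
import subprocess

def run(*cmd):
    """Run an MRtrix3 command, raising on failure."""
    subprocess.run(cmd, check=True)

# Denoising and Gibbs-ringing removal, as described above
run("dwidenoise", "dwi.mif", "dwi_den.mif")
run("mrdegibbs", "dwi_den.mif", "dwi_deg.mif")

# Erode the whole-brain mask by 2 voxels; dilate injection masks by 2 voxels
run("maskfilter", "brain_mask.mif", "erode", "brain_mask_ero.mif", "-npass", "2")
run("maskfilter", "injection.mif", "dilate", "injection_dil.mif", "-npass", "2")
```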
For registration tasks we used b0 images and the advanced normalization tools (ANTs) 53. Density maps Evolutionary optimization requires comparison of fiber-density maps in standard brain space against neural tracer data (Fig. 1b). A fiber-density map is built for each individual (a particular parameter setting) using MRtrix3 commands. First, the duplicated fiber tracking results are transferred from dMRI space to standard brain space by normalization mapping (tcknormalise or tcktransform). In the latter space, tractograms are intersected with the corresponding tracer injection region using tckedit. The resulting subset of fibers, as well as the complete tractogram, are converted to density maps by tckmap and averaged over the duplicated tractography runs. The density map corresponding to the subset of fibers is used for computation of \(f_1\) and \(f_2\); similarly, \(f_4^*\) is measured by intersecting the commissural mask with the density map of the complete tractogram (a sketch of this pipeline is given after the next paragraph). Voxel weighting Each voxel of \(TPR^w_v\) (\(f_1\) and \(f_2\)) is weighted by 2 factors obtained from the neural tracer data, the distance \(d_i\) and the intensity \(w_i\) (Fig. 1b). The center of the injection region contains a few voxels; it is refined to a unique voxel by averaging the x, y, and z coordinates of those voxels, giving a single 3D position. The updated center is used to calculate the distances \(d_i\) from all TP voxels to the injection center, and the \(d_i\) are normalized by the maximum observed distance. The neural tracer 3D images provide the voxel intensities \(w_i\), which are associated with connection strengths from the injection region; similarly, the \(w_i\) are normalized by the maximum observed intensity.
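The density-map pipeline referenced above maps onto MRtrix3 commands roughly as follows; file names are hypothetical and the normalization warp is assumed to be precomputed.

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Map a tractogram from dMRI space to the standard brain
run("tcktransform", "tracks.tck", "warp.mif", "tracks_std.tck")

# Keep only fibers intersecting the (dilated) tracer injection region
run("tckedit", "tracks_std.tck", "tracks_inj.tck", "-include", "injection_dil.mif")

# Convert both tractograms to fiber-density maps on the standard-brain grid
run("tckmap", "tracks_inj.tck", "density_inj.mif", "-template", "standard.mif")
run("tckmap", "tracks_std.tck", "density_all.mif", "-template", "standard.mif")
```

The injection-restricted map feeds \(f_1\) and \(f_2\), while the whole-brain map is intersected with the commissural mask for \(f_4^*\).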
In addition, two more parameters were considered: chemPot2, the cost c of adding a particle, and chemPot1 (similar to chemPot2 and also known as the particle potential; it regulates the number and distribution of particles). To test the significance of the selected parameters, we pre-evaluated them by running global tracking on 3 brains and assessing the variability in fiber number and length caused by changing a single parameter while keeping the others fixed at their default values (Supplementary Fig. S9); an illustrative sketch of this sweep appears below. Weight, width, length, and connlike produced clear changes in fiber density and length, whereas changes to the chemPot2 and chemPot1 values had almost no effect on fiber density and length, practically unnoticeable in the latter case. We therefore selected the first 4 parameters plus chemPot2 (renamed chemPot) for optimization.

Data availability

The optimization code is publicly available on GitHub and can be adapted to other fiber tracking algorithms, data sources, and objective functions. The global tracking algorithm is also publicly available. Datasets (neural tracer, dMRI, standard brain, atlas, neural tracer connectome, masks) will be made available through the Brain/MINDS project data portal in the near future. All other data presented in this study are available from the corresponding author upon request.
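As a rough illustration of the single-parameter pre-evaluation above, the sweep can be organized as follows. The run_global_tracking wrapper and the default values are hypothetical placeholders rather than the interface of the actual global tracking software.

    import numpy as np

    # Hypothetical default values, for illustration only.
    DEFAULTS = {"weight": 0.1, "width": 1.0, "length": 1.0,
                "connlike": 0.5, "chemPot1": 0.0, "chemPot2": 0.0}

    def sensitivity_sweep(run_global_tracking, param, values, brains):
        # Vary one parameter, keep the others at their defaults, and record
        # the fiber count and mean fiber length for each brain.
        # run_global_tracking(brain, params) is a hypothetical wrapper that
        # returns the list of reconstructed fiber lengths for one run.
        results = {}
        for v in values:
            params = dict(DEFAULTS, **{param: v})
            results[v] = []
            for brain in brains:
                lengths = run_global_tracking(brain, params)
                results[v].append((len(lengths), float(np.mean(lengths))))
        return results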
Furthermore, the algorithms struggle to detect nerve fibers that stretch between remote regions of the brain. Yet these long-distance connections are some of the most important for understanding how the brain functions, Dr. Gutierrez said.

[Image caption: Green represents nerve fibers detected by injecting a fluorescent tracer at a single point; red represents nerve fibers detected using a diffusion MRI-based fiber-tracking algorithm, showing only the fibers that also connected to the point where the tracer was injected; yellow represents nerve fibers detected by both techniques. The results show that the optimized algorithm performed better than the default algorithm, not only on a brain it was trained on but also on a previously unseen brain, detecting a higher number of fibers as well as fibers stretching longer distances. Credit: OIST]

In 2013, scientists launched a Japanese government-led project called Brain/MINDS (Brain Mapping by Integrated Neurotechnologies for Disease Studies) to map the brains of marmosets—small nonhuman primates whose brains have a similar structure to human brains. The Brain/MINDS project aims to create a complete connectome of the marmoset brain by using both the non-invasive MRI imaging technique and the invasive fluorescent tracer technique.

"The data set from this project was a really unique opportunity for us to compare the results from the same brain generated by the two techniques and determine what parameters need to be set to generate the most accurate MRI-based connectome," said Dr. Gutierrez.

In the current study, the researchers set out to fine-tune the parameters of two widely used algorithms so that they would reliably detect long-range fibers. They also wanted to make sure the algorithms identified as many fibers as possible while minimizing the detection of fibers that were not actually present.

Instead of trying out all the different parameter combinations manually, the researchers turned to machine intelligence. To determine the best parameters, they used an evolutionary algorithm. The fiber-tracking algorithm estimated the connectome from the diffusion MRI data using parameters that changed—or mutated—in each successive generation. Those parameters competed against each other, and the best parameters—the ones that generated connectomes most closely matching the neural network detected by the fluorescent tracer—advanced to the next generation. The researchers tested the algorithms using fluorescent tracer and MRI data from ten different marmoset brains.

But choosing the best parameters wasn't simple, even for machines, the researchers found. "Some parameters might reduce the false positive rate but make it harder to detect long-range connections. There's conflict between the different issues we want to solve. So deciding what parameters to select each time always involves a trade-off," said Dr. Gutierrez.

[Image caption: (Top left) All the estimated fibers in the whole brain of a marmoset, produced by a diffusion MRI-based fiber-tracking algorithm with the generic set of optimized parameters. (Top right) The same marmoset brain, with the connectome generated by the same algorithm using default parameters; there are noticeably fewer fibers. (Bottom) Two matrices showing the strength of connection (density of nerve fibers) between pairs of brain regions: the left matrix shows that the algorithm with the generic set of optimized parameters detected a higher density of nerve fibers connecting the brain regions, while the right matrix shows that the default algorithm detected a much lower density. Credit: OIST]

Throughout the multiple generations of this "survival-of-the-fittest" process, the algorithms running for each brain exchanged their best parameters with each other, allowing them to settle on a more similar set of parameters. At the end of the process, the researchers took the best parameters and averaged them to create one shared set.

"Combining parameters was an important step. Individual brains vary, so there will always be a unique combination of parameters that works best for one specific brain. But our aim was to come up with the best generic set of parameters that would work well for all marmoset brains," explained Dr. Gutierrez.

The team found that the algorithm with the generic set of optimized parameters also generated a more accurate connectome in new marmoset brains that were not part of the original training set, compared with the default parameters used previously.

The striking difference between the images constructed by algorithms using the default and optimized parameters sends out a stark warning about MRI-based connectome research, the researchers said. "It calls into question any research using algorithms that have not been optimized or validated," cautioned Dr. Gutierrez.

In the future, the scientists hope to make the process of using machine intelligence to identify the best parameters faster, and to use the improved algorithm to more accurately determine the connectome of brains with neurological or mental disorders. "Ultimately, diffusion MRI-based fiber tracking could be used to map the whole human brain and pinpoint the differences between healthy and diseased brains," said Dr. Gutierrez. "This could bring us one step closer to learning how to treat these disorders." | 10.1038/s41598-020-78284-4